--- abstract: 'Reentrant spin glasses are frustrated disordered ferromagnets developing vortex-like textures under an applied magnetic field. Our study of a Ni$_{0.81}$Mn$_{0.19}$ single crystal by small angle neutron scattering clarifies their internal structure and shows that these textures are randomly distributed. Spin components transverse to the magnetic field rotate over length scales of 3-15 nm in the explored field range, decreasing as field increases according to a scaling law. Monte Carlo simulations reveal that the internal structure of the vortices is strongly distorted and differs from that assumed for “frustrated” skyrmions, built upon a competition between symmetric exchange interactions. Isolated vortices carry a small, non-integer topological charge. The vortices keep an anisotropic shape on a three-dimensional lattice, recalling “croutons” in a “ferromagnetic soup”. Their size and number can be tuned *independently* by the magnetic field and concentration $x$ (or heat treatment), respectively. This opens an original route to understand and control the influence of quenched disorder in systems hosting non-trivial spin textures.' author: - 'I. Mirebeau' - 'N. Martin' - 'M. Deutsch' - 'L. J. Bannenberg' - 'C. Pappas' - 'G. Chaboussant' - 'R. Cubitt' - 'C. Decorse' - 'A. O. Leonov' title: 'Spin Textures induced by Quenched Disorder in a Reentrant Spin Glass: Vortices versus “Frustrated” Skyrmions' --- Disorder plays a central role in the advent of the most spectacular quantum phenomena observed in condensed matter. The quantum Hall effect observed in a two-dimensional (2d) electron gas[@Klitzing1980; @Thouless1982], the two-current character of the resistivity in impurity-containing ferromagnetic metals[@Fert1968] leading to giant magneto-resistance [@Baibich1988] or the dissipationless conduction observed in the mixed state of type II superconductors[@LeDoussal1998; @Klein2001] are prominent examples. 
Frustrated ferromagnets represent another type of playground to study the influence of disorder. Such systems show competing ferromagnetic (FM)/antiferromagnetic (AFM) interactions combined with atomic disorder. The influence of quenched disorder, when treated in a mean field model with infinite range interactions[@Sherrington1975; @Gabay1981], leads to a canonical spin glass (SG) when the average interaction $\bar{J}$ is smaller than the width of the interaction distribution or to a reentrant spin glass (RSG) otherwise. Here, we focus on the FM case ($\bar{J} > 0$) of the RSGs where vortex-like textures are stabilized under an applied magnetic field at low temperature. We study their morphology and spatial organization by combining neutron scattering experiments on a Ni$_{0.81}$Mn$_{0.19}$ single crystal and Monte Carlo simulations. We compare them with those expected for skyrmions built upon a competition between symmetric exchange interactions. Altogether, our study shows that one can *independently* tune the number and size of vortex textures in frustrated disordered magnets with the magnetic field, heat treatment and concentration of magnetic species. It provides clues to control and use the influence of quenched disorder in frustrated ferromagnets and skyrmion-hosting systems in bulk state. Reentrant spin glasses and “frustrated” skyrmions ================================================= As a common feature, RSGs show three successive phase transitions upon cooling: a paramagnetic to FM transition at [T$_{\rm C}$]{} followed by transitions towards two mixed phases at [T$_{\rm K}$]{} and [T$_{\rm F}$]{}. Below the canting temperature [T$_{\rm K}$]{} spin components $\mathbf{m}_{\rm T}$ transverse to the longitudinal magnetization $\mathbf{m}_{\rm L}$ start to freeze. The lower temperature [T$_{\rm F}$]{} marks the onset of strong irreversibilities of $\mathbf{m}_{\rm L}$. 
In this picture, the ferromagnetic long range order of $\mathbf{m}_{\rm L}$ is preserved in the RSG down to T $\rightarrow 0$K. The phase diagram (T, $x$), where $x$ is a parameter tuning the distribution of interactions, shows a critical line between the SG and RSG phases, ending at a multicritical point at $x_{\rm C}$ where all phases collapse[@Gabay1981]. Metallic ferromagnetic alloys with competing nearest-neighbor interactions tuned by the concentration $x$ show a magnetic phase diagram (T, $x$) in qualitative agreement with mean field predictions. Well-known examples are Ni$_{1-x}$Mn$_x$[@Abdul-Razzaq1984], Au$_{1-x}$Fe$_x$[@Campbell1983], Fe$_{1-x}$Al$_x$[@Motoya1983; @Boeni1986] and Fe$_{1-x}$Cr$_x$[@Burke1983] crystalline alloys or amorphous Fe-based alloys [@Salamon1980; @Birgeneau1978; @Fernandez-Baca1990; @Senoussi1988a; @Senoussi1988b]. A large body of experimental and theoretical studies has revealed the peculiarities of their magnetic behavior. In this paper, we focus on vortex-like textures observed in the 1980s in the above systems, either in single crystal, polycrystal or amorphous form [@Hennion1986; @Boeni1986; @Lequien1987; @Hennion1988]. They were detected under an applied magnetic field in the mixed phases of ferromagnetic, weakly frustrated alloys ($x$ $\ll$ $x_{\rm C}$), using small angle neutron scattering (SANS), which provides a clear signature of these textures and reveals their typical size. Inside the vortices, the transverse spin components are frozen in the plane perpendicular to the applied field and rotate over a finite length scale, yielding a maximum in the neutron scattering cross section versus the momentum transfer. In addition, the transverse spin freezing induces Dzyaloshinskii-Moriya (DM) anisotropy[@Campbell1986], together with a chiral anomalous Hall effect[@Tatara2002; @Pureur2004; @Fabris2006]. 
Stimulated by these measurements, Monte Carlo (MC) simulations were performed in a 2d lattice, showing similar vortex-like patterns[@Kawamura1991]. The knowledge of their spatial organization has however remained elusive. In this context, it is worth recalling that ferromagnets may also host nanometric spin textures known as skyrmions (SKs). SKs form double-twist solitonic structures, offering many perspectives in spintronics and data storage[@Fert2013; @Nagaosa2013]. As predicted by theory[@Okubo2012; @Leonov2015; @Lin2016; @Rozsa2016; @Hu2017], some anisotropic ordered magnets with competing nearest neighbor (NN) and next nearest neighbor (NNN) exchange interactions may host localized SKs with versatile internal structure and smooth rotation of the magnetization. Different types of modulated phases such as hexagonal or square SK lattices have been predicted, yielding a very rich phase diagram [@Leonov2015]. The size of these “frustrated” SKs, of the order of a few lattice constants, is comparable to the typical vortex size in RSGs and much smaller than the size of chiral SKs stabilized by DM anisotropy in thin films or bulk state, which is usually above 10 nm [@Leonov2016; @Leonov2016b]. Therefore, quenched disorder should affect frustrated SKs much more than their chiral counterparts, expected to undergo a collective pinning by disordered impurities without deep changes of their internal structure [@Hoshino2018]. Experimentally, large SK lattices were observed in non-centrosymmetric frustrated alloys with chemical disorder [@Tokunaga2015; @Nayak2017], showing magnetic anomalies similar to the RSG’s. Frustrated SKs have been suspected in very few systems so far, such as Gd$_{2}$PdSi$_{3}$ (Ref. ). Remarkably, frustrated SKs reveal strong similarities with the vortex textures observed in RSGs. Our study attempts to clarify the subtle differences between these two types of topological defects. 
To that end, we report on new experiments performed on a weakly frustrated Ni$_{0.81}$Mn$_{0.19}$ single crystal, searching for a vortex lattice and aiming for a better characterization of these field-induced magnetic textures (Section \[sec:sectwo\]). Our experiments are complemented by MC simulations with a minimal model, which clarifies the internal structure of the vortices and identifies their most relevant features (Section \[sec:secthree\]). We discuss the origin of the vortex textures, and compare them with SKs, either chiral or frustrated, observed in bulk materials (Section \[sec:secfour\]). Vortex-like textures in a single crystalline reentrant spin glass {#sec:sectwo} ================================================================= The Ni$_{1-x}$Mn$_x$ system and studied sample ---------------------------------------------- ![image](fig1.eps){width="98.00000%"} In Ni$_{1-x}$Mn$_x$ alloys, magnetic frustration arises from competing interactions between NN pairs, namely the AFM Mn-Mn pairs and the FM Ni-Mn and Ni-Ni pairs[@Marcinkowski1961; @Cable1974]. The NNN Mn-Mn pairs are FM. The multi-critical line between RSG and SG phases is located around $x_{\rm C} = 0.24$, close to the stoichiometric Ni$_3$Mn (see Refs. and Fig. \[fig:phasediagram\]a). Strikingly, the Ni$_3$Mn ordered superstructure of $L1_2$ type and space group $Pm\bar{3}m$ eliminates all NN Mn-Mn pairs. This offers the possibility of tuning the magnetic order by controlling the number of such pairs through an appropriate heat treatment [@Yokoyama1976; @Okazaki1995; @Stanisz1989]. The fully ordered Ni$_3$Mn is a ferromagnet with a Curie temperature [T$_{\rm C}$]{} $\sim$ 450K, whereas a disordered alloy of the same composition (space group $Fm\bar{3}m$) is a spin glass with a freezing temperature [T$_{\rm F}$]{} $\sim$ 115K. Here, we study a Ni$_{0.81}$Mn$_{0.19}$ single crystal, already used for the neutron scattering experiments presented in Ref. . 
The single crystal form limits the distributions of magnetocrystalline anisotropies and demagnetizing fields within the sample, and provides the best playground to search for a vortex lattice. A thin rectangular plate was cut from the large crystal in a (110) plane for magnetic measurements. Both samples were heated at 900 $^{\circ}$C for 20 hours in a sealed quartz tube under vacuum, then quenched into an ice and water mixture to ensure maximal disorder[@Abdul-Razzaq1987]. They were stored in liquid nitrogen between experiments to prevent any further evolution of the short range order. Static magnetic susceptibility was measured versus temperature under a field H = 20 Oe in both field cooled (FC) and zero field cooled (ZFC) conditions, using a superconducting quantum interference device (SQUID). With decreasing temperature, the ZFC susceptibility strongly increases at the Curie temperature [T$_{\rm C}$]{} = 257K, shows a plateau over an extended temperature range as expected for weakly frustrated RSGs, and then decreases (Fig. \[fig:phasediagram\]b). The freezing temperature [T$_{\rm F}$]{} = 18K, defined similarly to [T$_{\rm C}$]{} by the inflection point of the susceptibility versus temperature in the ZFC state, marks the onset of *strong* magnetic irreversibilities. The ratio T$_{\rm F}$/T$_{\rm C} \simeq 0.07$ characterizes the weak frustration of our sample. The canting temperature [T$_{\rm K}$]{} $\sim$ 120K, which lies between [T$_{\rm C}$]{} and [T$_{\rm F}$]{}, signals much weaker irreversibilities related to transverse spin freezing. It was determined by previous neutron scattering experiments[@Lequien1987]. The three characteristic temperatures merge at the critical point. Small-angle neutron scattering ------------------------------ SANS measurements were performed on the D33 instrument of the Institut Laue Langevin (ILL), using an incident neutron wavelength $\lambda = 6~\text{\AA}$ and a sample-to-detector distance D = 2.8 m. 
Data were corrected for the detector efficiency and calibrated cross sections were obtained by taking the sample thickness and transmission, as well as the incident neutron flux, into account[@SM]. A magnetic field H up to 2T was applied to the sample, in two configurations (see Fig. \[fig:schemeconfig\]): *a)* along the neutron beam, which defines the $y$ axis; *b)* along the $x$ axis perpendicular to the neutron beam, namely in a plane parallel to the detector $(x,z)$ plane. Additional measurements were performed in configuration *b)* on the PAXY spectrometer of the Laboratoire Léon Brillouin (LLB) under a magnetic field up to 8T for the same neutron wavelength and sample-to-detector distance. Fig. \[fig:schemeconfig\] shows typical intensity maps recorded in the detector plane for the two configurations. The intensity is measured at 3K in the ZFC state under a magnetic field H = 2T, which almost saturates the sample magnetization (Fig. \[fig:phasediagram\]c). ![image](fig2.eps){width="98.00000%"} In configuration *a)*, the intensity distribution does not show any Bragg spots but rather a broad maximum at a finite momentum transfer. The intensity is isotropically distributed over a ring of scattering in the detector plane. The absence of any Bragg spots strikingly contrasts with the scattering patterns of SK lattices or superconducting flux line lattices observed in single crystal samples in the same experimental configuration[@Lynn1994; @Muehlbauer2009]. This means that although the sample is single crystalline, the magnetic defects are organized in a random or liquid-like way. As discussed below, this is due to the random occupation of the lattice sites and the subsequent disorder of Mn-Mn NN AFM bonds. In configuration *b)*, one observes a similar pattern, but the intensity is now modulated according to the orientation of the momentum transfer with respect to the applied field, and is strongly enhanced in the direction $\mathbf{q} \, \| \, \mathbf{H}$. 
This modulation comes from the selection rule for magnetic neutron scattering, which imposes that only the spin components perpendicular to the scattering vector $\mathbf{q}$ contribute to the magnetic cross-section. As schematically explained in Fig. \[fig:schemeconfig\], the dominant contribution to the scattering in this configuration arises from spin components $\mathbf{m}_{\rm T}$ transverse to the magnetic field. In the following analysis, we focus on this configuration, which allows us to better characterize the spin textures. The intensity maps in configuration *b)* can be described as $$\begin{aligned} \nonumber \sigma(q,\alpha) &=& \sigma_{\rm L}(q) \cdot \sin^2\alpha \\ &+& \sigma_{\rm T}(q) \cdot (1+\cos^2\alpha)+I_{\rm bg}(q) \quad , \label{eq:configb}\end{aligned}$$ where $\alpha$ is the angle $(\mathbf{q},\mathbf{H})$, $\sigma_{\rm L}(q)$ and $\sigma_{\rm T}(q)$ are the magnetic scattering cross sections related to correlations between longitudinal and transverse spin components, respectively. $I_{\rm bg}(q)$ is an isotropic background which consists of a low-q contribution from crystal inhomogeneities and a constant term which can be calculated exactly, and which is in excellent agreement with experiment (see details in Ref. ). Since Eq. \[eq:configb\] fits the angular dependence of the intensity well, we average the scattering map within two angular sectors of 60$^{\circ}$: sector 1 for $\mathbf{q} \, \| \, \mathbf{H}$ ($\alpha$ = 0$^{\circ}$) and sector 2 for $\mathbf{q} \perp \mathbf{H}$ ($\alpha$ = 90$^{\circ}$) (see Fig. \[fig:sans\_setup\] a,b and c). We then combine the intensities from the two sectors to separate the contributions from the transverse and the longitudinal spin components (Fig. \[fig:sans\_setup\]d). As a key result, the intensity from the transverse spin components $\sigma_{\rm T}(q)$ shows a clear maximum in $q$, which arises from the vortex-like textures. 
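Since only two angular sectors are used, recovering $\sigma_{\rm L}$ and $\sigma_{\rm T}$ from Eq. \[eq:configb\] amounts to solving a 2$\times$2 linear system. A minimal Python sketch on synthetic curves (idealized sectors at exactly $\alpha = 0^{\circ}$ and $90^{\circ}$, arbitrary line shapes and background; not the measured data) could read:

```python
import numpy as np

# In configuration b), sector 1 (q || H, alpha = 0) measures
#   I_par(q)  = 2*sigma_T(q) + I_bg(q)            [sin^2 = 0, 1 + cos^2 = 2]
# and sector 2 (q perp H, alpha = 90 deg) measures
#   I_perp(q) = sigma_L(q) + sigma_T(q) + I_bg(q) [sin^2 = 1, 1 + cos^2 = 1],
# so the two cross sections follow from a linear combination of the sectors.

def separate_cross_sections(I_par, I_perp, I_bg):
    """Recover transverse/longitudinal cross sections from sector averages."""
    sigma_T = 0.5 * (I_par - I_bg)
    sigma_L = I_perp - I_bg - sigma_T
    return sigma_T, sigma_L

# synthetic check: build the sector intensities from known cross sections
q = np.linspace(0.01, 0.2, 50)                   # momentum transfer (arb. units)
sigma_T_true = np.exp(-(q - 0.08) ** 2 / 0.001)  # peaked transverse signal
sigma_L_true = 1.0 / (1.0 + (q / 0.03) ** 2)     # small-angle longitudinal signal
I_bg = 0.1 * np.ones_like(q)                     # flat background

I_par = 2.0 * sigma_T_true + I_bg
I_perp = sigma_L_true + sigma_T_true + I_bg

sigma_T, sigma_L = separate_cross_sections(I_par, I_perp, I_bg)
```

In practice the finite 60$^{\circ}$ sector width mixes the two terms slightly; the sketch ignores this for clarity.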
As shown below, the FM correlated transverse spin components rotate over a finite length scale to compensate the transverse magnetization, yielding negligible intensity at $q = 0$ and a maximum related to the vortex size. When the field increases, the maximum intensity decreases and its position moves towards high q values (Fig. \[fig:scaling\]b). A signal from the transverse spin components is observed up to the highest field of 8T. On the other hand, the intensity from the longitudinal spin components $\sigma_{\rm L}(q)$ shows no well-defined maximum at $q \neq 0$ (Fig. \[fig:scaling\]a). Above 2T, it becomes very small and difficult to separate from the background contribution[@SM]. In a first step, the transverse cross section was fitted by the phenomenological expression $$\begin{aligned} \nonumber \sigma_{T} (q) &=& \frac{\sigma_{\rm M} \, \kappa \, q}{2\pi q_{\rm 0}} \cdot \left(\frac{1}{\kappa^2+\left(q-q_{\rm 0}\right)^2}-\frac{1}{\kappa^2+\left(q+q_{\rm 0}\right)^2}\right)\\ &+& \frac{I_{\rm bg}(q)}{2} \quad , \label{eq:sq_transverse}\end{aligned}$$ where the first term accounts for the observed peak in the scattering cross section while the second one is related to the background. From Eq. \[eq:sq\_transverse\], one can extract the peak position $q_{\rm max} = \sqrt{q_{\rm 0}^{2}+\kappa^{2}}$ and the integrated cross section $\sigma_{\rm M}$. As shown in Fig. \[fig:scaling\]c-f, these quantities vary continuously with the magnetic field. ![image](fig3.eps){width="98.00000%"} ![image](fig4.eps){width="98.00000%"} To interpret these results, we take into account the liquid-like order of the defects in analogy with chemical inhomogeneities. 
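The fit of Eq. \[eq:sq\_transverse\] can be sketched with a standard least-squares routine; the example below uses `scipy.optimize.curve_fit` on synthetic data generated from the model itself with arbitrary parameters, and extracts the peak position $q_{\rm max} = \sqrt{q_{0}^{2}+\kappa^{2}}$.

```python
import numpy as np
from scipy.optimize import curve_fit

# Phenomenological peak shape of Eq. (sq_transverse): a difference of two
# Lorentzians centred at +q0 and -q0, plus a flat background term.
def sigma_T_model(q, sigma_M, kappa, q0, bg):
    lor = (1.0 / (kappa**2 + (q - q0)**2)
           - 1.0 / (kappa**2 + (q + q0)**2))
    return sigma_M * kappa * q / (2.0 * np.pi * q0) * lor + bg

# synthetic "measurement" with known parameters (arbitrary units)
q = np.linspace(0.005, 0.25, 200)
data = sigma_T_model(q, sigma_M=5.0, kappa=0.02, q0=0.06, bg=0.05)

popt, _ = curve_fit(sigma_T_model, q, data, p0=[1.0, 0.01, 0.05, 0.0])
sigma_M, kappa, q0, bg = popt
q_max = np.sqrt(q0**2 + kappa**2)   # peak position, as defined in the text
```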
Having fitted and subtracted the background term, we express the scattering cross section as $$\begin{aligned} \nonumber \sigma_{T} (q) &=&a\, \Delta \rho_{\rm mag}^{2} \, N_{\rm d} \, V_{\rm d}^{2}\\ &\times& \left\{\langle F_{\rm T}^{2}(q)\rangle - \langle F_{T}(q) \rangle^{2} \cdot \left[ 1 - S_{\rm int}(q) \right] \right\} \quad , \label{eq:iq_transverse}\end{aligned}$$ where $F_{\rm T}(q)$ is the normalized form factor of the defects, associated with transverse spin components, and $S_{\rm int}(q)$ is an interference function which takes into account the local correlations between two defects. In Eq. \[eq:iq\_transverse\], $\langle\rangle$ denotes the statistical average over the sample. $\Delta \rho_{\rm mag} \simeq \left|\mathbf{m}_{\rm T}\right|$ is the magnetic contrast between a vortex (where $\left|\mathbf{m}_{\rm T}\right| \neq 0$) and the surrounding ferromagnetic region (where $\left|\mathbf{m}_{\rm T}\right| \rightarrow 0$). $N_{\rm d}$ and $V_{\rm d}$ are respectively the number of vortices and their volume, and $a$ is a constant. In the following, we neglect the local magnetic interaction between defects. This assumption of independent objects is justified for a weakly frustrated system where the vortex centers are randomly distributed and located far away from each other ([*i.e.*]{} $S_{\rm int}(q) = 1$ in Eq. \[eq:iq\_transverse\]). This assumption also holds for a system with concentrated defects, taking into account the specific form factor of the magnetic vortices and the random orientation of the transverse spin components from one vortex to another ([*i.e.*]{} $\langle F_{T}(q) \rangle =0$ in Eq. \[eq:iq\_transverse\]). It is confirmed by analytical calculations of model form factors [@SM] and by MC simulations reported in Section \[sec:secthree\]. 
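The second assumption, $\langle F_{T}(q) \rangle = 0$ in Eq. \[eq:iq\_transverse\], can be illustrated with a toy numerical average (hypothetical unit-amplitude form factors at a single $q$ value; not a calculation from the paper):

```python
import numpy as np

# Each vortex carries transverse moments along a random in-plane direction
# phi, so its (complex, orientation-dependent) form factor at a given q can
# be modelled as F = |F_T(q)| * exp(i*phi).  Averaging over many vortices
# kills <F> but not <|F|^2>, which suppresses the interference term.

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 20000)   # random transverse orientations
F = np.exp(1j * phi)                          # unit amplitude |F_T(q)| = 1

mean_F = abs(F.mean())                        # -> 0 for many defects
mean_F2 = np.mean(np.abs(F) ** 2)             # stays equal to 1
```

The vector average thus vanishes as $1/\sqrt{N_{\rm d}}$ while the squared average survives, justifying the neglect of the interference term.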
For independent defects, the $q$-dependence of the neutron intensity reduces to that of the average squared form factor, and the position $q_{\rm max}$ of the intensity maximum is inversely proportional to the typical size of the vortices. The integrated intensity $\sigma_{\rm M}$ is proportional to $\Delta \rho_{\rm mag}^{2} \, N_{\rm d} \, V_{\rm d}^{2}$, according to Eq. \[eq:iq\_transverse\]. As a toy model, we have considered regular vortices of radius $r_{\rm d}$ having an antiferromagnetic core[@SM]. The squared form factor averaged over all orientations for the transverse components has an asymmetric line shape similar to the experimental one, with a maximum at $q_{\rm max} = \pi / r_{\rm d}$. Therefore, taking into account corrections for the demagnetization factor, the field dependence of $q_{\rm max}$ reflects the decrease of the typical vortex radius $r_{\rm d} = \pi / q_{\rm max}$ with increasing field[@SM]. Over the explored field range, $r_{\rm d}$ obeys the simple relation $r_{\rm d} \propto H^{-1/2}$ (Fig. \[fig:scaling\]e). The corresponding variation $\sigma_{\rm M} \propto H^{-1/2}$ suggests that the evolution of the defect shape versus the magnetic field occurs in a self-similar way, yielding scaling laws for the position, width and intensity of the magnetic signal (Fig. \[fig:scaling\]b). Such laws are actually quite general and, for instance, govern the evolution of the cluster size with annealing time in metallic alloys which tend to segregate when quenched into the region of spinodal decomposition [@Hennion1982]. Using Eq. \[eq:iq\_transverse\], one can also infer the field-dependence of the number of defects ([*i.e.*]{} scattering centers) seen by SANS from the quantity $\sigma_{\rm M} / V_{\rm d}^{2}$. For this purpose we assume thin cylindrical defects, and consider an experimental field range $H \ll J$ where the magnetic contrast (or amplitude of the transverse spin component) is roughly field-independent. 
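As an aside, the scaling analysis reduces to a power-law fit of $r_{\rm d} = \pi / q_{\rm max}$ versus $H$ on a log-log scale. A sketch on synthetic peak positions (generated with $\beta = 0.5$; the field values and prefactor are illustrative, not the measured ones):

```python
import numpy as np

# Power-law fit r_d ~ H^(-beta) on a log-log scale.  The peak positions
# below are synthetic, built with beta = 0.5 and an arbitrary prefactor,
# to illustrate the extraction of the scaling exponent from q_max(H).

H = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # applied field (T), illustrative
q_max = 0.21 * np.sqrt(H)                 # peak position (nm^-1), synthetic
r_d = np.pi / q_max                       # vortex radius, r_d = pi / q_max

slope, _ = np.polyfit(np.log(H), np.log(r_d), 1)
beta = -slope                             # recovered exponent, close to 0.5
```

With the chosen prefactor the synthetic radii span roughly 5-15 nm, comparable to the range quoted in the abstract.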
We obtain $V_{\rm d} \simeq r_{\rm d}^{2}$, thus $N_{\rm d} \simeq \sigma_{\rm M} / r_{\rm d}^{4}$. As shown in Fig. \[fig:scaling\]f, $N_{\rm d}$ *increases* with increasing field and saturates at a finite field of $\simeq$ 6 T. This variation is described by a stretched exponential $$\label{eq:stretched_exp} N_{\rm d} = 1 - \exp\left[-\left(\frac{H}{H_{\rm C}}\right)^{\nu}\right]$$ with $H_{\rm C} = 2.29(3)\,$T and $\nu = 1.64(4)$. This result can be understood as follows. At low fields, vortices are large enough to involve several AFM bonds. Upon an increase in field, they progressively shrink while remaining centred on isolated AFM first neighbor Mn-Mn pairs[@SM], the number of which is fixed by the Mn concentration and heat treatment. Consequently, the number of individual defects $N_{\rm d}$ seen by SANS will *increase*. At higher fields, however, $N_{\rm d}$ should decrease until all defects have collapsed for fields strong enough to overcome the typical AFM exchange interaction. We indeed observe a slight decrease of $N_{\rm d}$ for $\mu_{\rm 0}H_{\rm int} \gtrsim $ 6 T. However, we note that the field corresponding to the exchange interaction is of the order of several 100T and is thus well beyond our experimental range. In turn, this regime can be conveniently explored numerically. This point is addressed in the next section, where we propose a way to verify the above scenario and extend the exploration of the vortex-like texture properties towards arbitrarily large magnetic fields. Monte Carlo simulations {#sec:secthree} ======================= Numerical studies of the reentrance phenomena and magnetic structures of reentrant spin glasses trace back to the pioneering work of Kawamura and Tanemura[@Kawamura1986; @Kawamura1991]. They showed that a minimal model is able to reproduce the main characteristics of the magnetic textures observed in RSGs. 
Following their approach, we first performed MC simulations on 2d matrices containing $160 \times 160$ Heisenberg spins placed on a square lattice. While the main interaction is assumed to be FM ($J = 1$), a certain fraction $c$ of the bonds is turned into AFM ($J = -1$). Using a spin quench algorithm, the system is relaxed towards its ground state, and vortex-like defects appear as metastable configurations ([*i.e.*]{} with energies slightly higher than that of the bulk FM state). For the studied concentrations $c = 5$ and $20\,\%$, individual defects (similar to vortices or pairs of vortices) are observed, all of them being centred around the randomly distributed AFM NN pairs (see Fig. \[fig:mc\_results\_rspace\] for the 5% case and Ref. for further details). In all cases, the average topological charge is $Q = 0$, but individual objects locally display a finite $Q$, in some cases as large as 0.3 ([*i.e.*]{} values similar to those found for certain types of frustrated SKs[@Leonov2015]). The origin of the non-integer charge is clarified by considering the relatively small size of the defects as well as their irregular shapes and distorted magnetization profiles, related to the ill-defined boundaries between vortices and the ambient FM medium. In other words, the vortex-like textures stabilized under field in RSGs feature both senses of the vector chirality, resulting in a smaller topological charge than in frustrated or chiral SKs (for which $Q = \pm 1$). When the MC simulations are extended to a 3d spin matrix, the vortex-like textures keep their anisotropic shape (oblate along the field direction) and can thus be dubbed “croutons”. As shown qualitatively in the maps displayed in Fig. \[fig:mc\_results\_rspace\]a-c, the average number of defects decreases with increasing field $H$ while spins are progressively aligned along its direction. The computed magnetization $m$ (Fig. 
\[fig:mc\_results\_rspace\]e) increases as the number of vortices decreases, showing a quasi plateau with finite slope versus the ratio $H/J$. At very high fields, of magnitude comparable to the exchange constant $J$, a prediction of the MC modeling is that the vortices should collapse individually, yielding microscopic plateaus of $m$, the amplitude of which is probably too small to be experimentally observed. In order to compare these results with the SANS experiments of Section \[sec:sectwo\], we have computed the Fourier transforms of the longitudinal and transverse spin components. The longitudinal cross section $\sigma_{\rm L}$ decreases monotonically with increasing $q$ (Fig. \[fig:mc\_results\_qspace\]a,b) whereas the transverse cross section $\sigma_{\rm T}$ shows a broad asymmetric peak (Fig. \[fig:mc\_results\_qspace\]c,d). Both quantities become almost $q$-independent at large $q$ values. When the field increases, the magnitude of the two simulated cross sections decreases, and a fit of Eq. \[eq:sq\_transverse\] to the simulated $\sigma_{\rm T}$-curves shows that the position of the maximum $q_{\rm max}$ moves towards larger values, whereas its integrated intensity $\sigma_{\rm M}$ decreases. This evolution reflects a decrease of the vortex size $r_{\rm d}$ with increasing $H$ according to a scaling law (Fig. \[fig:mc\_results\_qspace\]e) and an apparent increase of the number of vortices $N_{\rm d}$ following Eq. \[eq:stretched\_exp\] with fit parameters $H_{\rm C} = 1.05(5) \,J$ and $\nu = 2.8(1)$ (Fig. \[fig:mc\_results\_qspace\]f). Similar to the experimental case, $N_{\rm d}$ is defined as $N_{\rm d} \simeq \sigma_{\rm M} / r_{\rm d}^{4}$, where $r_{\rm d} = a / q_{\rm max}$ with $a$ the lattice constant of Ni$_{0.81}$Mn$_{0.19}$. As discussed below, these results show that a minimal model is able to capture the essential features of the observed textures. 
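A stripped-down version of such a simulation can be sketched as follows (small lattice, illustrative parameters and a simple checkerboard alignment quench, all assumed here for illustration; not the production code of the paper):

```python
import numpy as np

# Minimal 2d "spin quench" sketch in the spirit of the simulations above:
# mostly-FM square lattice (J = +1) with a fraction c of AFM bonds (J = -1),
# field h along z (in units of J), periodic boundary conditions.
rng = np.random.default_rng(1)
L, c, h = 32, 0.05, 0.3

S = rng.normal(size=(L, L, 3))                 # random initial spins
S /= np.linalg.norm(S, axis=-1, keepdims=True)

# bond signs: Jx[i, j] couples (i, j)-(i, j+1); Jy[i, j] couples (i, j)-(i+1, j)
Jx = np.where(rng.random((L, L)) < c, -1.0, 1.0)
Jy = np.where(rng.random((L, L)) < c, -1.0, 1.0)

def local_field(S):
    """Molecular field on each site: four exchange bonds plus Zeeman term."""
    hloc = (Jx[..., None] * np.roll(S, -1, axis=1)
            + np.roll(Jx, 1, axis=1)[..., None] * np.roll(S, 1, axis=1)
            + Jy[..., None] * np.roll(S, -1, axis=0)
            + np.roll(Jy, 1, axis=0)[..., None] * np.roll(S, 1, axis=0))
    hloc[..., 2] += h
    return hloc

# checkerboard quench: align each sublattice with its local field in turn
mask = (np.indices((L, L)).sum(axis=0) % 2).astype(bool)
for _ in range(300):
    for sub in (mask, ~mask):
        hloc = local_field(S)
        S[sub] = (hloc / np.linalg.norm(hloc, axis=-1, keepdims=True))[sub]

m_L = S[..., 2].mean()                     # longitudinal magnetization
m_T = np.linalg.norm(S[..., :2], axis=-1)  # map of transverse amplitudes
```

In the quenched state, the transverse amplitude map `m_T` may remain nonzero near frustrated regions, mimicking the “croutons”; the cross sections discussed above would then follow from Fourier-transforming the transverse and longitudinal components.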
![image](fig5.eps){width="98.00000%"} ![image](fig6.eps){width="98.00000%"} Discussion {#sec:secfour} ========== Spin textures in a reentrant spin glass: the “crouton” picture -------------------------------------------------------------- The MC simulations presented above closely reproduce the experimental observations, as shown by:

1. The shape of the magnetization curve with a finite slope at large fields (compare Fig. \[fig:phasediagram\]c and \[fig:mc\_results\_rspace\]e),

2. The existence of defects over which the transverse magnetization is self-compensated, yielding a peak of $\sigma_{T}$ at a finite $q$-value. The asymmetric $q$-dependence of $\sigma_{T}$ is also reproduced, suggesting similar internal structures of the defects (compare Fig. \[fig:scaling\]b and \[fig:mc\_results\_qspace\]c,d),

3. The persistence of inhomogeneities of the magnetization at the scale of the vortex size, deep inside the RSG phase, as shown by the finite longitudinal cross section $\sigma_{L}$ centered around $q = 0$ (compare Fig. \[fig:scaling\]a and \[fig:mc\_results\_qspace\]a,b),

4. The field-dependence of the defect size $r_{\rm d}$ (obtained from the $q$-position of the peak in $\sigma_{T}$), obeying scaling laws $r_{\rm d} \propto H^{-\beta}$ with the same exponent $\beta = 0.5$ (compare Fig. \[fig:scaling\]e and \[fig:mc\_results\_qspace\]e),

5. The field-dependence of the number of individual defects $N_{\rm d}$, increasing as a function of field following the phenomenological Eq. \[eq:stretched\_exp\], before reaching saturation (compare Fig. \[fig:scaling\]f and \[fig:mc\_results\_qspace\]f),

6. The robustness of the defects, surviving up to very large fields as compared with usual magnetic SKs (compare Fig. \[fig:scaling\]f and \[fig:mc\_results\_qspace\]f, and see Ref. for a detailed discussion). 
Therefore, the simulations strongly support a description of the magnetic defects observed in Ni$_{0.81}$Mn$_{0.19}$ as “crouton-like” defects, induced by AFM Mn-Mn first neighbor pairs, where the transverse spin components are ferromagnetically correlated and rotate to compensate the transverse magnetization. Their magnitude decreases from the vortex center to the surroundings to accommodate the average ferromagnetic medium. As discussed below, such a defect shape is compatible with the interactions generally considered for the RSGs, although other defect textures could in principle be compatible with the experiment. ![image](fig7.eps){width="90.00000%"} The main difference between the experiment and the MC simulations is the field value at which the number of individual defects $N_{\rm d}$ starts saturating ($H \ll J$ and $\sim J$, respectively). This suggests that an accurate determination of their stability range requires a more complex modeling, which is well beyond the scope of the present work. Indeed, the experimental situation is complicated, involving different moments on Ni and Mn ions, three types of interactions, a 3d lattice with high connectivity, a high concentration of magnetic species, and an atomic short-range order (SRO). Therefore, many different local environments and moment values exist in the experimental system. Comparatively, the simulations are based on a very simple case, namely a 2d square lattice with a random distribution of AFM bonds involving a single exchange constant. Despite these differences, we stress that the agreement between both approaches is surprisingly good. Let us outline several reasons for this. Firstly, the mean field description, which assigns distinct behaviors to the longitudinal and transverse spin components, is valid, as expected for weak frustration. 
The present sample behaves as a weakly frustrated ferromagnet (the ratio T$_{\rm F}/$T$_{\rm C} \simeq 0.07$ can be associated with an effective concentration of AFM bonds of $\simeq$ 0.07 in the mean field approximation[@SM]), although the concentration of first neighbor isolated Mn-Mn pairs is relatively high (in the 0.2-0.4 range depending on the amount of SRO). Experiments varying the degree of frustration through the Mn content or heat treatment could test the validity of this description when approaching the critical point which separates the RSG and SG phases. Secondly, both methods involve *a statistical average of different types of defects* which do not interact with each other, but all have a typical size governed by general stability equations. This typical size is dictated by the competition between ferromagnetic exchange ($E = J k^2$) and Zeeman energy, and it is expected to vary as $k^{-1} \propto (J/H)^{0.5}$, hence $r_{\rm d} \propto H^{-0.5}$, as observed experimentally and in the simulations. Such a general law also controls the extension of Bloch walls[@Rado1982] or soliton defects[@Steiner1983], among others. Our findings also suggest that the 2d lattice provides a relevant description of the real case due to the peculiar crouton shape, with a much larger extension in the transverse plane than along the field axis. In 2d-XY antiferromagnets, spontaneous vortices are stabilized and undergo a Kosterlitz-Thouless transition with temperature, involving spontaneous symmetry breaking at a local scale [@Kosterlitz1973; @Kosterlitz2017; @Villain2017]. The reentrant transitions have a different nature, but they also involve peculiar symmetry breaking below [T$_{\rm K}$]{} and [T$_{\rm F}$]{}, associated with the Gabay-Toulouse and de Almeida-Thouless lines, respectively[@Gabay1981]. As a major consequence, the transverse spin freezing and emergence of vortices strongly impact the spin excitations. 
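The stability argument invoked above can be written out compactly (schematic, per-spin energies, lattice constant set to unity):

```latex
% exchange cost of a twist with wavevector k versus Zeeman energy scale:
E_{\rm exch} \sim J k^{2}, \qquad E_{\rm Zeeman} \sim H
\quad\Longrightarrow\quad
J k^{2} \sim H, \qquad
r_{\rm d} \sim k^{-1} \propto \left(\frac{J}{H}\right)^{1/2}.
```

Balancing the two energies thus directly yields the $r_{\rm d} \propto H^{-0.5}$ scaling observed in both experiment and simulation.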
A softening of the spin wave stiffness[@Hennion1982; @Hennion1988] occurs below [T$_{\rm K}$]{}, recalling the anomalous sound velocity in glasses[@Black1977; @Michel1987] and the spin wave softening in quasi 2d frustrated antiferromagnets[@Aristov1990]. It is followed by a hardening of the spin waves below [T$_{\rm F}$]{}. Vortex-like textures and skyrmions ---------------------------------- Among the various classes of spin textures[@Toulouse1976], those studied here show clear differences with the Bloch-type skyrmions observed in bulk chiral ferromagnets, which are primarily induced by Dzyaloshinskii-Moriya (DM) anisotropy in non centrosymmetric lattices. Both occur in an average ferromagnetic medium, but the vortex-like textures probed in this study are stabilized at low temperatures, do not form a magnetic lattice and can exist for any crystal symmetry, or even in amorphous compounds. This is because their primary origin is the competition of (symmetric) exchange interactions combined with site disorder, rather than antisymmetric exchange. In frustrated systems, the role of the latter, yielding DM anisotropy of chiral nature, has been investigated both theoretically [@Soukoulis1983; @Gingras1994] and experimentally[@Campbell1986]. DM interactions explain the macroscopic irreversibilities in spin glasses and RSGs[@Senoussi1983], torque measurements and paramagnetic resonance. Under field cooling conditions, they induce an additional magnetic field of unidirectional nature, which explains the slight decrease of the vortex size in NiMn when the sample is field-cooled[@SM; @Mirebeau1988]. However, they play a minor role in the stabilization of the vortex state, as exemplified by the MC results, which describe a bare Heisenberg system. Experimentally, we point out that across the critical concentration, the vortices disappear in the true spin glass phase, while the DM anisotropy hardly changes [@Martin2018]. 
The dissimilarities between vortices and frustrated SKs are more subtle but can be understood by a comparative analysis of their internal structure and topological charge, as deduced from MC simulations. This comparison is made in detail in Ref. , and its main results are shown in Fig. \[skyrmions\]. Essentially, frustrated SKs are predicted in *ordered* anisotropic magnets with competing interactions and inversion symmetry and do not require antisymmetric exchange[@Okubo2012; @Leonov2015; @Lin2016]. Their center corresponds to a magnetic moment $\mathbf{m}$ antiparallel to the applied field ($m_{\rm L} < 0$), which gradually rotates towards the aligned state at the boundary ($m_{\rm L} > 0$). Therefore, they individually possess a large topological charge (+1 or -1) and can form densely packed clusters. These features contrast with the vortex-like textures studied in this work, which are pinned by locally disordered AFM bonds and unable to form extended ordered phases. Since $m_{\rm L}$ can alternate in sign in the vortex core (as controlled by their internal bond structure), the vortices neither bear a smooth rotation of $\mathbf{m}$ nor select a preferred helicity, and their absolute topological charge density is smaller than unity within isolated defects. Although they cannot form ordered phases, the vortices studied in the present work may form a liquid-like order in the limit of small applied magnetic fields and large concentration of AFM bonds. Finally, both vortices and frustrated SKs remain metastable solutions, found at zero or finite temperatures. They are both endowed with a remarkable robustness against the collapse towards the field-induced FM state and do not require well-defined lattice structures to appear. Outlook and Conclusion ====================== Our study suggests that in the general search for SK-hosting systems, the role of disorder should be investigated more intensively. 
Its main consequences are expected in the low energy dynamics, associated with glassy states, which occur whether the average medium is ordered or disordered. Theoretically, the glassy behavior is related to metastable states with a hierarchical structure in the ground state manifold. It can be analyzed in terms of replica field theory, initially developed for spin glasses [@Sherrington1975], then extended to RSGs [@Gabay1981], vortex lattices in superconductors (the Bragg glass phases [@Giamarchi1995]) and very recently to the skyrmion glass phase [@Hoshino2018]. Co-Zn-Mn alloys [@Tokunaga2015], where SK lattices are observed at 300 K and above, are an interesting playground to study such aspects in detail. There, site inversions should lead to local frustration effects and possibly explain the metastable textures observed experimentally. Our work constitutes an experimental illustration of the importance of frustration and disorder for the emergence of localized spin textures in condensed matter. We suggest a simple mechanism for tuning their properties (density, size) by different parameters such as the magnetic field, heat treatment or concentration. This could open a promising route towards the engineering of bulk systems with well-defined sizes and density ranges, for instance the design of vortices by a controllable distribution of bonds. Moreover, while the observed vortices cannot be moved since they are bound to the Mn-Mn pairs, their interaction with electric currents[@Prychynenko2018; @Bourianoff2018] and spin waves[@Continentino1983; @Korenblit1986; @Shender1991] is non trivial. In both cases, the class of frustrated ferromagnets studied in this paper might offer novel ways to encode complex information into electron and heat pulses. Acknowledgements {#acknowledgements .unnumbered} ================ We thank P. Bonville for the SQUID measurements, M. Bonnaud for technical assistance on the D33 spectrometer and S. Gautrot and V. 
Thevenot for their help in setting the experiment on PAXY. A.O.L. acknowledges JSPS Core-to-Core Program, Advanced Research Networks (Japan) and JSPS Grant-in-Aid for Research Activity Start-up 17H06889, and thanks Ulrike Nitzsche for technical assistance. [*Supplementary material*]{} In this supplement, we provide information about the Small-Angle Neutron Scattering (SANS) data analysis (Section \[sec:sans\]) and calculations of model form factors of spin vortices as seen by SANS (Section \[sec:vortices\]). 
In Section \[sec:sans\_t\_and\_h\], we give a brief review of the effects of temperature and cooling field on the observed SANS patterns. Mean-field modeling (Section \[sec:meanfield\]), relevant to our weakly frustrated Ni$_{0.81}$Mn$_{0.19}$ sample, is also discussed. Finally, an extended comparative analysis of the internal structure of the vortex-like defects evidenced in this work and that of frustrated skyrmions is presented (Section \[sec:mc\_sims\]). Neutron scattering data analysis {#sec:sans} ================================ Small-Angle Neutron Scattering (SANS) data have been treated following the usual strategy[@Brulet2007], which we briefly summarize here. In a first step, we have to eliminate the extrinsic background arising from the direct beam tail and the sample environment. This is done by measuring the raw intensity $I_{\rm empty~holder}^{\rm raw}$ obtained with the empty sample holder placed at the sample position. The latter is then subtracted from the scattering pattern to be analyzed, yielding $$I_{\rm i}^{\rm corr} = I_{\rm i}^{\rm raw} - t_{\rm i} \cdot I_{\rm empty~holder}^{\rm raw} \quad , \label{eq:bgsub}$$ where $i$ denotes the studied sample (Ni$_{0.81}$Mn$_{0.19}$) or a calibrant (in the present case, a single crystal of pure Ni). Then, scattered intensities can be converted to cross sections *on an absolute scale* using the expression $$\sigma_{\rm NiMn} (q) = \frac{I_{\rm NiMn}^{\rm corr} \cdot t_{\rm Ni} \cdot d_{\rm Ni} \cdot e_{\rm Ni}}{\langle I_{\rm Ni}^{\rm corr}(q) \rangle \cdot t_{\rm NiMn} \cdot d_{\rm NiMn} \cdot e_{\rm NiMn}} \cdot \sigma_{\rm Ni}^{\rm inc} \quad , \label{eq:absscale}$$ where $t$, $d$ and $e$ denote transmissions, atomic densities and thicknesses, respectively, while $\sigma_{\rm Ni}^{\rm inc} = \frac{5.2}{4\pi}~$barn$\cdot$sr$^{-1}$ is the incoherent scattering cross section of Ni (taken from Ref. ). As shown by the results presented in Fig. 
4 of main text, this strategy allows obtaining cross sections which agree to within a few % between the experiments performed on two different SANS spectrometers (D33 at Institut Laue Langevin[@Dewhurst2008] and PAXY at Laboratoire Léon Brillouin[@PAXYsheet]). To determine the longitudinal and transverse cross sections, we use configuration b), where the magnetic field $\mathbf{H}$ is in the detector plane and $\alpha$ is the angle $(\mathbf{q},\mathbf{H})$. We average the scattering map within two angular sectors of 60$^{\circ}$: sector 1 for $\alpha = 0$ ($\mathbf{q}$ // $\mathbf{H}$) and sector 2 for $\alpha = \pi/2$ ($\mathbf{q} \perp \mathbf{H}$) (see Figs. 2 and 3 of main text). Then we combine the intensities of the two sectors to deduce $\sigma_{\rm T}(q)$ and $\sigma_{\rm L}(q)$. The intensities read: $$\begin{aligned} \begin{split} \sigma_{\rm T}(q)+I_{\rm bg}(q)/2 &= \sigma(\mathbf{q},0) / 2 \quad \label{eq:sigmaT} \end{split}\\ \begin{split} \sigma_{\rm L}(q)+I_{\rm bg}(q)/2 &= \sigma(\mathbf{q},\pi/2) - \sigma(\mathbf{q},0) / 2\quad \end{split} \label{eq:sigmaLT}\end{aligned}$$ In a second step, we discuss the origin of the intrinsic background term $I_{\rm bg}(q)$, which must be evaluated to isolate the vortex cross sections $\sigma_{\rm T}(q)$ and $\sigma_{\rm L}(q)$. This background term originates from the sample itself and is quasi-isotropic in the detector plane. It is well accounted for by the expression: $$I_{\rm bg}(q) = \frac{a}{q^{\rm p}} + b \quad . \label{eq:bg_def0}$$ The first term $\frac{a}{q^{\rm p}}$ dominates in the low-$q$ region ($q \leq 0.025 \text{\AA}^{-1}$) and strongly increases as $q \rightarrow 0$, following a power law. This small-angle scattering originates from long range nuclear and magnetic inhomogeneities, such as crystal dislocations or magnetic domain walls. 
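The sector combination of Eqs. \[eq:sigmaT\]-\[eq:sigmaLT\] amounts to simple algebra. A minimal sketch (with hypothetical intensity values in arbitrary units, assuming $I_{\rm bg}$ is already known) is:

```python
def vortex_cross_sections(sigma_par, sigma_perp, i_bg):
    """Combine the two 60-degree sector averages into transverse and
    longitudinal cross sections, inverting Eqs. (sigmaT)-(sigmaLT):
        sigma(q, 0)    = 2 * sigma_T + I_bg          (q // H sector)
        sigma(q, pi/2) = sigma_L + sigma_T + I_bg    (q perp H sector)
    """
    sigma_t = sigma_par / 2.0 - i_bg / 2.0
    sigma_l = sigma_perp - sigma_par / 2.0 - i_bg / 2.0
    return sigma_t, sigma_l

# round-trip check on synthetic values (arbitrary units)
sigma_t_true, sigma_l_true, i_bg = 0.7, 0.3, 0.59
sigma_par = 2 * sigma_t_true + i_bg               # q // H sector average
sigma_perp = sigma_l_true + sigma_t_true + i_bg   # q perp H sector average
print(vortex_cross_sections(sigma_par, sigma_perp, i_bg))  # recovers ~ (0.7, 0.3)
```

The round trip simply confirms that the two sector averages determine $\sigma_{\rm T}$ and $\sigma_{\rm L}$ unambiguously once $I_{\rm bg}$ is fixed.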
The nuclear part could in principle be estimated by performing measurements at very large fields, where the magnetic contribution is negligible, but the magnetic part is always present and introduces a slight anisotropy in the background intensity. By fitting the low-$q$ tail, we find a power-law exponent $p = 3.2$ in all cases. This value has been fixed in the course of data evaluation. The second term of Eq. \[eq:bg\_def0\] is a $q$-independent contribution arising from incoherent scattering and chemical disorder (namely the Laue scattering) in the sample. It can be calculated exactly, knowing the amount of chemical short range order. This term is expressed as: $$b = \underbrace{(1-x) \cdot \sigma_{\rm Ni}^{\rm inc} + x \cdot \sigma_{\rm Mn}^{\rm inc}}_{\text{Incoherent scattering}} + \underbrace{x \cdot (1-x) \cdot \left( 1 + \sum_{\rm i = 1}^{3} z_{\rm i} \alpha(R_{\rm i}) \right) \cdot \left(b_{\rm Ni}^{\rm coh} - b_{\rm Mn}^{\rm coh}\right)^{2}}_{\text{Disorder scattering}} \quad , \label{eq:bg_def1}$$ where $x$ is the atomic concentration of Mn, $\sigma_{\rm Ni,Mn}^{\rm inc}$ the incoherent scattering cross sections of Ni and Mn, $b_{\rm Ni,Mn}^{\rm coh}$ the coherent scattering lengths of Ni and Mn, $z_{\rm i}$ the number of $i^{\rm th}$ neighbors in the face-centered cubic structure and $\alpha(R_{\rm i})$ the corresponding positional short-range order parameters obtained on Ni$_{\rm 0.8}$Mn$_{\rm 0.2}$[^1] by Cable and Child[@Cable1974] (see Tab. \[tab:sroparams\]). Taken together, we obtain $b = 590$ mbarn. This yields a background level of $b/2 = 295$ mbarn for $\sigma_{\rm L}$ and $\sigma_{\rm T}$, in [*quantitative*]{} agreement with the experiment (see Fig. \[fig:scalingSM\], and Figs. 3e and 4a,b of main text). This excellent agreement shows that the corrections for environmental background and the calibration of the data on an absolute scale are reliable. 
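The value $b = 590$ mbarn can be reproduced from Eq. \[eq:bg\_def1\] using standard tabulated neutron data ($b_{\rm Ni}^{\rm coh} = 10.3$ fm, $b_{\rm Mn}^{\rm coh} = -3.73$ fm, $\sigma_{\rm Ni}^{\rm inc} = 5.2$ barn, $\sigma_{\rm Mn}^{\rm inc} = 0.4$ barn) and the SRO parameters of Tab. \[tab:sroparams\]. A minimal sketch, with all cross sections expressed per steradian:

```python
import math

x = 0.19                      # atomic concentration of Mn
b_ni, b_mn = 10.3, -3.73      # coherent scattering lengths (fm), tabulated values
s_ni, s_mn = 5.2, 0.4         # incoherent cross sections (barn), tabulated values
z = [12, 6, 24]               # fcc coordination numbers (Tab. sroparams)
alpha = [-0.09, 0.07, 0.02]   # positional SRO parameters (Tab. sroparams)

# incoherent contribution, converted to barn per steradian
incoherent = ((1 - x) * s_ni + x * s_mn) / (4 * math.pi)

# Laue (chemical disorder) contribution; 1 barn = 100 fm^2
laue = x * (1 - x) * (1 + sum(zi * ai for zi, ai in zip(z, alpha))) \
       * (b_ni - b_mn) ** 2 / 100.0

b_total = incoherent + laue
print(round(1000 * b_total), "mbarn")  # -> 590 mbarn
```

The incoherent term contributes $\simeq 341$ mbarn and the Laue term $\simeq 248$ mbarn, so the flat background is not dominated by either contribution alone.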
                           $z$    $\alpha(R)$
  ------------------------ ------ -------------
  1$^{\rm st}$ neighbors   $12$   $-0.09$
  2$^{\rm nd}$ neighbors   $6$    $+0.07$
  3$^{\rm rd}$ neighbors   $24$   $+0.02$

  : \[tab:sroparams\]Positional short-range order parameters of Ni$_{\rm 0.8}$Mn$_{\rm 0.2}$, from Ref. .

In summary, to account for the intrinsic background, we can either fit Eq. \[eq:bg\_def0\] to the data with the parameters $p$ and $b$ fixed, or subtract a pattern measured at the highest field of 8 T. In Fig. \[fig:scalingSM\] below, we compare the longitudinal and transverse cross sections with and without subtraction. The high field subtraction singles out the magnetic contribution from the vortices. However, it is not very accurate at low $q$ ([*i.e.*]{} at $q \leq 0.025 \text{\AA}^{-1}$, see pink area in Fig. \[fig:scalingSM\]), especially for the longitudinal cross section, and it neglects the vortex contribution at 8 T, which can still be detected. This is why we have chosen the fitting procedure and present the non-subtracted data in the main text. We emphasize that the qualitative conclusions reached in the main text are unaffected by the choice of data treatment method. Analytical expressions for the form factor of a spin vortex {#sec:vortices} =========================================================== In order to support the results presented in the main text, we carry out calculations of the form factor of a spin vortex as seen by SANS. We assume a regular vortex of radius $R$ in a field $\mathbf{H}$ // $x$ (see Fig. \[fig:vortex\]). $m_{\rm L}(r)$ and $m_{\rm T}(r)$ are the components of the local moment along the field and perpendicular to it, respectively called longitudinal and transverse. Analytical expressions for the longitudinal $F_{\rm L}$ and transverse $F_{\rm T}$ form factors are obtained using `Wolfram Mathematica 10.4`.\ We start by defining the spin field that we use to model a regular vortex in the cartesian frame of Fig. 
\[fig:vortex\]: $$\label{eq:m_rpsidelta} \mathbf{m} = \left( \begin{matrix} m_{\rm L}(r) \\ m_{\rm T}(r) \cdot \cos \left(\psi+\delta\right) \\ m_{\rm T}(r) \cdot \sin \left(\psi+\delta\right) \end{matrix} \right) \quad ,$$ where $r$ and $\psi$ are polar coordinates in the $(y,z)$-plane and $\delta$ the angle formed by individual spins with respect to the concentric vortex lines. The corresponding structure factors are obtained by Fourier transforming Eq. \[eq:m\_rpsidelta\]: $$F_{\rm i} = \frac{1}{\pi} \int_{-\pi}^{\pi} \left( \frac{1}{2 \pi R^2} \int_{-\pi}^{\pi} \int_{0}^{R} \, m_{\rm i} \, e^{i q r} \, r \, dr \, d\psi \right) \, d\delta \quad , \label{eq:fft_m}$$ where $i = \{x,y,z\}$. As expressed by Eq. 4 of the main text, the magnetic neutron scattering cross section explicitly contains $\langle F_{\rm i} \rangle^{2}$ and $\langle F_{\rm i}^{2} \rangle$. First neglecting a possible $r$-dependence of $m_{\rm L}$ and $m_{\rm T}$, symmetry considerations[^2] lead to: $$\label{eq:FFT_mx} \langle F_{\rm x}^{2} (q) \rangle = \langle F_{\rm x} (q) \rangle^{2} = \left(\frac{m_{\rm L}}{2 \pi R^2} \int_{0}^{R} \int_{-\pi}^{\pi} r \cos \left(q \, r \right) \, dr \, d\psi\right)^2 = \left(\frac{2 m_{\rm L}}{qR}\right)^{2} \cdot J_{\rm 1}^{2}(qR) \quad ,$$ $$\begin{aligned} \label{eq:FFT_my} \nonumber \langle F_{\rm y}^{2} (q) \rangle &=& \frac{1}{\pi} \cdot \int_{-\pi}^{\pi} \left(\frac{m_{\rm T}}{2 \pi R^2} \int_{0}^{R} \int_{-\pi}^{\pi} \cos \left(\psi + \delta\right)r \sin \left(q \, r \right) \, dr \, d\psi\right)^2 \, d\delta \quad\\ \nonumber &=& \frac{1}{\pi} \cdot \int_{-\pi}^{\pi} \frac{\pi^2 \, m_{\rm T}^{2} \cdot \left(J_{\rm 1}(qR) \cdot H_{\rm 0}(qR) - J_{\rm 0}(qR) \cdot H_{\rm 1}(qR)\right)^{2} \cdot \cos^{2} \delta}{4 q^{2} R^{2}} \, d\delta\\ &=& \frac{\pi^2 \, m_{\rm T}^{2} \cdot \left(J_{\rm 1}(qR) \cdot H_{\rm 0}(qR) - J_{\rm 0}(qR) \cdot H_{\rm 1}(qR)\right)^{2}}{4 q^{2} R^{2}} \quad ;\\ \nonumber \quad\langle F_{\rm y} (q) \rangle^{2} &=& 0 \quad 
,\end{aligned}$$ $$\begin{aligned} \label{eq:FFT_mz} \nonumber \langle F_{\rm z}^{2} (q) \rangle &=& \frac{1}{\pi} \cdot \int_{-\pi}^{\pi} \left(\frac{m_{\rm T}}{2 \pi R^2} \int_{0}^{R} \int_{-\pi}^{\pi} \sin \left(\psi + \delta\right)r \sin \left(q \, r \right) \, dr \, d\psi\right)^2 \, d\delta \quad\\ \nonumber &=& \frac{1}{\pi} \cdot \int_{-\pi}^{\pi} \frac{\pi^2 \, m_{\rm T}^{2} \cdot \left(J_{\rm 1}(qR) \cdot H_{\rm 0}(qR) - J_{\rm 0}(qR) \cdot H_{\rm 1}(qR)\right)^{2} \cdot \sin^{2} \delta}{4 q^{2} R^{2}} \, d\delta\\ &=& \frac{\pi^2 \, m_{\rm T}^{2} \cdot \left(J_{\rm 1}(qR) \cdot H_{\rm 0}(qR) - J_{\rm 0}(qR) \cdot H_{\rm 1}(qR)\right)^{2}}{4 q^{2} R^{2}}\quad ;\\ \nonumber \langle F_{\rm z} (q) \rangle^{2} &=& 0 \quad ,\end{aligned}$$ where $J_{\rm n}$ ($H_{\rm n}$) are Bessel (Struve) functions of order $n$. We note that Eqs. \[eq:FFT\_mx\], \[eq:FFT\_my\] and \[eq:FFT\_mz\] are equivalent to the results obtained by Metlov and Michels for a centered vortex in a ferromagnetic nanodot (see Eq. 13 from Ref. ).\ The average squared form factor $\langle F_{\rm y}^{2} (q) \rangle =\langle F_{\rm z}^{2} (q) \rangle$ shows a maximum vs. $q$ (Fig. \[fig:Fcalc\]), as expected since the transverse spin components compensate within the vortex, yielding zero intensity at $q = 0$. However, in the context of a bulk disordered ferromagnet, we cannot physically consider a constant transverse magnetization, since it would imply a sudden jump to $m_{\rm T} = 0$ outside of the vortex, [*i.e.*]{} in the average ferromagnetic medium. If we assume instead that $m_{\rm T}(r)$ is maximum at the vortex center ($r = 0$) and continuously decreases away from the center as $m_{\rm T}(r) = m_{\rm T} \cdot (1 - r/R)$ (see Fig. \[fig:vortex\]), continuity at the vortex edge is restored. 
This average description neglects the local variations of the moment orientations, such as those induced by different Mn and Ni moments, or by an AF core constituted of first neighbor Mn-Mn pairs, which would yield smooth modulations of the diffuse scattering at larger $q$-values. With these assumptions, the transverse form factors are now expressed as: $$\begin{aligned} \label{eq:FFT_myz_smooth} \langle F_{\rm y}^{2} (q) \rangle &=& \langle F_{\rm z}^{2} (q) \rangle\\ \nonumber &=& \frac{m_{\rm T}^{2} \cdot \left[J_{\rm 1}(q \, R) \cdot \left(\pi q \, R \, H_{\rm 0}(q \, R) - 4\right) + q \, R \, J_{\rm 0}(q \, R) \cdot \left(2 - \pi H_{\rm 1}(q \, R)\right)\right]^{2}}{q^{4} R^{6}} \quad ;\\ \nonumber \langle F_{\rm y} (q) \rangle^{2} &=& \langle F_{\rm z} (q) \rangle^{2} = 0 \quad .\end{aligned}$$ As shown in Fig. \[fig:Fcalc\], Eqs. \[eq:FFT\_my\]-\[eq:FFT\_mz\] and Eq. \[eq:FFT\_myz\_smooth\] both yield peaks in the transverse form factors, but at different $q$ values. The peak positions $q_{\rm max}$ differ appreciably but have a $1/r_{\rm d}$-dependence in common: $q_{\rm max} \simeq 0.78 \, \pi / r_{\rm d}$ in the former case and $q_{\rm max} \simeq \pi / r_{\rm d}$ in the latter (see Fig. \[fig:Fcalc\]a). As explained above, the second model seems closer to reality for continuity reasons. This motivated our choice to relate the defect size to $\pi / q_{\rm max}$ (see main text). Eq. \[eq:FFT\_myz\_smooth\] also yields more asymmetric peaks, closer to the experimental transverse cross sections (see Fig. \[fig:Fcalc\]b and compare with Fig. 3e of main text). The variation of $m_{\rm T}(r)$ should be accompanied by a correlated variation of $m_{\rm L}(r)$ with respect to the average magnetization, which is likely too small to be observed (see main text). 
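The two peak positions quoted above can be checked numerically. Carrying out the $\psi$ and $\delta$ integrals in Eq. \[eq:fft\_m\] shows that $\langle F_{\rm y}^{2}(q) \rangle \propto |\int_0^R m_{\rm T}(r) \, J_1(qr) \, r \, dr|^2$, so a short script (NumPy only, with $J_1$ evaluated from its integral representation) suffices to locate the maxima for both radial profiles:

```python
import numpy as np

def bessel_j1(x):
    # J_1(x) = (1/pi) * int_0^pi cos(theta - x*sin(theta)) dtheta
    theta = np.linspace(0.0, np.pi, 401)
    w = np.full(theta.size, theta[1] - theta[0]); w[0] *= 0.5; w[-1] *= 0.5
    x = np.asarray(x, dtype=float)
    return (np.cos(theta - x[..., None] * np.sin(theta)) * w).sum(axis=-1) / np.pi

def peak_position(profile, R=1.0):
    # position of the maximum of |int_0^R m_T(r) J_1(q r) r dr|^2, i.e. of <F_y^2>(q)
    r = np.linspace(0.0, R, 401)
    wr = np.full(r.size, r[1] - r[0]); wr[0] *= 0.5; wr[-1] *= 0.5
    q_grid = np.linspace(0.5, 8.0, 751) / R
    amp = np.array([np.sum(profile(r / R) * bessel_j1(q * r) * r * wr) for q in q_grid])
    return q_grid[np.argmax(amp ** 2)]

q_const = peak_position(lambda u: np.ones_like(u))   # constant m_T (Eqs. FFT_my/FFT_mz)
q_smooth = peak_position(lambda u: 1.0 - u)          # m_T(r) = m_T (1 - r/R)
print(q_const / np.pi, q_smooth / np.pi)             # ~0.78 and ~1.0, in units of pi/R
```

Both ratios match the values quoted in the text, and the ordering $q_{\rm max}^{\rm smooth} > q_{\rm max}^{\rm const}$ simply reflects the smaller effective extent of the decaying transverse profile.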
Effect of temperature and cooling-field on the SANS patterns {#sec:sans_t_and_h} ============================================================ While the work presented in the main text is focused on the low temperature properties of the defects studied by small-angle neutron scattering and their relation to the ground state properties of Ni$_{0.81}$Mn$_{0.19}$, we provide here some details about the temperature- and cooling-field dependence of the experimental patterns. As shown in Fig. \[fig:SANS\_t\_and\_hcool1\]a-c, increasing temperature leads to a progressive rise of the small-angle intensity and a concomitant vanishing of the peak feature in the observed cross section, already well below the canting temperature T$_{\rm K} \sim 120$ K. This is due to the thermal activation of spin waves, which contribute to the scattered intensity since inelastic processes are not filtered out in a SANS setup. This leads to a non-trivial evolution of the total intensity, which strongly depends on the applied magnetic field (see Fig. \[fig:SANS\_t\_and\_hcool1\]d). It has been shown in Ref. (using a three-axis spectrometer, which allows isolating purely elastic scattering at the expense of neutron flux) that the signal associated with vortex-like defects vanishes only at T$_{\rm K}$, while the vortex size remains basically constant as a function of temperature, in agreement with theoretical expectations. Altogether, this justifies our experimental strategy, which concentrates on the low-temperature regime, where the properties of the field-induced “croutons” can be conveniently studied and compared to “T = 0” MC simulations. On the other hand, in the Ni$_{0.81}$Mn$_{0.19}$ sample, which behaves as a “rigid” system, the application of a magnetic field $H_{\rm cool}$ upon cooling reveals the existence of a unidirectional anisotropy induced by Dzyaloshinskii-Moriya interactions, the cooling field acting as an additional bias field [@Ziq1990]. 
Our SANS study reveals that the transverse correlations are substantially modified by $H_{\rm cool}$, although the bare plateau magnetization is basically not affected [@Mirebeau1988]. Indeed, SANS patterns recorded for different values of $H_{\rm cool}$ at low temperature clearly display different peak positions and intensities (see Fig. \[fig:SANS\_t\_and\_hcool2\]a). Essentially, increasing $H_{\rm cool}$ favors smaller defects at equal applied field. The scaling laws which govern the evolution of the defect size are however preserved, as shown in Fig. \[fig:SANS\_t\_and\_hcool2\]b, where the law $q_{\rm max} \propto \mu_{\rm 0}H_{\rm int}^{1/2}$ remains valid for all fields (except for applied fields smaller than 1 T when $H_{\rm cool} = 2$ T). These aspects highlight the importance of the neutron probe for the study of fine magnetic features of RSGs. Mean-field phase diagram and effective antiferromagnetic bond concentration {#sec:meanfield} =========================================================================== The reentrant spin glass (RSG) phase has been studied theoretically by many authors. Here, we use the celebrated model of Gabay and Toulouse[@Gabay1981] to compare the mean-field phase diagram with the experimental one and to estimate the effective antiferromagnetic (AFM) bond concentration of our Ni$_{0.81}$Mn$_{0.19}$ sample.\ The phase diagram of RSGs (Fig. \[fig:mf\_gabaytoulouse\]) is calculated in the mean field approximation for interactions with infinite range. The spins of sites $i$ and $j$ interact via independent random interactions $J_{\rm ij}$, distributed according to a normalized Gaussian law: $$p\left(J_{\rm ij}\right) = \sqrt{\frac{N}{2\pi}} \cdot \exp \left[-\frac{N}{2}\left(J_{\rm ij}-\frac{J_{\rm 0}}{N}\right)^{2}\right] \label{eq:pdf_Jij}$$ where $N$ is the number of sites and $J_{\rm 0}$ is the average exchange interaction. 
Namely, $\langle J_{\rm ij} \rangle_b = J_{\rm 0}/N$ and $\langle J_{\rm ij}^{2} \rangle_b = 1/N$, where $\langle \rangle_b$ denotes an average over bond disorder, that is over $p\left(J_{\rm ij}\right)$. As shown in Fig. \[fig:mf\_gabaytoulouse\], for $J_{\rm 0} \leq 1$ (including $J_{\rm 0} = 0$), the low temperature phase is a spin glass (SG), showing no long range order. A tricritical point is observed for $J_{\rm 0} = 1$. In the region $J_{\rm 0} \geq 1$, long range ferromagnetic order can occur. The system first evolves from paramagnetic (PM) to ferromagnetic (FM) at $T = T_{\rm C} = J_{\rm 0}$ upon cooling. At lower temperatures, two mixed phases, M1 and M2, are subsequently stabilized, corresponding to the freezing of transverse spin components (M1) and to strong irreversibilities in the magnetization (M2). Most importantly, the magnetic long range order is not broken and persists in both the M1 and M2 phases. This is the main difference between RSGs and SGs. In the usual terminology, the FM-M1 transition (or Gabay-Toulouse line) takes place at the canting temperature $T_{\rm K}$, while the M1-M2 transition (or de Almeida-Thouless line) occurs at the freezing temperature $T_{\rm F}$. We have kept these notations in the main text. The transition lines are calculated analytically according to Eqs. 10 and 11 of Ref. . The Gabay-Toulouse model yields a rather accurate description of the experimental phase diagrams of RSG systems by mapping the average interaction $J_{\rm 0}$ onto the concentration of magnetic species. In Ni$_{0.81}$Mn$_{0.19}$, the characteristic temperatures determined by susceptibility (this work, see Fig. 1 of main text) and neutron scattering (Ref. ) are $T_{\rm C} = 257$ K, $T_{\rm F} = 18$ K and $T_{\rm K} \simeq 120$ K, yielding a ratio $T_{\rm F} / T_{\rm C}$ close to $0.07$. We first determine the appropriate value of $J_{\rm 0}$ for our compound. In Fig. \[fig:mf\_gabaytoulouse\], the ratio $T_{\rm F} / T_{\rm C} = 0.07$ corresponds to $J_{\rm 0} = 1.48$. 
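The numbers quoted here are easy to verify: $T_{\rm F}/T_{\rm C} = 18/257 \simeq 0.07$, and in the normalized units of Eq. \[eq:pdf\_Jij\] (unit variance of the bond distribution), the weight of negative bonds for a Gaussian centered at $J_{\rm 0} = 1.48$ is $\Phi(-J_{\rm 0})$, with $\Phi$ the standard normal cumulative distribution. A quick check:

```python
import math

t_f, t_c = 18.0, 257.0
ratio = t_f / t_c
print(round(ratio, 3))  # -> 0.07

# fraction of AFM (negative) bonds for a unit-variance Gaussian centered at J0
j0 = 1.48
c_afm = 0.5 * math.erfc(j0 / math.sqrt(2.0))
print(round(c_afm, 3))  # -> 0.069
```

Both numbers land at $\simeq 7$%, consistent with the effective AFM bond concentration quoted in this supplement.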
Using this value, we see that $T_{\rm K}$ lies below the calculated transition line. The agreement is however satisfactory, considering: i) the simplifying hypotheses made to calculate the mean field diagram; ii) that the determination of $T_{\rm K}$ is non-trivial, so that the associated error bar ought to be large. Next, we use the derived value of $J_{\rm 0}$ to calculate the corresponding probability distribution function (PDF) of random-bond interactions used for MC simulations, assuming a random distribution of AFM bonds ($-J$) of concentration $c$ in a ferromagnetic medium ($+J$). Integrating the PDF over negative $J_{\rm ij}$’s, we determine an equivalent AFM bond concentration of $\simeq 7$%. This justifies the comparison between experiment and MC calculation for a weak concentration of AFM bonds (namely 5%). A comparative analysis of the internal structure of frustrated skyrmions and vortex-like defects {#sec:mc_sims} ============================================================================================ In the present section, we give a comparative analysis of the internal structure of so-called “frustrated skyrmions” (see Ref. for further details) and of the vortex-like defects investigated in the present paper. In both cases, the competing FM and AFM exchange interactions lead to the stability of particle-like states with non-trivial topology but different inherent properties. To stabilize “frustrated skyrmions”, we consider the following model with FM nearest-neighbor (NN) and AFM next-nearest-neighbor (NNN) exchange interactions: $$\begin{aligned} E= -J_1 \sum_{\langle i,j\rangle}\mathbf{m}_i\cdot\mathbf{m}_j+J_2 \sum_{\langle\langle i,j\rangle\rangle}\mathbf{m}_i\cdot\mathbf{m}_j -h \sum_im_i^z. \label{energy}\end{aligned}$$ where $\langle i,j \rangle$ and $\langle\langle i,j\rangle\rangle$ denote pairs of NN and NNN spins of unit length, $\mathbf{m}_i$, respectively, and $J_1,J_2>0$. 
The third term describes the interaction with the magnetic field parallel to the $z$ axis. The stability mechanism is provided by the quartic differential terms (in general, by the terms with higher-order derivatives) that appear in the continuum version of Eq. (\[energy\]) and allow one to overcome the limitations imposed by the Derrick theorem [@Derrick1964]. This mechanism is reminiscent of the original mechanism of dynamical stabilization proposed by Skyrme [@Skyrme1962]. In “frustrated skyrmions”, the vector $\mathbf{m}$ is antiparallel to the $z$-axis at the center and gradually rotates towards the field-aligned state at the boundary, thus resulting in circular particle-like states (Fig. \[skyrmionsSM\]a,b). The angle between two adjacent spins within the skyrmion cores is controlled by the ratio $J_2/J_1$ in Eq. (\[energy\]) and in the present case ($J_2 / J_1 = 0.5$) may reach the value $\pi/3$. A prominent property of such frustrated magnets with competing exchange interactions is that a skyrmion and an antiskyrmion have the same energy, irrespective of their helicity. Skyrmions and antiskyrmions are conventionally distinguished based on the sign of their topological charge, $+1$ or $-1$ respectively. The topological charge density, $\rho_Q$, maintains the same sign within the skyrmion cores (see Fig. \[skyrmionsSM\]b, with orange and blue coloring used for skyrmions and antiskyrmions, respectively). With an increasing magnetic field, the skyrmions may collapse into the homogeneous state. Within the discrete model (\[energy\]), the collapse occurs when the negative $m_z$-component of the magnetization within the skyrmion cores reaches 0; at this point, a skyrmion can be abruptly unwound (see in particular Fig. 6c in Ref. for more details on such a process). In some range of applied magnetic field, the skyrmions represent metastable states: their energy is higher than the energy of the homogeneously magnetized state. 
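The quoted maximal angle $\pi/3$ for $J_2/J_1 = 0.5$ can be recovered from a one-dimensional spiral ansatz: with a constant pitch angle $q$ between adjacent spins, the energy per spin of model (\[energy\]) at $h = 0$ reduces to $E(q) = -J_1 \cos q + J_2 \cos 2q$, which is minimized at $\cos q = J_1/(4J_2)$ whenever $J_2 > J_1/4$. A brute-force numerical sketch of this textbook reduction:

```python
import math

def pitch_angle(j1, j2, n=200000):
    # minimize the per-spin spiral energy E(q) = -j1 cos(q) + j2 cos(2q), q in [0, pi]
    qs = [math.pi * k / n for k in range(n + 1)]
    return min(qs, key=lambda q: -j1 * math.cos(q) + j2 * math.cos(2 * q))

q_star = pitch_angle(1.0, 0.5)            # the J2/J1 = 0.5 case of the text
print(round(math.degrees(q_star)))        # -> 60, i.e. pi/3 between adjacent spins
```

The same scan with $J_2 < J_1/4$ returns $q = 0$, i.e. the collinear ferromagnet, marking the onset of the competing-exchange regime.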
Usually, as shown in Fig. \[skyrmionsSM\]a, one observes clusters of such metastable skyrmions and antiskyrmions with mutual attractive interactions, embedded in the homogeneously magnetized matrix [@Leonov2015]. With decreasing magnetic field, the skyrmions tend to crystallize predominantly in a hexagonal lattice with the densest packing of skyrmions: at some critical field ($h < 0.35$ in the model (\[energy\]) with the chosen parameters [@Leonov2015]), the energy of an isolated skyrmion becomes negative with respect to the homogeneous state and hence the skyrmions tend to fill the space. The extended skyrmion lattice is determined by the stability of the localized solitonic skyrmion cores and by their geometrical incompatibility in the corners of the hexagons, which frustrates regular space-filling. However, if the formation of a skyrmion lattice is suppressed, isolated skyrmions continue to exist below the critical field. At the same time, isolated skyrmions have a tendency to elongate and expand into a band with helicoidal or cycloidal modulations and eventually to fill the whole space, since the spiral state represents a minimum with lower energy than the local minima corresponding to the metastable isolated skyrmions. Thus, the existence region of “frustrated” skyrmions is restricted by a collapse at high fields and by the critical low field at which the energy of an isolated skyrmion becomes negative, which instigates the formation of the lattice or its elliptical instability [@Leonov2016]. The processes of lattice formation or elliptical instability do not occur for vortex-like defects: residing around the AFM bonds, the vortices cannot form any type of extended ordered phase. However, as pointed out in the main text, with decreasing magnetic field and an increasing concentration of AFM bonds, the vortices form a liquid-like order driven by the same tendency to fill the space, although remaining metastable solutions. 
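The contrast between an integer skyrmion charge and the compensating charge density of a vortex-like texture can be made concrete with the standard lattice (Berg-Lüscher) definition of the topological charge, which sums the signed solid angles of the spherical triangles spanned by neighboring spins. The two axisymmetric test textures below are illustrative ansätze, not the simulated configurations of this work:

```python
import numpy as np

def texture(theta_of_r, L=96, extent=1.2, R=1.0):
    # axisymmetric texture m = (sin(th) cos(phi), sin(th) sin(phi), cos(th)), phi = polar angle
    x = np.linspace(-extent, extent, L)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)
    th = np.where(r < R, theta_of_r(np.clip(r / R, 0.0, 1.0)), 0.0)
    return np.stack([np.sin(th) * np.cos(phi), np.sin(th) * np.sin(phi), np.cos(th)], axis=-1)

def lattice_charge(m):
    # Berg-Luscher charge: signed solid angles of the two triangles of each plaquette / 4 pi
    def omega(a, b, c):
        num = np.einsum("...i,...i", a, np.cross(b, c))
        den = (1 + np.einsum("...i,...i", a, b) + np.einsum("...i,...i", b, c)
               + np.einsum("...i,...i", c, a))
        return 2.0 * np.arctan2(num, den)
    m1, m2, m3, m4 = m[:-1, :-1], m[1:, :-1], m[1:, 1:], m[:-1, 1:]
    rho = omega(m1, m2, m3) + omega(m1, m3, m4)   # charge density per plaquette (x 4 pi)
    return rho.sum() / (4 * np.pi), rho / (4 * np.pi)

# skyrmion-like texture: m reverses fully at the center -> |Q| = 1
Q_sk, _ = lattice_charge(texture(lambda u: np.pi * (1.0 - u)))
# vortex-like ring texture: m tilts but never reverses -> net Q ~ 0, density of both signs
Q_vx, rho_vx = lattice_charge(texture(lambda u: 0.5 * np.pi * np.sin(np.pi * u)))
print(round(abs(Q_sk), 3), round(Q_vx, 3), bool(rho_vx.min() < 0 < rho_vx.max()))
```

The full reversal carries unit charge, while the non-reversing texture accumulates positive and negative charge density that nearly cancels, echoing the both-sign density reported for the vortex-like defects.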
Solutions for vortex-like defects induced by interaction disorder are obtained by minimizing the following Hamiltonian: $$\begin{aligned} E= - \sum_{\langle i,j\rangle}J_{ij}\mathbf{m}_i\cdot\mathbf{m}_j -h \sum_im_i^z, \label{energy2}\end{aligned}$$ where the sum $\langle i,j \rangle$ runs only over NN pairs. The $J_{ij}$ are independent random variables taking the values $+1$ and $-1$ with probability $1-c$ and $c$, respectively (see Sec. \[sec:meanfield\]). The method used to obtain the vortex configurations in Fig. \[skyrmionsSM\]c,d is described in Ref. and is basically the same as for “frustrated” skyrmions[@Leonov2015]. The stabilized vortices are metastable solutions over the whole range of applied magnetic field. The vortices do not exhibit any smooth rotation of the magnetization and do not have any preferred helicity. As a consequence, the topological charge density has both signs within one isolated vortex, as depicted in Fig. \[skyrmionsSM\]d. The angle between two adjacent spins may reach values as large as $\pi$ (see, in particular, the vortex encircled by a blue dotted line and numbered 2 in Fig. \[skyrmionsSM\]c). The common case, however, is that of spatially localized vortices with a positive $m_z$-component (which is impossible for frustrated skyrmions) and rather small angles between spins (see the vortices numbered 1 in Fig. \[skyrmionsSM\]c). Having different numbers of AFM bonds within their cores, the vortices also exhibit different collapse fields. In particular, for the configuration depicted in Fig. \[skyrmionsSM\]c, no vortices rest on single AFM bonds, and vortices on two bonds (blue circles) are close to their transformation into the homogeneous state. Thus, the specific internal structure of a vortex might reflect the exact distribution of AFM bonds in its core or its vicinity, which allows a classification of such vortex states with the subsequent engineering of their properties. 
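For concreteness, the Hamiltonian of Eq. (\[energy2\]) can be evaluated directly on a square lattice; the following minimal numpy sketch (a hypothetical helper, assuming open boundaries, not the actual minimization code) could serve as the energy function inside a Metropolis or gradient-descent loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_bonds(L, c):
    """NN couplings J_ij = -1 (AFM) with probability c, +1 otherwise,
    stored separately for horizontal and vertical bonds of an L x L lattice."""
    Jx = np.where(rng.random((L - 1, L)) < c, -1.0, 1.0)
    Jy = np.where(rng.random((L, L - 1)) < c, -1.0, 1.0)
    return Jx, Jy

def energy(m, Jx, Jy, h):
    """Energy of Eq. (energy2) for unit spins m of shape (L, L, 3)
    in a field h along z, with open boundary conditions."""
    e = -np.sum(Jx * np.einsum('ijk,ijk->ij', m[:-1], m[1:]))
    e -= np.sum(Jy * np.einsum('ijk,ijk->ij', m[:, :-1], m[:, 1:]))
    e -= h * m[..., 2].sum()
    return e

L, c, h = 16, 0.1, 0.5
Jx, Jy = random_bonds(L, c)
m = np.zeros((L, L, 3)); m[..., 2] = 1.0   # field-aligned starting state
E0 = energy(m, Jx, Jy, h)
```

Starting from the field-aligned state and relaxing the spins around the AFM bonds then yields the localized vortex-like solutions discussed here.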
In particular, one could endeavour to create a smooth magnetization rotation through a particular pattern of AFM bonds. Thus, the existence region of vortex-like defects is restricted by a collapse at high fields (which depends on the internal structure of the isolated particles) and by the critical low field (and/or concentration $c$) at which the vortices form a liquid-like state. Finally, it is instructive to compare the field-dependence of the number of isolated vortex-like defects $N_{\rm d}$ as obtained by MC simulation with that derived from our small-angle neutron scattering experiment (see Figs. 4f and 6f of the main text, and Fig. \[fig:nd\_vs\_h\_simvsexp\]). As explained in the main text, a qualitative agreement between calculation and experiment is observed, namely a global increase of $N_{\rm d}$ with increasing field, followed by a saturation at a finite field. Remarkably, a fit of a stretched exponential (Eq. 4 of the main text) to the data yields similar agreement in the two cases (Fig. \[fig:nd\_vs\_h\_simvsexp\]a,b). This underscores similar evolutions of the defect size and integrated scattering intensity (or squared amplitude of the Fourier transform of the transverse magnetization distribution), despite seemingly different local interaction schemes (see main text). However, we point out that the inflection point of the $N_{\rm d}$ *vs.* $H$ curve occurs around $H_{\rm C} \simeq J$ in the simulation. A mapping to the experimental case suggests that $N_{\rm d}$ should not cease increasing for fields smaller than several hundred teslas in our Ni$_{0.81}$Mn$_{0.19}$ sample, as opposed to our observations. The origin of this disagreement might lie in the deliberately simplified approach we have followed to model the distribution of magnetic interactions in the MC simulations. 
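Such a stretched-exponential fit of $N_{\rm d}(H)$ can be performed with standard least-squares tools. The sketch below assumes a saturating form $N_{\rm d}(H) = N_\infty\,[1 - \exp(-(H/H_{\rm c})^\beta)]$ and synthetic data; the exact functional form of Eq. 4 of the main text may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def n_d(H, N_inf, H_c, beta):
    """Assumed stretched-exponential form: growth of the defect
    number with field, saturating at N_inf."""
    return N_inf * (1.0 - np.exp(-(H / H_c) ** beta))

# Synthetic data standing in for the measured/simulated N_d(H).
H = np.linspace(0.05, 3.0, 30)
true = (100.0, 1.0, 1.5)
rng = np.random.default_rng(1)
N_obs = n_d(H, *true) + rng.normal(0.0, 1.0, H.size)

# Least-squares fit of (N_inf, H_c, beta).
popt, pcov = curve_fit(n_d, H, N_obs, p0=(80.0, 0.5, 1.0))
```

The fitted $H_{\rm c}$ then plays the role of the characteristic field separating the growth and saturation regimes of $N_{\rm d}$.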
In order to achieve better control over the properties of systems supporting vortex- or skyrmion-like textures, our work could thus motivate further theoretical work, combining the effects of exchange frustration[@Okubo2012; @Leonov2015; @Lin2016] and quenched disorder[@Chudnovsky2017], which have only been considered separately up to now. However, this will not be a simple exercise. Altogether, our combined experimental and numerical investigation of the properties of the field-induced vortex-like defects stabilized in the Ni$_{0.81}$Mn$_{0.19}$ RSG leads to a simple physical picture. At low magnetic fields (Fig. \[fig:nd\_vs\_h\_simvsexp\]c), the defects are large and encompass several AFM (Mn-Mn) pairs, such that their apparent number $N_{\rm d}$ is small. Upon field increase, however, their size decreases while they remain pinned around the AFM pairs (Fig. \[fig:nd\_vs\_h\_simvsexp\]d), the number of which is fixed by the Mn concentration and heat treatment. This leads to an increase of $N_{\rm d}$. Its value is expected to saturate when defects start collapsing, [*i.e.*]{} when spins are locally aligned by the applied field. ![image](fig1SM.eps){width="98.00000%"} ![image](fig2SM.eps){width="98.00000%"} ![image](fig3SM.eps){width="98.00000%"} ![image](fig4bisSM.eps){width="98.00000%"} ![image](fig4terSM.eps){width="98.00000%"} ![image](fig4SM.eps){width="98.00000%"} ![image](fig6SM.eps){width="90.00000%"} ![image](fig7SM.eps){width="98.00000%"} [^1]: Although the results provided by these authors have been obtained on a sample with slightly different composition and heat treatment, we assume that their parameters are likely to be representative of the SRO of our Ni$_{\rm 0.81}$Mn$_{\rm 0.19}$ sample. [^2]: In our choice of vortex morphology, $\mathbf{m}_{\rm L}$ is symmetric with respect to the origin such that only the real part of its Fourier transform is non-zero. Conversely, the antisymmetric $\mathbf{m}_{\rm T}$-term simplifies to a sine Fourier transform.
--- abstract: 'In this work, we investigated the feasibility of applying deep learning techniques to solve Poisson’s equation. A deep convolutional neural network is set up to predict the distribution of electric potential in 2D or 3D cases. With proper training data generated from a finite difference solver, the strong approximation capability of the deep convolutional neural network allows it to make correct predictions given information on the source and the distribution of permittivity. With the application of L2 regularization, numerical experiments show that the prediction error of 2D cases can reach below 1.5% and the prediction error of 3D cases can reach below 3%, with a significant reduction in CPU time compared with the traditional solver based on finite difference methods.' author: - title: 'Study on a Poisson’s Equation Solver Based On Deep Learning Technique' --- Deep Learning; Poisson’s Equation; Finite Difference Method; Convolutional Neural Network; L2 regularization. Introduction ============ Computational electromagnetic simulation has been widely used in research and engineering, such as antenna and circuit design, target detection, geophysical exploration, nano-optics, and many other related areas [@Chew2001]. Computational electromagnetic algorithms serve as the kernel of simulation. They solve Maxwell’s equations under various materials and different boundary conditions. In these algorithms, the domain of simulation is usually discretized into small subdomains and the partial differential equations are converted from continuous into discrete forms, usually as matrix equations. These matrix equations are solved either using direct solvers like LU decomposition, or iterative solvers such as the conjugate gradient method [@Golub1996]. Typical methods in computational electromagnetics include the finite difference method (FDM) [@Taflove2005], the finite element method (FEM) [@Jin2014], the method of moments (MOM) [@Harrington1993], etc. 
Practical models are usually partitioned into thousands or millions of subdomains, and matrix equations with millions of unknowns are solved on computers. This usually requires a large amount of CPU time and memory. Therefore, it is still very challenging to apply full-wave computational electromagnetic solvers to applications that require real-time responses, such as radar imaging, biomedical monitoring, fluid detection, non-destructive testing, etc. The speed of electromagnetic simulation still cannot meet the demand of these applications. One method of acceleration is to divide the entire computation into offline and online processes. In the offline process, a set of models are computed and the results are stored in memory or on the computer hard disk. Then in the online process, solutions can be interpolated from the pre-computed results. These methods include model order reduction [@Wilhelmus2008], the characteristic basis function method [@prakash2003characteristic], the reduced basis method [@Noor1980; @dang2017quasi], etc. The idea of these schemes is to trade more memory for faster speed. Moreover, artificial neural networks have also been used to optimize circuits[@zaabab1995neural] and to accelerate the design of RF and microwave components [@Zhang2000][@zhang2003artificial]. However, the generalization capability is still limited for most of these methods, and they are mainly used to describe systems with few parameters. With the rapid development of big data technology and high performance computing, deep learning methods have been applied in many areas and have significantly improved the performance of voice and image processing [@Hinton2006; @LuCun2015]. These dramatic improvements rely on the strong approximation capability of deep neural networks. 
Recently, researchers have applied deep neural networks to approximate complex physical systems [@Ehrhardt2017][@lerer2016learning], such as fluid dynamics [@Tompson2016; @Guo2016], Schrödinger equations [@Mills2017] and rigid body motion[@Byravan2017SE3]. In these works, the deep neural networks “learn” from data simulated with traditional solvers. They can then predict the field distribution in a domain with thousands or millions of unknowns. Furthermore, this approach has also been applied to capacitance extraction with some promising results [@yao2016machine]. The flexibility in modeling different scenarios is also significantly improved compared with traditional techniques using artificial neural networks. In this study, we investigate the feasibility of using deep learning techniques to accelerate electromagnetic simulation. As a starting point, we aim to compute the 2D or 3D electric potential distribution by solving the 2D or 3D Poisson’s equation. We extend the deep neural network structure in [@Tompson2016] and propose an approximation model based on a fully convolutional network [@long2015fully]. We apply L2 regularization[@ng2004feature] in the objective function in order to prevent over-fitting and improve the prediction accuracy. In the offline training stage, a finite-difference solver is used to model inhomogeneous permittivity distributions and point-source excitations at different locations; the permittivity distribution, excitation, and potential field are used as the training data set. The input data include the permittivity distribution and the location of excitation; the output data are the electric potential over the computation domain. Then in the online stage, the network can mimic the solving process and correctly predict the electric potential distribution in the domain. Different from traditional algorithms, the method proposed in this paper is an end-to-end simulation driven by data. 
The computational complexity of the network is fixed and much smaller than that of traditional algorithms, such as the finite-difference method. Preliminary numerical studies also support our observations. This paper is organized as follows: In Section 2 we introduce the data model and the deep convolutional neural network model used in the computation. In Section 3 we show more details of preliminary numerical examples and compare the accuracy and computing time with the algorithm using the finite-difference method. Conclusions are drawn in Section 4. Formulation =========== Finite Difference Method Model ------------------------------ The electrostatic potential in the region of computation with a Dirichlet boundary condition can be described as $$\nabla\cdot(\varepsilon({\mbox{$\mathbf{r}$}})\nabla\phi({\mbox{$\mathbf{r}$}}))=-\rho({\mbox{$\mathbf{r}$}}) \,, \label{eq10}$$ $$\phi|_{\partial D}=0 \,, \label{eq20}$$ where $\phi({\mbox{$\mathbf{r}$}})$ is the electric potential in Domain $D$, $\rho({\mbox{$\mathbf{r}$}})$ represents the distribution of electric charges, and $\varepsilon({\mbox{$\mathbf{r}$}})$ represents the dielectric constant. Equation (\[eq20\]) describes the Dirichlet boundary condition, which enforces the value of the potential to be zero along the boundary. The above equations are solved using the finite difference method. The domain of computation is partitioned into subdomains using Cartesian grids. The electric potential and electric charge density in each subdomain are assumed constant. A central difference scheme is used to approximate the derivatives in Eq. (\[eq10\]). 
If the computation domain is 2D, then we can write Eq. (\[eq10\]) as $$\begin{split} &\frac{\varepsilon_{i+\frac{1}{2},j}\frac{\phi_{i+1,j}-\phi_{i,j}}{\Delta x}-\varepsilon_{i-\frac{1}{2},j}\frac{\phi_{i,j}-\phi_{i-1,j}}{\Delta x}}{\Delta x}+\\ &\frac{\varepsilon_{i,j+\frac{1}{2}}\frac{\phi_{i,j+1}-\phi_{i,j}}{\Delta y}-\varepsilon_{i,j-\frac{1}{2}}\frac{\phi_{i,j}-\phi_{i,j-1}}{\Delta y}}{\Delta y}=-\rho_{i,j} \end{split} \label{eq30}$$ and $$\varepsilon_{i+a\frac{1}{2},j+b\frac{1}{2}}=\frac{\varepsilon_{i,j}+\varepsilon_{i+a,j+b}}{2} , a,b \in \{-1,0,1\} \label{eq40}$$ where $(i,j)$ represents the location of the subdomain in the grid. If the computation domain is 3D, then Eq. (\[eq10\]) can be written as $$\begin{split} &\frac{\varepsilon_{i+\frac{1}{2},j,k}\frac{\phi_{i+1,j,k}-\phi_{i,j,k}}{\Delta x}-\varepsilon_{i-\frac{1}{2},j,k}\frac{\phi_{i,j,k}-\phi_{i-1,j,k}}{\Delta x}}{\Delta x}+\\ &\frac{\varepsilon_{i,j+\frac{1}{2},k}\frac{\phi_{i,j+1,k}-\phi_{i,j,k}}{\Delta y}-\varepsilon_{i,j-\frac{1}{2},k}\frac{\phi_{i,j,k}-\phi_{i,j-1,k}}{\Delta y}}{\Delta y}+\\ &\frac{\varepsilon_{i,j,k+\frac{1}{2}}\frac{\phi_{i,j,k+1}-\phi_{i,j,k}}{\Delta z}-\varepsilon_{i,j,k-\frac{1}{2}}\frac{\phi_{i,j,k}-\phi_{i,j,k-1}}{\Delta z}}{\Delta z}=-\rho_{i,j,k} \end{split} \label{eq31}$$ and $$\varepsilon_{i+a\frac{1}{2},j+b\frac{1}{2},k+c\frac{1}{2}}=\frac{\varepsilon_{i,j,k}+\varepsilon_{i+a,j+b,k+c}}{2} , a,b,c \in \{-1,0,1\}$$ where $(i,j,k)$ represents the location of the subdomain in the grid. The above equations in each subdomain construct a linear system $\bar{\bar{A}}\cdot{\mbox{$\boldsymbol{\bar{\phi}}$}} = - {\mbox{$\boldsymbol{\bar{\rho}}$}}$, where $\bar{\bar{A}}$ is symmetric and positive semi-definite. LU decomposition or the conjugate gradient method can be applied to solve this equation. 
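For concreteness, the 2D system above can be assembled and solved with a sparse direct solver in a few lines of Python. This is an illustrative sketch (grid size and source value are arbitrary), not the paper's solver:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson_2d(eps, rho, h=1.0):
    """Solve div(eps grad phi) = -rho on an n x n grid with phi = 0 on
    the boundary, using the 5-point scheme; interface permittivities
    are arithmetic means of neighbouring cells."""
    n = eps.shape[0]
    idx = lambda i, j: i * n + j
    A = sp.lil_matrix((n * n, n * n))
    b = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            if i in (0, n - 1) or j in (0, n - 1):
                A[k, k] = 1.0            # Dirichlet boundary: phi = 0
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                e = 0.5 * (eps[i, j] + eps[i + di, j + dj])
                A[k, k] -= e / h**2
                A[k, idx(i + di, j + dj)] = e / h**2
            b[k] = -rho[i, j]
    return spla.spsolve(A.tocsr(), b).reshape(n, n)

n = 17
eps = np.ones((n, n))
rho = np.zeros((n, n)); rho[n // 2, n // 2] = -10.0  # point source
phi = solve_poisson_2d(eps, rho)
```

Here `spsolve` plays the role of the LU decomposition mentioned above; for large grids, an iterative solver such as conjugate gradients would typically replace it.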
![2D Modeling Setup: Yellow points are the 11 positions for the source; the blue area is where we predict[]{data-label="2dmodel"}](2dmodel_t.jpg){height="3cm"} ![3D Modeling Setup: Yellow points are the 11 positions for the source; the blue area is where we predict[]{data-label="3dmodel"}](3dmodel_t.jpg){height="4cm"} ![image](network.jpg){width="12.7cm" height="8cm"} ConvNet model ------------- Neural networks have excellent performance for function fitting. One can approximate a complex function using powerful function fitting methods based on deep neural networks [@Tompson2016]. Convolutional neural networks (CNN) have excellent performance in learning geometry and making per-pixel predictions in images[@lawrence1997face][@krizhevsky2012imagenet], and fully convolutional networks have been proven to be powerful models for pixelwise prediction. In this paper, the problem that we model has various locations of the excitation and various dielectric constant distributions, so the geometric characteristics of this problem are prominent. They all need to be considered in the design of the network layers. Therefore, the input of the network includes the distribution of electrical permittivity and the source information. The electrical permittivity distribution is represented as a two-dimensional or three-dimensional array, in which every element $(i,j)$ or $(i,j,k)$ represents the electrical permittivity at grid $(i,j)$ or $(i,j,k)$. The source information is also represented by a two-dimensional or three-dimensional array, in which every element represents the distance between the source and a single grid. The distance function can provide a good representation of the universal source information. If the case is 2D, the distance function can be written as $$f(i,j)_n=\sqrt{(i-i_n )^2+(j-j_n )^2}, n \in \{1,2...,11\} \label{eq50}$$ where $i$,$j$ is the location of grids in the predicted area and $i_n$,$j_n$ is the source excitation’s location, which has 11 different positions. 
If the case is 3D, the distance function can be written as $$\begin{split} &f(i,j,k)_n=\sqrt{(i-i_n )^2+(j-j_n )^2+(k-k_n )^2},\\ & \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad n \in \{1,2...,11\} \end{split}$$ where $i$,$j$,$k$ is the location of grids in the predicted area and $i_n$,$j_n$,$k_n$ is the source excitation’s location, which has 11 different positions. The setup of the deep neural network is based on an optimization process that adjusts the network parameters to minimize the difference between the “true” values of the function and those predicted by the network on a set of sampling points. In this problem, the loss function in the optimization is defined to measure the difference between the logarithm of the predicted potential and that of the potential obtained by FDM; it can be written as $$loss_{obj}=\|\log_{10}(\phi)-\log_{10}(\widehat{\phi}) \|^2\,,$$ and an L2 regularization term is included in the cost function to prevent over-fitting; the final cost function is written as: $$f_{obj}=loss_{obj}+ \frac{\lambda}{2n}\sum_{w}w^{2}\,, \label{eq60}$$ where $\phi$ is the predicted potential, $\widehat{\phi}$ is the potential solved by FDM, $\lambda$ is a hyperparameter, *n* is the number of training samples, and *w* denotes the weights of the network. The use of the logarithm of the potential avoids instability in the optimization due to the fast attenuation of the electrical potential distribution. It also helps to improve the accuracy of the prediction. L2 regularization implies a tendency to train weights as small as possible and to retain a larger weight only when it significantly reduces the original loss. The internal structure of the fully convolutional neural network model is shown in the figure. It consists of seven stages of convolution and Rectified Linear Unit (ReLU) layers[@glorot2011deep], but there are no pooling layers because the features in this problem are not complicated. 
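The distance-function input of Eq. (\[eq50\]) and the cost of Eq. (\[eq60\]) can be sketched in plain numpy as follows (the actual model is trained in Tensorflow; this is only an illustration of the two formulas):

```python
import numpy as np

def distance_feature(shape, src):
    """Distance-function input of Eq. (eq50): Euclidean distance from
    each grid point to the source location src = (i_n, j_n)."""
    ii, jj = np.indices(shape)
    return np.sqrt((ii - src[0]) ** 2 + (jj - src[1]) ** 2)

def cost(phi_pred, phi_fdm, weights, lam, n_samples):
    """Cost of Eq. (eq60): squared error between log10 potentials plus
    an L2 penalty on the network weights."""
    loss = np.sum((np.log10(phi_pred) - np.log10(phi_fdm)) ** 2)
    reg = lam / (2 * n_samples) * sum(np.sum(w ** 2) for w in weights)
    return loss + reg
```

A perfect prediction (`phi_pred == phi_fdm`) with zero weights gives a cost of exactly 0, and the L2 term grows quadratically with the weight magnitudes, which is what drives the weights towards small values during training.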
The input data of the ConvNet model include the permittivity distribution and the location of excitation expressed as the distance function; the output data are the predicted electric potential of the computation domain. One result of scenario 1: *Left*: FDM result, *Middle*: ConvNet result, *Right*: error distribution One result of scenario 2: *Left*: FDM result, *Middle*: ConvNet result, *Right*: error distribution One result of scenario 3: *Left*: FDM result, *Middle*: ConvNet result, *Right*: error distribution One result of scenario 4: *Left*: FDM result, *Middle*: ConvNet result, *Right*: error distribution One result of scenario 5: *Left*: FDM result, *Middle*: ConvNet result, *Right*: error distribution Results And Analysis ==================== In this study, we solve the electrostatic problem in a square region (2D case) or a cube region (3D case). In 2D cases, a square region is partitioned into $64\times 64$ grids, as shown in Fig. \[2dmodel\]. The yellow points indicate the locations of the sampled excitation. In 3D cases, a cube region is partitioned into $64\times 64\times64$ grids, as shown in Fig. \[3dmodel\]. The yellow points indicate the locations of the sampled excitation, and the value of the excitation is fixed at -10. We aim to solve the potential field in the region of $32\times 32$ or $32\times32\times32$ colored in blue. In the 2D case, we consider 6 possible scenarios, with a background permittivity of 1 in all scenarios: scenario 1 ---------- Scenario 1 has a single ellipse located in the center of the square region. This ellipse has different shapes, with semi-axes varying from 1 to 20 and a rotation angle randomly chosen between $\frac{\pi}{20}$ and $\pi$. The permittivity value of the target is randomly selected from \[0.125,0.25,0.5,2,4,6\]. 
scenario 2 ---------- Scenario 2 divides the square region into four identical parts, and each part has an ellipse whose semi-axes vary from 1 to 8 and whose rotation angle is randomly chosen between $\frac{\pi}{20}$ and $\pi$. The four ellipses have different shapes, but their permittivity values are the same and randomly selected from \[0.125,0.25,0.5,2,4,6\]. scenario 3 ---------- Scenario 3 divides the square region into four identical parts, and each part has an ellipse whose semi-axes vary from 1 to 8 and whose rotation angle is randomly chosen between $\frac{\pi}{20}$ and $\pi$. The four ellipses have different shapes, and their permittivity values are different and randomly selected from \[0.125,0.25,0.5,2,4,6\]. scenario 4 ---------- Scenario 4 has a single ellipse whose location moves within a small range. This ellipse has different shapes, with semi-axes varying from 1 to 12 and a rotation angle randomly chosen between $\frac{\pi}{20}$ and $\pi$. The permittivity value of the target is randomly selected from \[0.125,0.25,0.5,2,4,6\]. scenario 5 ---------- Scenario 5 has no special shapes, and the predicted region is the region of $32\times 32$ colored in blue. The permittivity value of every four grids in the target is randomly chosen from 0.125 to 6. scenario 6 ---------- Scenario 6 includes scenarios 1 to 5. In the 3D case, ellipsoids with different shapes are located inside the predicted region. As shown in Fig. \[3dmodel\], their three semi-axes vary from 1 to 20. The convolutional neural network takes two $64\times 64$ or $64\times64\times64$ arrays as input, and the output is a $32\times32$ or $32\times32\times32$ array representing the field in the region of investigation. The training and testing data for the network are obtained by the finite-difference solver. 
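A training sample in the spirit of scenario 1 can be generated as follows. This is a hypothetical generator for illustration; the exact sampling details used in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
PERMS = [0.125, 0.25, 0.5, 2.0, 4.0, 6.0]

def ellipse_map(n=64, a_max=20):
    """Scenario-1-style sample: one rotated ellipse of random semi-axes
    and permittivity, centered on a uniform background (eps = 1)."""
    eps = np.ones((n, n))
    a, b = rng.uniform(1, a_max, size=2)          # semi-axes
    theta = rng.uniform(np.pi / 20, np.pi)        # rotation angle
    ii, jj = np.indices((n, n)) - n // 2          # centered coordinates
    x = ii * np.cos(theta) + jj * np.sin(theta)
    y = -ii * np.sin(theta) + jj * np.cos(theta)
    eps[(x / a) ** 2 + (y / b) ** 2 <= 1.0] = rng.choice(PERMS)
    return eps
```

Each generated permittivity map, paired with a randomly chosen source position and the corresponding FDM solution, then forms one training sample.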
For 2D cases, we use 8000 samples for training and 2000 samples for testing in scenarios 1 to 5, and 40000 samples for training and 10000 samples for testing in scenario 6; for 3D cases, we use 4000 samples for training and 1000 samples for testing. The ConvNet model was implemented in Tensorflow, and an Nvidia K80 GPU card is used for computation. The Adam[@kingma2014adam] optimizer is used to optimize the objective function in Eq. (\[eq60\]). For a more detailed comparison, we use the relative error of the ConvNet model to measure the accuracy of the prediction. We first compute the difference between the ConvNet model predicted potential and the FDM generated potential. For a subdomain, the relative error is defined as $$err(i,j)\ or\ err(i,j,k)=\frac{|\phi_{ConvNet}-\phi_{FDM}|}{\phi_{FDM}} \,,$$ where $\phi_{ConvNet}$ and $\phi_{FDM}$ are the predicted and “true” potential fields, respectively. The average relative error of the $n$-th testing case is the mean value of the relative error over all subdomains, expressed in dB: $$err_{aver_n}=20\log_{10}(\frac{\sum_{i}\sum_{j}err(i,j)}{\sum_{i}\sum_{j} 1 }), \ for\ the\ 2D\ case$$ $$err_{aver_n}=20\log_{10}(\frac{\sum_{i}\sum_{j}\sum_{k}err(i,j,k)}{\sum_{i}\sum_{j}\sum_{k} 1 }), \ for\ the\ 3D\ case$$ The figures above show one result of each of scenarios 1 to 5 in 2D cases, randomly chosen from the testing samples. It can be observed that the predicted potential field distribution agrees well with the one computed by the finite difference method. The final average relative errors of the prediction in scenarios 1 to 5 by the ConvNet model are -41dB, -41dB, -38dB, -38dB, and -40dB, respectively. In scenario 6, the final average relative error of the prediction is -38dB. The proposed ConvNet model shows good prediction capability and good generalization ability for 2D cases. ![Loss curves of training and testing in 2D cases []{data-label="traintestloss"}](traintestloss.jpg){width="6cm"} The result of the 3D cases is also visualized. 
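The dB-scaled error metric above can be checked with a short numpy sketch; a uniform 1% relative error corresponds to exactly -40 dB, which puts the reported values of -38 to -41 dB in context:

```python
import numpy as np

def avg_rel_err_db(phi_pred, phi_fdm):
    """Average relative error over all subdomains, in dB:
    20*log10 of the mean of |phi_pred - phi_fdm| / |phi_fdm|."""
    rel = np.abs(phi_pred - phi_fdm) / np.abs(phi_fdm)
    return 20.0 * np.log10(rel.mean())

phi_fdm = np.full((32, 32), 2.0)
phi_pred = phi_fdm * 1.01          # uniform 1% error
err_db = avg_rel_err_db(phi_pred, phi_fdm)   # -> -40 dB
```

Thus -41 dB corresponds to an average relative error slightly below 1%, consistent with the sub-1.5% figure quoted in the abstract.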
The difference between the ConvNet model’s prediction and the FDM results is small, and the final average relative error is -31dB. The ConvNet model makes good predictions for 3D cases, which are more complicated than 2D cases. The good prediction capability and generalization ability of the proposed ConvNet model are thus verified. Fig. \[traintestloss\] shows that in 2D cases, the curve of the testing loss agrees well with that of the training loss, which means the ConvNet model does not over-fit the training data. Using this model, the CPU time is reduced significantly for both 2D and 3D cases. For example, using FDM to obtain 2000 sets of 2D potential distributions takes 16s, while the ConvNet model only takes 0.13s; using FDM to obtain 5 sets of 3D potential distributions takes 292s, while the ConvNet model only takes 1.2s. This indicates the possibility of building a real-time electromagnetic simulator. Conclusion ========== In this study, we investigate the possibility of using deep learning techniques to reduce the computational complexity in electromagnetic simulation. Here we compute the 2D and 3D electrostatic problem as an example. By building up a proper convolutional neural network, we manage to correctly predict the potential field with the average relative error below 1.5% in 2D cases and below 3% in 3D cases. Moreover, the computational time is significantly reduced. This study shows that it may be possible to take advantage of the flexibility in deep neural networks and build up a fast electromagnetic solver that may provide real-time responses. In future work, we will further improve the accuracy of 3D case predictions and try to build a fast real-time electromagnetic simulator. [10]{} W. Chew, E. Michielssen, J. M. Song, and J. M. Jin, Eds., *Fast and Efficient Algorithms in Computational Electromagnetics*. Norwood, MA, USA: Artech House, Inc., 2001. G. H. Golub and C. F. V. 
Loan, *Matrix Computations*, 3rd ed. Johns Hopkins University Press, 1996. A. Taflove and S. C. Hagness, *Computational Electrodynamics: The Finite-Difference Time-Domain Method*, 3rd ed. Artech House, 2005. J.-M. Jin, *The Finite Element Method in Electromagnetics*, 3rd ed. Wiley-IEEE Press, 2014. R. F. Harrington, *Field Computation by Moment Methods*. Wiley-IEEE Press, 1993. W. H. Schilders, H. A. van der Vorst, and J. Rommes, Eds., *Model Order Reduction: Theory, Research Aspects and Applications*. Springer, 2008. V. Prakash and R. Mittra, “Characteristic basis function method: A new technique for efficient solution of method of moments matrix equations,” *Microwave and Optical Technology Letters*, vol. 36, no. 2, pp. 95–100, 2003. A. K. Noor and J. M. Peters, “Reduced basis technique for nonlinear analysis of structures,” *AIAA Journal*, vol. 18, no. 4, pp. 455–462, 1980. X. Dang, M. Li, F. Yang, and S. Xu, “Quasi-periodic array modeling using reduced basis method,” *IEEE Antennas and Wireless Propagation Letters*, vol. 16, pp. 825–828, 2017. A. H. Zaabab, Q.-J. Zhang, and M. Nakhla, “A neural network modeling approach to circuit optimization and statistical design,” *IEEE Transactions on Microwave Theory and Techniques*, vol. 43, no. 6, pp. 1349–1358, 1995. Q. J. Zhang and K. C. Gupta, *Neural Networks for RF and Microwave Design*. Artech House, 2000. Q.-J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, “Artificial neural networks for RF and microwave design - from theory to practice,” *IEEE Transactions on Microwave Theory and Techniques*, vol. 51, no. 4, pp. 1339–1350, 2003. G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” *Science*, vol. 313, no. 5786, p. 504. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” *Nature*, vol. 521, no. 7553, p. 436. S. 
Ehrhardt, A. Monszpart, N. J. Mitra, and A. Vedaldi, “Learning a physical long-term predictor,” *CoRR*, vol. abs/1703.00247, 2017. \[Online\]. Available: <http://arxiv.org/abs/1703.00247> A. Lerer, S. Gross, and R. Fergus, “Learning physical intuition of block towers by example,” *arXiv preprint arXiv:1603.01312*, 2016. J. Tompson, K. Schlachter, P. Sprechmann, and K. Perlin, “Accelerating eulerian fluid simulation with convolutional networks,” *CoRR*, vol. abs/1607.03597, 2016. \[Online\]. Available: <http://arxiv.org/abs/1607.03597> X. Guo, W. Li, and F. Iorio, “Convolutional neural networks for steady flow approximation,” in *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, ser. KDD ’16. New York, NY, USA: ACM, 2016, pp. 481–490. \[Online\]. Available: <http://doi.acm.org/10.1145/2939672.2939738> K. Mills, M. Spanner, and I. Tamblyn, “Deep learning and the Schrödinger equation,” *ArXiv e-prints*, Feb. 2017. A. Byravan and D. Fox, “SE3-nets: Learning rigid body motion using deep neural networks,” in *Robotics and Automation (ICRA), 2017 IEEE International Conference on*. IEEE, 2017, pp. 173–180. H. M. Yao, Y. W. Qin, and L. J. Jiang, “Machine learning based MoM (ML-MoM) for parasitic capacitance extractions,” in *Electrical Design of Advanced Packaging and Systems (EDAPS), 2016 IEEE*. IEEE, 2016, pp. 171–173. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, 2015, pp. 3431–3440. A. Y. Ng, “Feature selection, L1 vs. L2 regularization, and rotational invariance,” in *Proceedings of the Twenty-First International Conference on Machine Learning*. ACM, 2004, p. 78. S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. 
Back, “Face recognition: A convolutional neural-network approach,” *IEEE Transactions on Neural Networks*, vol. 8, no. 1, pp. 98–113, 1997. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in *Advances in Neural Information Processing Systems*, 2012, pp. 1097–1105. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in *Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics*, 2011, pp. 315–323. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” *arXiv preprint arXiv:1412.6980*, 2014.
--- abstract: 'Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source and packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.' author: - 'Chun Wang, Ming-Hui Chen, Elizabeth Schifano, Jing Wu, and Jun Yan' bibliography: - 'softrevTechRep.bib' title: Statistical Methods and Computing for Big Data --- [Key words:]{} bootstrap; divide and conquer; external memory algorithm; high performance computing; online update; sampling; software Introduction {#sec:intro} ============ A 2011 McKinsey report predicted a shortage of talent necessary for organizations to take advantage of big data [@Many:etal:big:2011]. Data now stream from daily life thanks to technological advances, and big data has indeed become a big deal [e.g., @Shaw:why:2014]. In the President’s Corner of the June 2013 issue of AMStat News, the three presidents (elect, current, and past) of the American Statistical Association (ASA) wrote an article titled “The ASA and Big Data” [@Sche:Davi:Rodr:asa:2013]. 
This article echoes the June 2012 column of @Rodr:big:2012 on the recent media focus on big data, and discusses what the statistics profession needs to do in response to the fact that statistics and statisticians are missing from big data discussions. In the followup July 2013 column, president Marie Davidian further raised the issues of statistics not being recognized as data science and mainstream academic statisticians being left behind by the rise of big data [@Davi:aren:2013]. A white paper prepared by a working group of the ASA called for more ambitious efforts from statisticians to work together with researchers in other fields on national research priorities in order to achieve better science more quickly [@Rudi:etal:disc:2014]. The same concern was expressed in a 2014 president’s address of the Institute of Mathematical Statistics (IMS) [@Yu:let:2014]. President Bin Yu of the IMS called for statisticians to own Data Science by working on real problems such as those from genomics, neuroscience, astronomy, nanoscience, computational social science, personalized medicine/healthcare, finance, and government; relevant methodology/theory will follow naturally. Big data in the media or the business world may mean something different from what is familiar to academic statisticians [@Jord:Lin:stat:2014]. Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the ability of standard software tools to manage and analyze [e.g., @Snij:Matz:Reip:big:2012]. The origin of the term “big data” as it is understood today has been traced back in a recent study [@Dieb:pers:2012] to lunch-table conversations at Silicon Graphics in the mid-1990s, in which John Mashey figured prominently [@Mash:big:1998]. Big data are generated by countless online interactions among people, transactions between people and systems, and sensor-enabled machinery. 
Internet search engines (e.g., Google and YouTube) and social network tools (e.g., Facebook and Twitter) generate billions of activity records per day. Nowadays, the data produced are measured in zettabytes rather than gigabytes and terabytes, and are growing about 40% every year [@Fan:Bife:mini:2013]. In the big data analytics world, a 3V definition by @Lane:3-D:2001 is widely accepted: volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources). High variety brings nontraditional or even unstructured data types, such as social network sentiments and internet map usage, which call for new, creative ways to understand the structure of data and even to ask intelligent research questions [e.g., @Jord:Lin:stat:2014]. High volume and high velocity may bring noise accumulation, spurious correlation, and incidental endogeneity, creating issues in computational feasibility and algorithmic stability [@Fan:Han:Liu:chal:2014]. Notwithstanding that new statistical thinking and methods are needed for the high variety aspect of big data, our focus is on fitting standard statistical models to big data whose size exceeds the capacity of a single computer because of its high volume and high velocity. There are two computational barriers to big data analysis: 1) the data can be too big to hold in a computer’s memory; and 2) the computing task can take too long to wait for the results. These barriers can be overcome with newly developed statistical methodologies, computational methodologies, or both. Despite the impression that statisticians are left behind in media discussions or governmental summits on big data, some statisticians have made important contributions and are pushing the frontier. Sound statistical procedures that are computationally scalable to massive datasets have been proposed [@Jord:on:2013]. 
Examples are subsampling-based approaches [@Klei:Talw:Sark:Jord:scal:2014; @Ma:Maho:Yu:stat:2013; @Lian:etal:resa:2013; @Macl:Adams:fire:2014], divide and conquer approaches [@Lin:Xi:aggr:2011; @Chen:Xie:spli:2014; @Song:Lian:spli:2014; @Neis:Wang:Xing:asym:2013], and online updating approaches [@Schifano2015]. From a computational perspective, much effort has been put into the most active, open source statistical environment, R [@R]. Statistician developers are relentless in their drive to extend the reach of R into big data [@Rick:stat:2013]. Recent UseR! conferences had many presentations that directly addressed big data, including a 2014 keynote lecture by John Chambers, the inventor of the S language [@Cham:inte:2014]. Most cutting edge methods are first and easily implemented in R. Given the open source nature of R and the active recent development, our focus on software for big data will be on R and R packages. Revolution R Enterprise (RRE) is a commercialized version of R, but it offers free academic use, so it is also included in our case study and benchmarked. Other commercial software products will be briefly touched upon for completeness. The rest of the article is organized as follows. Recent methodological developments in statistics on big data are summarized in Section \[sec:meth\]. Updating formulas for commonly used variable selection criteria in the online setting are developed and their performances studied in a simulation study in Section \[sec:varsel\]. Resources from open source software for analyzing big data with classical models are summarized in Section \[sec:R\]. Commercial software products are presented in Section \[sec:comm\]. A case study on a logistic model for the chance of airline delay is presented in Section \[sec:case\]. A discussion concludes in Section \[sec:disc\]. 
Statistical Methods {#sec:meth} =================== The recent methodologies for big data can be loosely grouped into three categories: subsampling-based, divide and conquer, and online updating. To put the different methods in context, consider a dataset with $n$ independent and identically distributed observations, where $n$ is too big for standard statistical routines such as logistic regression. Subsampling-Based Methods {#sect:stat:resamp} ------------------------- ### Bag of Little Bootstraps @Klei:Talw:Sark:Jord:scal:2014 proposed the bag of little bootstraps (BLB) approach that provides both point estimates and quality measures such as variance or confidence intervals. It is a combination of subsampling [@Poli:Roma:Wolf:subs:1999], the $m$-out-of-$n$ bootstrap [@Bick:Gotz:van:resa:1997], and the bootstrap [@Efro:boot:1979], designed to achieve computational efficiency. BLB consists of the following steps. First, draw $s$ subsamples of size $m$ from the original data of size $n$. For each of the $s$ subsets, draw $r$ bootstrap samples of size $n$ instead of $m$, and obtain the point estimates and their quality measures (e.g., confidence intervals) from the $r$ bootstrap samples. Then, the $s$ bootstrap point estimates and quality measures are combined (e.g., by averaging) to yield the overall point estimates and quality measures. In summary, BLB has two nested procedures: the inner procedure applies the bootstrap to a subsample, and the outer procedure combines these multiple bootstrap estimates. The subsample size $m$ was suggested to be $n^{\gamma}$ with $\gamma \in [0.5, 1]$ [@Klei:Talw:Sark:Jord:scal:2014], a much smaller number than $n$. Although the inner bootstrap procedure conceptually generates multiple resampled datasets of size $n$, all that is needed in storage and computation is a sample of size $m$ with a weight vector. 
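The BLB steps can be sketched numerically. The snippet below is a minimal illustration for the sample mean (the paper's software discussion centers on R; Python with NumPy is used here for concreteness); the function name `blb_mean_ci` and all tuning values are illustrative, and the multinomial-weight trick realizes the size-$n$ inner bootstrap while only ever storing $m$ values per subsample.

```python
import numpy as np

rng = np.random.default_rng(0)

def blb_mean_ci(x, gamma=0.7, s=20, r=100, alpha=0.05):
    """BLB for the sample mean: averaged point estimate and CI width.

    Each of the s subsamples has size m = n**gamma; the inner bootstrap
    draws size-n resamples implicitly via multinomial weights, so only
    m values are ever held per subsample.
    """
    n = len(x)
    m = int(n ** gamma)
    centers, widths = [], []
    for _ in range(s):
        sub = rng.choice(x, size=m, replace=False)
        stats = []
        for _ in range(r):
            w = rng.multinomial(n, np.full(m, 1.0 / m))  # weights sum to n
            stats.append(np.dot(w, sub) / n)             # weighted mean
        lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        centers.append(np.mean(stats))
        widths.append(hi - lo)
    # Outer step: average the s per-subsample estimates and CI widths.
    return float(np.mean(centers)), float(np.mean(widths))

x = rng.normal(loc=2.0, scale=1.0, size=100_000)
est, width = blb_mean_ci(x)
```

With $n = 10^5$ and $\gamma = 0.7$, each subsample holds only $m \approx 3162$ values, yet the averaged interval width approximates that of a full size-$n$ bootstrap.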
In contrast to subsampling and the $m$-out-of-$n$ bootstrap, there is no need for an analytic correction (e.g., $\sqrt{m/n}$) to rescale the confidence intervals in the final result. The BLB procedure facilitates distributed computing by letting each subsample of size $m$ be processed by a separate processor. @Klei:Talw:Sark:Jord:scal:2014 proved the consistency of BLB and established higher order correctness. Their simulation study showed good accuracy, fast convergence, and remarkable computational efficiency. ### Leveraging @Ma:Sun:leve:2015 proposed to use leveraging to facilitate scientific discoveries from big data using limited computing resources. In a leveraging method, one samples a small proportion of the data with certain weights (a subsample) from the full sample, and then performs the intended computations for the full sample using the small subsample as a surrogate. The key to the success of leveraging methods is to construct the weights, the nonuniform sampling probabilities, so that influential data points are sampled with high probabilities [@Ma:Maho:Yu:stat:2013]. Leveraging methods are different from the traditional subsampling or $m$-out-of-$n$ bootstrap in that 1) they are used to achieve feasible computation even when simple analytic results are available; 2) they enable visualization of the data when visualization of the full sample is impossible; and 3) they usually use unequal sampling probabilities for subsampling the data. This approach is quite unique in allowing pervasive access to extract information from big data without resorting to high performance computing. ### Mean Log-likelihood @Lian:etal:resa:2013 proposed a resampling-based stochastic approximation approach with an application to big geostatistical data. The method uses Monte Carlo averages calculated from subsamples to approximate the quantities needed for the full data. 
Motivated by minimizing the Kullback–Leibler (KL) divergence, they approximate the KL divergence by averages calculated from subsamples. This leads to a maximum mean log-likelihood estimation method. The solution to the mean score equation is obtained from a stochastic approximation procedure, where at each iteration, the current estimate is updated based on a subsample of size $m$ drawn from the full data. As $m$ is much smaller than $n$, the method is scalable to big data. @Lian:etal:resa:2013 established the consistency and asymptotic normality of the resulting estimator under mild conditions. In a simulation study, the convergence rate of the method was almost independent of $n$, the sample size of the full data. ### Subsampling-Based MCMC As a popular general purpose tool for Bayesian inference, Markov chain Monte Carlo (MCMC) for big data is challenging because of the prohibitive cost of evaluating the likelihood of every datum at every iteration. @Lian:Kim:boot:2013 extended the mean log-likelihood method to a bootstrap Metropolis–Hastings (MH) algorithm in MCMC. The likelihood ratio of the proposal and the current estimate in the MH ratio is replaced with an approximation from the mean log-likelihood based on $k$ bootstrap samples of size $m$. The algorithm can be implemented exploiting the embarrassingly parallel structure and avoids repeated scans of the full dataset across iterations. @Macl:Adams:fire:2014 proposed an auxiliary variable MCMC algorithm called Firefly Monte Carlo (FlyMC) that only queries the likelihoods of a potentially small subset of the data at each iteration yet simulates from the exact posterior distribution. For each data point, a binary auxiliary variable and a strictly positive lower bound of the likelihood contribution are introduced. The binary variable for each datum effectively turns data points on and off in the posterior, hence the “firefly” name. 
The probability of turning on each datum depends on the ratio of its likelihood contribution and the introduced lower bound. The computational gain depends on the lower bound being tight enough and on simulation of the auxiliary variables being cheap enough. Because of the need to hold the whole data in computer memory, the size of the data this method can handle is limited. The pseudo-marginal Metropolis–Hastings algorithm replaces the intractable target (posterior) density in the MH algorithm with an unbiased estimator [@Andr:Robe:pseu:2009]. The log-likelihood is estimated by an unbiased subsampled version, and an unbiased estimator of the likelihood is obtained by correcting the bias of the exponentiation of this estimator. @Quir:Vill:Kohn:spee:2014 proposed subsampling the data using probability proportional-to-size (PPS) sampling to obtain an approximately unbiased estimate of the likelihood, which is used in the MH acceptance step. The subsampling approach was further improved in @Quir:Vill:Kohn:scal:2015 using the efficient and robust difference estimator from the survey sampling literature. Divide and Conquer {#sect:stat:dc} ------------------ A divide and conquer algorithm (which may appear under other names such as divide and recombine, split and conquer, or split and merge) generally has three steps: 1) partition a big dataset into $K$ blocks; 2) process each block separately (possibly in parallel); and 3) aggregate the solutions from the blocks to form a final solution for the full data. ### Aggregated Estimating Equations For a linear regression model, the least squares estimator of the regression coefficient $\beta$ for the full data can be expressed as a weighted average of the least squares estimators from the blocks, with the weight of each block being the inverse of its estimated variance matrix. 
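For the linear model, this weighted average can be verified in a few lines. In the Python sketch below (sizes illustrative), each block contributes its estimate weighted by its slope matrix ${\bf X}_k'{\bf X}_k$, and the combined estimator coincides exactly with the full-data least squares solution.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p, K = 120_000, 4, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# Steps 1-2: per-block least squares; only the (slope matrix, estimate)
# pair of each block is retained.
V_sum = np.zeros((p, p))
A_sum = np.zeros(p)
for idx in np.array_split(np.arange(n), K):
    Xk, yk = X[idx], y[idx]
    Vk = Xk.T @ Xk                          # block weight (slope matrix)
    bk = np.linalg.solve(Vk, Xk.T @ yk)     # block estimate
    V_sum += Vk
    A_sum += Vk @ bk

# Step 3: weighted average; for the linear model this equals the
# full-data least squares solution exactly.
beta_comb = np.linalg.solve(V_sum, A_sum)
beta_full = np.linalg.solve(X.T @ X, X.T @ y)
```

The exact agreement here rests on the linearity of the score equations; the Taylor-expansion argument below extends the same combination rule to nonlinear estimating equations.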
The success of this method for linear regression depends on the linearity of the estimating equations in $\beta$ and on the fact that the estimating equation for the full data is a simple summation of those for all the blocks. For general nonlinear estimating equations, @Lin:Xi:aggr:2011 proposed a linear approximation of the estimating equations with a Taylor expansion at the solution in each block, thereby reducing the nonlinear estimating equations to the linear case so that the solutions from all the blocks can be combined by a weighted average. The weight of each block is the slope matrix of the estimating function at the solution in that block, which is the Fisher information or the inverse of the variance matrix if the equations are score equations. @Lin:Xi:aggr:2011 showed that, under certain technical conditions including $K = O(n^{\gamma})$ for some $\gamma \in (0, 1)$, the aggregated estimator has the same limit as the estimator based on the full data. ### Majority Voting @Chen:Xie:spli:2014 considered a divide and conquer approach for generalized linear models (GLM) where both the sample size $n$ and the number of covariates $p$ are large, by incorporating variable selection via penalized regression into the subset processing step. More specifically, for $p$ bounded or increasing to infinity slowly ($p_n$ not faster than $o(e^{n_k})$, while the model size may increase at a rate of $o(n_k)$), they proposed to first randomly split the data of size $n$ into $K$ blocks (of size $n_k=O(n/K)$). In step 2, penalized regression is applied to each block separately with a sparsity-inducing penalty function satisfying certain regularity conditions. This approach can lead to differential variable selection among the blocks, as different blocks of data may result in penalized estimates with different non-zero regression coefficients. Thus, in step 3, the results from the $K$ blocks are combined by majority vote to create a combined estimator. 
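A toy Python sketch of the three steps follows; note that a hard-thresholded per-block OLS fit merely stands in for the penalized regression of the paper, and all sizes and the threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n, p, K = 50_000, 6, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, -1.5, 0.0, 0.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

votes = np.zeros(p, dtype=int)
for idx in np.array_split(np.arange(n), K):         # step 1: K blocks
    Xk, yk = X[idx], y[idx]
    bk = np.linalg.lstsq(Xk, yk, rcond=None)[0]     # step 2: per-block fit
    # Stand-in selector: hard threshold on per-block OLS coefficients
    # (the paper uses penalized regression with a sparsity penalty).
    se = 1.0 / np.sqrt(len(idx))                    # approx. std. error, unit design
    votes += (np.abs(bk) > 3 * se).astype(int)

selected = np.where(votes > K / 2)[0]               # step 3: majority vote
```

Real effects are flagged in essentially every block, while a spurious selection in one block is voted down by the others.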
The implicit assumption is that real effects should be found persistently and therefore should be present even under perturbation by subsampling [e.g. @meinsh.buhl:2010]. The derivation of the combined estimator in step 3 stems from ideas for combining confidence distributions in meta-analysis [@singh.xie.straw:2005; @xie.singh.straw:2011], where one can think of the $K$ blocks as $K$ independent and separate analyses to be combined in a meta-analysis. The authors show under certain regularity conditions that their combined estimator in step 3 is model selection consistent, asymptotically equivalent to the penalized estimator that would result from using all of the data simultaneously, and achieves the oracle property when it is attainable for the penalized estimator from each block [see e.g., @fan.lv:2011]. They additionally establish an upper bound for the expected number of incorrectly selected variables and a lower bound for the expected number of correctly selected variables. ### Screening with Ultrahigh Dimension Instead of dividing the data into blocks of observations in step 1, @Song:Lian:spli:2014 proposed a split-and-merge (SAM) method that divides the data into subsets of covariates for variable selection in ultrahigh dimensional regression from the Bayesian perspective. This method is particularly suited for big data where the number of covariates $P_n$ is much larger than the sample size $n$, $P_n \gg n$, and possibly increasing with $n$. In step 2, Bayesian variable selection is separately performed on each lower dimensional subset, which facilitates parallel processing. In step 3, the selected variables from each subset are aggregated, and Bayesian variable selection is applied on the aggregated data. The embarrassingly parallel structure in step 2 makes the SAM method applicable to big data problems with millions or more predictors. 
Posterior consistency is established for correctly specified models and for misspecified models, the latter of which is necessary because it is quite likely that some true predictors are missing. With correct model specification, the true covariates will be identified as the sample size becomes large; under misspecified models, all predictors correlated with the response variable will be identified. Compared with the sure independence screening (SIS) approach [@Fan:Lv:sure:2008], the method uses the joint information of multiple predictors in predictor screening, while SIS only uses the marginal information of each predictor. Their numerical results show that the SAM approach outperforms competing methods for ultrahigh dimensional regression. ### Parallel MCMC In the Bayesian framework, it is natural to partition the data into $K$ subsets and run parallel MCMC on each of them. The prior distribution for each subset is often obtained by taking the $1/K$ power of the prior distribution for the whole data in order to preserve the total amount of prior information (which may affect the propriety of the prior). MCMC is run independently on each subset with no communication between subsets (and is thus embarrassingly parallel), and the resulting samples are combined to approximate samples from the full data posterior distribution. @Neis:Wang:Xing:asym:2013 proposed to use kernel density estimators of the posterior density for each data subset, and to estimate the full data posterior by multiplying the subset posterior densities together. This method is asymptotically exact in the sense of converging as the number of MCMC iterations grows. @Wang:etal:para:2015 replaced the kernel estimator of @Neis:Wang:Xing:asym:2013 with a random partition tree histogram, which uses the same block partition across all terms in the product representation of the posterior to control the number of terms in the approximation such that it does not explode with $m$. 
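In a Gaussian toy case the combination of subset posteriors is available in closed form, which allows a compact numerical check. In the Python sketch below (direct draws stand in for subset MCMC chains; all sizes are illustrative), $K$ Gaussian subposteriors are combined by draw-wise precision weighting; for Gaussians this coincides with sampling from the product of the subposterior densities.

```python
import numpy as np

rng = np.random.default_rng(5)

# Normal mean with known variance and a flat prior: each of the K subset
# posteriors is Gaussian, so direct draws stand in for subset MCMC chains.
mu_true, sigma, n, K, T = 0.7, 2.0, 100_000, 10, 5_000
x = rng.normal(mu_true, sigma, size=n)

draws = np.empty((K, T))
weights = np.empty(K)
for k, idx in enumerate(np.array_split(np.arange(n), K)):
    xk = x[idx]
    draws[k] = rng.normal(xk.mean(), sigma / np.sqrt(len(idx)), size=T)
    weights[k] = 1.0 / draws[k].var()       # estimated precision of subset chain

# Draw-by-draw precision-weighted average; for Gaussian subposteriors this
# samples exactly from the product of the K subposterior densities.
consensus = (weights[:, None] * draws).sum(axis=0) / weights.sum()
```

The combined draws have mean near $\bar{x}$ and spread near $\sigma/\sqrt{n}$, matching the full-data posterior even though no node ever saw more than $n/K$ observations.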
@Scot:etal:baye:2013 proposed a consensus Monte Carlo algorithm, which produces the approximated full data posterior using weighted averages over the subset MCMC samples. The weight used (for Gaussian models) for each subset is the inverse of the variance-covariance matrix of the MCMC samples. The method is effective when the posterior is close to Gaussian, but it may cause bias when the distribution is skewed or multimodal. The consensus Monte Carlo principle is approached from a variational perspective by @Robi:Ange:Jord:vari:2015. The embarrassingly parallel feature of these methods facilitates their implementation in the MapReduce framework that exploits the division and recombination strategy [@Dean:Ghem:mapr:2008]. The final recombination step is implemented in the R package `parallelMCMCcombine` [@Miro:Conl:para:2014]. Going beyond embarrassingly parallel MCMC remains challenging because of storage issues and communication overheads. General strategies for parallel MCMC such as the multiple-proposal MH algorithm [@Cald:gene:2014] and population MCMC [@Song:Wu:Lian:weak:2014] mostly require the full data at each node. Online Updating for Stream Data ------------------------------- In some applications, data come in streams or large chunks, and a sequentially updated analysis is desirable without storing the data. Motivated from a Bayesian inference perspective, @Schifano2015 extended the work of @Lin:Xi:aggr:2011 in a few important ways. First, they introduce divide-and-conquer-type variance estimates of regression parameters in the linear model and estimating equation settings. These estimates of variability allow users to make inferences about the true regression parameters based upon the previously developed divide-and-conquer point estimates of the regression parameters. Second, they develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. 
Thus, while the divide-and-conquer setting is quite amenable to parallel processing for each subset, the online-updating approach for data streams is inherently sequential in nature. Their algorithms were designed to be computationally efficient and minimally storage-intensive, as they assume no access/storage of the historical data. Third, the authors address the issue of possible rank deficiencies when dealing with blocks of data, and the uniqueness properties of the combined and cumulative estimators when using a generalized inverse. The authors also provide methods for assessing goodness of fit in the linear model setting, as standard residual-based diagnostics cannot be performed with the cumulative data without access to historical data. Instead, they propose outlier tests relying on predictive residuals, which are based on the predictive values computed from the cumulative estimate of the regression coefficients attained at the previous accumulation point. Additionally, they introduce a new online-updated estimator of the regression coefficients and corresponding estimator of the standard error in the estimating equation setting that takes advantage of information from the previous data. They show theoretically that this new estimator, the cumulative updated estimating equation (CUEE) estimator, is asymptotically consistent, and show empirically that the CUEE estimator is less biased in their finite sample simulations than the cumulatively estimated version of the estimator of @Lin:Xi:aggr:2011. Criterion-Based Variable Selection with Online Updating {#sec:varsel} ======================================================= To the best of our knowledge, criterion-based variable selection has not yet been considered in the online updating context. This problem is well worth investigating especially when access/storage of the historical data is limited. 
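Before the formal development, a minimal Python sketch of online updating in the linear model shows the key point: retaining only the running sufficient statistics $({\bf X}'{\bf X}, {\bf X}'{\bf y})$ reproduces the full-data least squares estimate at every accumulation point, with no access to historical data (sizes illustrative).

```python
import numpy as np

rng = np.random.default_rng(6)

p = 3
beta_true = np.array([1.0, -0.5, 2.0])
V = np.zeros((p, p))    # running X'X
A = np.zeros(p)         # running X'y

# Stream 50 blocks; after each block only (V, A) are kept and the raw
# block is discarded, yet the cumulative estimate equals the full-data
# least squares solution at every accumulation point.
for k in range(50):
    Xk = rng.normal(size=(200, p))
    yk = Xk @ beta_true + rng.normal(size=200)
    V += Xk.T @ Xk
    A += Xk.T @ yk
    beta_k = np.linalg.solve(V, A)          # cumulative estimate through block k
```

The updating formulas developed next generalize this idea to every candidate submodel, so that selection criteria can also be updated on the fly.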
Suppose that we have $K$ blocks of data in a sequence with ${{\bf{Y}}}_k$, ${{\bf{X}}}_k$, and $n_k$ being the $n_k$-dimensional vector of responses, the $n_k \times (p + 1)$ matrix of covariates, and the sample size, respectively, for the $k$th block, $k = 1, \ldots, K$, such that ${{\bf{Y}}}= ({{\bf{Y}}}'_1, {{\bf{Y}}}'_2, \dots, {{\bf{Y}}}'_K )'$ and ${{\bf{X}}}= ({{\bf{X}}}'_1, \dots, {{\bf{X}}}'_K)'$. Consider the standard linear regression model for the whole data with sample size $n = \sum_{k=1}^K n_k$, $${{\bf{Y}}}= {{\bf{X}}}{{\mbox{\boldmath $\beta$}}}+ {{\mbox{\boldmath $\epsilon$}}},$$ where ${{\mbox{\boldmath $\beta$}}}$ is the regression coefficient vector, and ${{\mbox{\boldmath $\epsilon$}}}$ is a normal random vector with mean zero and variance $\theta I_n$. Let $\mathcal{M}$ denote the model space. We enumerate the models in $\mathcal{M}$ by $m=1, 2, \ldots, 2^p$, where $2^p$ is the size of $\mathcal{M}$. For the full model, the least squares estimate of ${{\mbox{\boldmath $\beta$}}}$ and the sum of squared errors based on the $k$th subset are given by $\hat{{{\mbox{\boldmath $\beta$}}}}_{n_k,k}=({{\bf{X}}}'_k{{\bf{X}}}_k)^{-}{{\bf{X}}}'_k{{\bf{Y}}}_k$ and ${\mathsf{SSE}}_{n_k, k}$, respectively. In the sequential setting, we only need to store and update the cumulative estimates at each $k$ [see, e.g., @Schifano2015]. Let ${{\mbox{\boldmath $\beta$}}}^{(m)}_{k}=(\beta^{(m)}_{0}, \beta^{(m)}_{1}, \ldots, \beta^{(m)}_{{p_m}})'$ and ${\mathsf{SSE}}^{(m)}_{k}$ denote the cumulative estimates based on all data through subset $k$ for model $m$, where $p_m$ is the number of covariates in model $m$. We further introduce the $(p + 1) \times (p_m +1)$ selection matrix $P^{(m)} = (e_{m_0}, e_{m_1}, \dots, e_{m_{p_m}})$, where $e_{m_0}$ is a vector of length $(p+1)$ with first element 1 and 0 elsewhere, and $e_{m_j}$ denotes a vector of length $(p+1)$ with 1 in the $m_{j}$th position and 0 in every other position for all $j > 0$. 
Here $(m_1, \ldots, m_{p_m})$ are not necessarily in sequence, but represent the indices of the selected variables in the full design matrix ${{\bf{X}}}_k$. Define ${{\bf{X}}}^{(m)}_{k} = {{\bf{X}}}_kP^{(m)}$. Update the $(p_m + 1) \times (p_m + 1)$ matrix $$V^{(m)}_{k}= {{\bf{X}}}^{(m)'}_{k} {{\bf{X}}}^{(m)}_{k} + V^{(m)}_{k-1},$$ where $V^{(m)}_0=0$, and the $(p_m + 1) \times 1$ vector $$A^{(m)}_k = {{\bf{X}}}^{(m)'}_{k}{{\bf{X}}}_k\hat{{{\mbox{\boldmath $\beta$}}}}_{n_k,k}+V^{(m)}_{k-1}\hat{{{\mbox{\boldmath $\beta$}}}}^{(m)}_{k-1},$$ where $\hat{{{\mbox{\boldmath $\beta$}}}}^{(m)}_{0} = 0$. After some algebra, we have $$\hat{{{\mbox{\boldmath $\beta$}}}}^{(m)}_{k} = (V^{(m)}_k)^{-1} A^{(m)}_k,$$ and $$\begin{aligned} {\mathsf{SSE}}^{(m)}_{k} &= {\mathsf{SSE}}_{n_k,k}+\hat{{{\mbox{\boldmath $\beta$}}}}_{n_k,k}'{{\bf{X}}}'_k{{\bf{X}}}_k\hat{{{\mbox{\boldmath $\beta$}}}}_{n_k,k}+\hat{{{\mbox{\boldmath $\beta$}}}}^{(m)'}_{k-1}V^{(m)}_{k-1}\hat{{{\mbox{\boldmath $\beta$}}}}^{(m)}_{k-1} \\ & \quad - \hat{{{\mbox{\boldmath $\beta$}}}}^{(m)'}_{k}V^{(m)}_{k}\hat{{{\mbox{\boldmath $\beta$}}}}^{(m)}_{k} + {\mathsf{SSE}}^{(m)}_{k-1}.\end{aligned}$$ With $\theta$ unknown, letting $$\begin{aligned} B_k^{(m)} & = n \log \frac{2\pi {\mathsf{SSE}}^{(m)}_{k}}{n - p_m - 1},\end{aligned}$$ the Akaike information criterion (AIC) and Bayesian information criterion (BIC) are updated by $$\begin{aligned} {\mathsf{AIC}}_k^{(m)} &= B_k^{(m)} + n + p_m + 1,\\ {\mathsf{BIC}}_k^{(m)} &= B_k^{(m)} + n - p_m - 1 + (p_{m}+1)\log n.\end{aligned}$$ To study the Bayesian variable selection criteria, assume a joint conjugate prior for $({{\mbox{\boldmath $\beta$}}}^{(m)},{\theta}^{(m)})$ as follows: ${{\mbox{\boldmath $\beta$}}}^{(m)} | {\theta}^{(m)}$ follows a normal distribution with mean ${{\mbox{\boldmath $\mu$}}}_0$ and precision matrix ${{\bf{V}}}_0$, and ${\theta}^{(m)}$ follows an inverse gamma distribution with shape parameter $\nu_0/2$ and scale parameter $\tau_0/2$, i.e., $$\begin{aligned} \pi( {{\mbox{\boldmath 
$\beta$}}}^{(m)},{\theta}^{(m)}| & {{\mbox{\boldmath $\mu$}}}_0, {{\bf{V}}}_0, \nu_0, \tau_0) \\ & = \pi({{\mbox{\boldmath $\beta$}}}^{(m)}|{\theta}^{(m)},{{\mbox{\boldmath $\mu$}}}_0, {{\bf{V}}}_0) \pi({\theta}^{(m)}|\nu_0,\tau_0),\end{aligned}$$ where ${{\mbox{\boldmath $\mu$}}}_0$ is a prespecified $(p_m+1)$-dimensional vector, ${{\bf{V}}}_0$ is a $(p_m+1)\times (p_m+1)$ positive definite matrix, $\nu_0 > 0$, and $\tau_0 > 0$. It can be shown that the deviance information criterion (${\mathsf{DIC}}$) [@Spie:etal:baye:2002] is updated by $${\mathsf{DIC}}_k^{(m)} = n \log\frac{\pi(n-2) {\mathsf{SSE}}_k^{(m)}}{2} + 2n\psi(\frac{n}{2}) + 2p_m +n + 4,$$ where $\psi(x)={\mathrm{d}\!}\log \Gamma(x) / {\mathrm{d}\!}x$ is the digamma function. We examined the performance of AIC, BIC, and DIC under the online updating scenario in a simulation study. Each dataset was generated from the linear model $y_i= {{\bf{x}}}_i'{{\mbox{\boldmath $\beta$}}}+ \epsilon_i,$ where the $\epsilon_i$’s were independently generated from $N(0, 100)$, and the ${{\bf{x}}}_i = (1, x_{i1}, x_{i2}, x_{i3}, x_{i4})'$ were identically distributed random vectors from a multivariate normal distribution with mean $(1, 0, 0, 0, 0)$ and marginal variances $(0, 16, 9, 0.3, 3)$. Two correlation structures of $(x_{i1}, x_{i2}, x_{i3}, x_{i4})$ were considered: 1) independent and 2) AR(1) with correlation coefficient 0.9. Four different models, as determined by which components of ${{\mbox{\boldmath $\beta$}}}$ are nonzero, were considered: $(-1, 3, 0, 0, 0)$, $(-1, 3, 0, -1.5, 0)$, $(-1, 3, 2, -1.5, 0)$, and $(-1, 3, 2, -1.5, 1)$. The corresponding signal-to-noise ratios were 1.44, 1.45, 1.81, and 1.83 in the independent case and 1.44, 1.29, 2.85, and 3.33 in the dependent case. The sample size of each block was set to $n_k=100$. The performance of the criteria was investigated with the cumulative estimates at block $k \in \{2, 25, 100\}$. For each scenario, 10,000 independent datasets were generated. 
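A scaled-down Python sketch of this setup follows; it streams blocks, keeps only per-model sufficient statistics, and selects a model by a textbook BIC (not the exact updating formulas above, which also handle generalized inverses and the Bayesian criteria). All sizes and the choice of BIC are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)

# Stream blocks of size 100; for every candidate model m, keep only
# (V_m, A_m) plus the scalars y'y and n -- enough to recover SSE_m and
# BIC_m at any accumulation point without revisiting historical data.
p = 4
beta_true = np.array([-1.0, 3.0, 0.0, -1.5, 0.0])    # intercept + 4 covariates
models = [(0,) + tuple(1 + j for j in sub)           # always keep the intercept
          for r in range(p + 1)
          for sub in itertools.combinations(range(p), r)]
V = {m: np.zeros((len(m), len(m))) for m in models}
A = {m: np.zeros(len(m)) for m in models}
yty, n = 0.0, 0

for k in range(100):                                 # k = 1, ..., 100 blocks
    Xk = np.hstack([np.ones((100, 1)), rng.normal(size=(100, p))])
    yk = Xk @ beta_true + rng.normal(scale=2.0, size=100)
    yty += yk @ yk
    n += 100
    for m in models:
        Xm = Xk[:, m]
        V[m] += Xm.T @ Xm
        A[m] += Xm.T @ yk

def bic(m):
    b = np.linalg.solve(V[m], A[m])
    sse = yty - b @ V[m] @ b                         # cumulative SSE for model m
    return n * np.log(sse / n) + len(m) * np.log(n)  # textbook BIC, up to constants

best = min(models, key=bic)
```

With intercept, $x_1$, and $x_3$ truly active, the cumulative BIC picks out the true model once enough blocks have accumulated.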
[Table \[tab:sim\]: percentages of candidate models selected by AIC, BIC, and DIC with cumulative estimates at block $k \in \{2, 25, 100\}$, under independent and AR(1) correlated covariates, for each of the four true models; the row for the true model is set in bold.] The percentages of models selected among the $2^4$ candidate models by each of the three criteria are summarized in Table \[tab:sim\]; the entire row in bold represents the true model. Based on the simulation results, BIC performs extremely well when the number of blocks ($k$) is large, which is consistent with the known result that the probability of selecting the true model by BIC approaches 1 as $n \rightarrow \infty$ [e.g., @Schw:esti:1978; @Nish:asym:1984]. BIC also performs better than AIC and DIC when the covariates are independent, even for small sample sizes. When the covariates are highly dependent, AIC and DIC provide more reliable results for small sample sizes. The performance of AIC and DIC is always very similar. The simulation results also confirm the known result that AIC is not consistent [e.g., @Wood:on:1982]. 
In the big data setting with a large sample size, BIC is generally preferable, especially when the covariates are not highly correlated.

Open Source R and R Packages {#sec:R}
============================

Handling big data is one of the topics of high performance computing. As the most popular open source statistical software, R, together with its add-on packages, provides a wide range of high performance computing tools; see the Comprehensive R Archive Network (CRAN) task view on “High-Performance and Parallel Computing with R” [@Edde:cran:2014]. The focus of this section is on how to break the computer memory barrier and the computing power barrier in the context of big data.

Breaking the Memory Barrier {#sec:R:mem}
---------------------------

The size of big data is relative to the available computing resources. The theoretical limit of random access memory (RAM) is determined by the width of memory addresses: 4 gigabytes (GB) ($2^{32}$ bytes) for a 32-bit computer and 16.8 million terabytes ($2^{64}$ bytes) for a 64-bit computer. In practice, however, the latter is limited by the physical space of a computer case, the operating system, and specific software. Individual objects in R have size limits too; a user can hardly work with any object whose size is close to that limit. @Emer:Kane:dont:2012 suggested that a data set be considered *large* if it exceeds 20% of the RAM on a given machine and *massive* if it exceeds 50%, in which case even the simplest calculation would consume all the remaining RAM. The memory barrier can be broken with external memory algorithms (EMAs) [e.g., @Vitt:exte:2001], which conceptually work by storing the data on disk storage (which has a much greater limit than RAM) and processing one chunk of them at a time in RAM [e.g., @Rpkg:biglm]. The results from each chunk are saved or updated, and the process continues until the entire dataset is exhausted; then, if needed as in an iterative algorithm, the process is reset to the beginning of the data.
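The chunked accumulate-and-update loop at the heart of an EMA can be sketched as follows (a language-agnostic illustration, written here in Python for concreteness; the helper names are hypothetical):

```python
# A sketch of an external memory algorithm (EMA): statistics are accumulated
# chunk by chunk, so the full data never reside in memory at once.
# `chunks` is a hypothetical stand-in for reading successive chunks from disk.

def chunks(data, size):
    for i in range(0, len(data), size):
        yield data[i:i + size]

def streaming_mean(data, chunk_size=4):
    total, count = 0.0, 0
    for chunk in chunks(data, chunk_size):  # one chunk in RAM at a time
        total += sum(chunk)                 # update the accumulators
        count += len(chunk)
    return total / count

x = list(range(1, 101))       # pretend this lives on disk
print(streaming_mean(x))      # 50.5
```

An iterative algorithm would wrap this loop in an outer loop, resetting the chunk reader at the start of each pass.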
To implement an EMA for each statistical function, one needs to address 1) data management and 2) numerical calculation.

### Data Management

Earlier solutions to oversized data resorted to relational databases. This approach depends on an external database management system (DBMS) such as MySQL, PostgreSQL, SQLite, Oracle, and others. Interfaces to DBMSs are provided through many R packages such as [[sqldf]{}]{} [@Rpkg:sqldf], [[DBI]{}]{} [@Rpkg:dbi], [[RSQLite]{}]{} [@Rpkg:rsqlite], and others. The database approach requires a DBMS to be installed and maintained, along with knowledge of the structured query language (SQL); an exception for simpler applications is package [[filehash]{}]{} [@Peng:inte:2006], which comes with a simple key-value database implementation of its own. The numerical functionality of SQL is quite limited, and calculations for most statistical analyses require copying subsets of the data into R objects, facilitated by the interfaces. Extracting chunks from an external DBMS is computationally much less efficient than the more recent approaches discussed below [@Kane:Emer:West:scal:2013]. Two packages, [[bigmemory]{}]{} [@Kane:Emer:West:scal:2013] and [[ff]{}]{} [@Rpkg:ff], provide data structures for massive data while retaining the look and feel of R objects. Package [[bigmemory]{}]{} defines a data structure, big.matrix, for numeric matrices which uses memory-mapped files to allow matrices to exceed the RAM size on computers with 64-bit operating systems. The underlying technology is memory mapping on modern operating systems, which associates a segment of virtual memory in a one-to-one correspondence with the contents of a file. These files are accessed at a much faster speed than in the database approaches because operations are handled at the operating-system level.
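The memory-mapping mechanism itself is not specific to R; a minimal sketch using Python's standard `mmap` module (file name and sizes made up for illustration) shows how a flat binary file can be accessed element-wise without loading it into RAM:

```python
# Sketch of the memory-mapping idea: a flat binary file of doubles is mapped
# into virtual memory and accessed element-wise, without reading it all into
# RAM. The file name and sizes are made up for illustration.
import mmap
import os
import struct
import tempfile

path = os.path.join(tempfile.mkdtemp(), "matrix.bin")
n = 1000
with open(path, "wb") as f:                  # write n doubles to a flat file
    f.write(struct.pack(f"{n}d", *range(n)))

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)            # map the file into virtual memory
    (x42,) = struct.unpack_from("d", mm, 42 * 8)  # random access: 43rd element
    mm.close()
print(x42)                                   # 42.0
```

The operating system pages only the touched portions of the file into physical memory, which is why such access is much faster than round trips to a DBMS.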
The big.matrix structure has several advantages, such as support of shared memory for efficiency in parallel computing, reference behavior that avoids unnecessary temporary copies of massive objects, and a column-major format that is compatible with legacy linear algebra packages (e.g., BLAS, LAPACK) [@Kane:Emer:West:scal:2013]. The package provides a utility to read a csv file into a big.matrix object, but it allows only one type of data, numeric; it is a numeric matrix after all. Package [[ff]{}]{} provides data structures that are stored in binary flat files but behave (almost) as if they were in RAM by transparently mapping only a section (pagesize) of metadata into main memory. Unlike [[bigmemory]{}]{}, it supports R’s standard atomic data types (e.g., double or logical) as well as nonstandard, storage-efficient atomic types (e.g., the 2-bit unsigned type allows efficient storage of genomic data as a factor with levels A, T, G, and C). It also provides the class ffdf, which is analogous to data.frame in R, and import/export filters for csv files. A binary flat file can be shared by multiple objects in the same or multiple R processes for parallel access. Utility functions allow interactive processing of selections of big data.

### Numerical Calculation

The data management systems in packages [[bigmemory]{}]{} or [[ff]{}]{} do not by themselves make existing R functions applicable. Even a simple statistical analysis such as a linear model or survival analysis needs to be implemented for the new data structures. Chunks of big data are processed in RAM one at a time, and often the process needs to be iterated over the whole data. A special case is linear model fitting, where one pass over the data is sufficient and no resetting to the beginning is needed. Consider a regression model $E[Y] = X\beta$ with $n \times 1$ response $Y$, $n\times p$ model matrix $X$, and $p\times 1$ coefficient vector $\beta$. The base R implementation takes $O(np + p^2)$ memory, which can be reduced dramatically by processing in chunks.
The first option is to compute $X'X$ and $X'Y$ incrementally and obtain the least squares estimate $\hat\beta = (X'X)^{-1}X'Y$. This method is adopted in package [[speedglm]{}]{} [@Rpkg:speedglm]. A slower but more accurate option is to compute an incremental QR decomposition [@Mill:algo:1992] $X = QR$ to get $R$ and $Q'Y$, and then solve for $\beta$ from $R\beta = Q'Y$. This option is implemented in package [[biglm]{}]{} [@Rpkg:biglm]. Function biglm uses only $O(p^2)$ memory for $p$ variables, and the fitted object can be updated with more data using update. The package also provides an incremental computation of the sandwich variance estimator by accumulating a $(p + 1)^2\times (p + 1)^2$ matrix of products of $X$ and $Y$ without a second pass over the data. In general, a numerical calculation needs an iterative algorithm and, hence, multiple passes over the data are necessary. For example, a GLM fit is often obtained through the iteratively reweighted least squares (IRLS) algorithm. Function bigglm in package [[biglm]{}]{} implements the generic IRLS algorithm, which can be applied on top of any specific data management system such as a DBMS, [[bigmemory]{}]{}, or [[ff]{}]{}, provided that a supplied function returns the next chunk of data (or zero-row data if there is no more) and resets to the beginning of the data for the next iteration. Specific implementations of bigglm for objects of class big.matrix and ffdf are provided in packages [[biganalytics]{}]{} [@Rpkg:biganalytics] and [[ffbase]{}]{} [@Rpkg:ffbase], respectively. For any statistical analysis on big data making use of the data management system, one would need to implement the necessary numerical calculations, as package [[biglm]{}]{} does for GLMs.
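The incremental normal-equations option can be sketched as follows (a Python illustration of the algorithm itself, not of any R package API; for simplicity $X = [1, x]$, so $p = 2$ and the $2\times 2$ system is solved in closed form):

```python
# Sketch of the one-pass, chunked normal-equations method: accumulate X'X and
# X'Y over chunks, then solve for the coefficients at the end.

def chunked_lstsq(xs, ys, chunk_size=3):
    s11 = s1x = sxx = s1y = sxy = 0.0   # accumulators for X'X and X'Y
    for i in range(0, len(xs), chunk_size):
        cx, cy = xs[i:i + chunk_size], ys[i:i + chunk_size]
        s11 += len(cx)
        s1x += sum(cx)
        sxx += sum(v * v for v in cx)
        s1y += sum(cy)
        sxy += sum(v * w for v, w in zip(cx, cy))
    det = s11 * sxx - s1x * s1x          # invert the 2x2 matrix X'X
    b0 = (sxx * s1y - s1x * sxy) / det
    b1 = (s11 * sxy - s1x * s1y) / det
    return b0, b1

xs = [0, 1, 2, 3, 4, 5]
ys = [1 + 2 * v for v in xs]             # exact line y = 1 + 2x
print(chunked_lstsq(xs, ys))             # (1.0, 2.0)
```

Only the $O(p^2)$ accumulators persist between chunks; the raw data are discarded as soon as each chunk has been absorbed.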
The family of [[bigmemory]{}]{} packages provides a collection of functions for big.matrix objects: [[biganalytics]{}]{} for basic analytic and statistical functions, [[bigtabulate]{}]{} for tabulation operations [@Rpkg:bigtabulate], and [[bigalgebra]{}]{} for matrix operations with the BLAS and LAPACK libraries [@Rpkg:bigalgebra]. Some additional functions for big.matrix objects are available from other contributed packages, such as [[bigpca]{}]{} for principal component analysis and singular value decomposition [@Rpkg:bigpca], and [[bigrf]{}]{} for random forests [@Rpkg:bigrf]. For ff objects, package [[ffbase]{}]{} provides basic statistical functions [@Rpkg:ffbase]. Additional functions for ff objects are provided in other packages, with examples including [[biglars]{}]{} for least angle regression and the LASSO [@Rpkg:biglars] and [[PopGenome]{}]{} for population genetic and genomic analysis [@Pfei:popg:2014]. If some statistical analysis, such as generalized estimating equations or the Cox proportional hazards model, has not been implemented for big data, then one will need to modify the existing algorithm to implement it. As pointed out by @Kane:Emer:West:scal:2013 [p.5], this would open a Pandora’s box of recoding, which is not a long-term solution for scalable statistical analyses; it calls for a redesign of the next-generation statistical programming environment, which could provide seamless scalability through file-backed memory-mapping for big data, help avoid the need for specialized tools for big data management, and allow statisticians and developers to focus on new methods and algorithms.

Breaking the Computing Power Barrier {#sec:R:comp}
------------------------------------

### Speeding Up

As a high-level interpreted language, in which most instructions are executed directly, R is infamously slow with loops. Some loops can be avoided by taking advantage of the vectorized functions in R or by clever vectorizing with some effort.
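The gain from vectorization is easy to demonstrate in any interpreted language; the following Python sketch contrasts an explicit interpreted loop with a single built-in (compiled) call, the same principle that motivates vectorized R code:

```python
# Vectorization sketch: push the loop from the interpreted level down into a
# single built-in (compiled) call.
import timeit

x = list(range(100_000))

def loop_sum(values):
    total = 0
    for v in values:          # interpreted loop: one dispatch per element
        total += v
    return total

assert loop_sum(x) == sum(x)  # identical answers
t_loop = timeit.timeit(lambda: loop_sum(x), number=20)
t_builtin = timeit.timeit(lambda: sum(x), number=20)
print(t_builtin < t_loop)     # the built-in is typically much faster
```

The exact speedup depends on the machine and workload, but the direction is robust: the work per element moves from the interpreter into compiled code.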
When vectorization is not straightforward or loops are unavoidable, as in the case of MCMC, acceleration is much desired, especially for big data. The least expensive tool in a programmer’s effort to speed up code is to compile it to byte code with the [[compiler]{}]{} package, which was developed by Luke Tierney and is now part of base R. The byte code compiler translates the high-level R code into a very simple language that can be interpreted by a very fast byte code interpreter, or virtual machine. Starting with R 2.14.0 in 2011, the base and recommended packages have been pre-compiled into byte code by default. Users’ functions, expressions, scripts, and packages can be compiled for an immediate boost in speed by a factor of 2 to 5. Computing bottlenecks can be implemented in a compiled language such as C/C++ or Fortran and interfaced to R through R’s foreign language interfaces [@Rext ch.5]. Typical bottlenecks are loops, recursions, and complex data structures. Recent developments have made the interfacing with C++ much easier than it used to be [@Edde:seam:2013]. Package [[inline]{}]{} [@Rpkg:inline] provides functions that wrap C/C++ (or Fortran) code as strings in R and take care of compiling, linking, and loading by placing the resulting dynamically-loadable object code in the per-session temporary directory used by R. For more general usage, package [[Rcpp]{}]{} [@Edde:etal:rcpp:2011] provides C++ classes for many basic R data types, which allow straightforward passing of data in both directions. Package [[RcppEigen]{}]{} [@Rpkg:rcppeigen] provides access to the high-performance Eigen linear algebra library for a wide variety of matrix methods, various decompositions, and support of sparse matrices. Package [[RcppArmadillo]{}]{} [@Edde:Sand:rcpp:2014] connects R with Armadillo, a powerful templated C++ linear algebra library which provides a good balance between speed and ease of use.
Package [[RInside]{}]{} [@Rpkg:rinside] gives easy access to R objects from C++ by wrapping R’s existing embedding application programming interface (API) in C++ classes. The [[Rcpp]{}]{} project has revolutionized the integration of R with C++; it is now used by hundreds of R packages. Diagnostic tools can help identify the bottlenecks in R code. Package [[microbenchmark]{}]{} [@Rpkg:microbenchmark] provides very precise timings for small pieces of source code, making it possible to compare operations that take only a tiny amount of time. For a collection of code, the run-time of each individual operation can be measured with realistic inputs; the process is known as profiling. Function Rprof in R does the profiling, but its output is not intuitive for many users to understand. Packages [[proftools]{}]{} [@Rpkg:proftools] and [[aprof]{}]{} [@Rpkg:aprof] provide tools to analyze profiling outputs. Packages [[profr]{}]{} [@Rpkg:profr], [[lineprof]{}]{} [@Rpkg:lineprof], and [[GUIProfiler]{}]{} [@Rpkg:GUIProfiler] provide visualization of profiling results.

### Scaling Up

The R package system has long embraced integration of parallel computing technologies of various kinds to address the big data challenges. For embarrassingly parallelizable jobs such as bootstrap or simulation, where there is no dependency or communication between parallel tasks, many options are available with computer clusters or multicores. @schm:etal:stat:2009 reviewed the then state-of-the-art parallel computing with R, highlighting two packages for cluster use: [[Rmpi]{}]{} [@Yu:rmpi:2002], which provides an interface to the Message Passing Interface (MPI) for parallel computing, and [[snow]{}]{} [@Ross:Tier:Li:simp:2007], which provides an abstraction layer with the communication details hidden from the end users. Since then, some packages have been developed and some discontinued.
Packages [[snowFT]{}]{} [@Rpkg:snowFT] and [[snowfall]{}]{} [@Rpkg:snowfall] extend [[snow]{}]{} with fault tolerance and wrappers for easier development of parallel R programs. Package [[multicore]{}]{} [@Rpkg:multicore] provides parallel processing of R code on machines with multiple cores or CPUs. Its work and some of [[snow]{}]{} have been incorporated into the base package [[parallel]{}]{}, which was first included in R 2.14.0 in 2011. Package [[foreach]{}]{} [@Rpkg:foreach] allows general iteration over elements in a collection without any explicit loop counter. Using a foreach loop without side effects facilitates executing the loop in parallel with different parallel mechanisms, including those provided by [[parallel]{}]{}, [[Rmpi]{}]{}, and [[snow]{}]{}. For massive data that exceed the computer memory, a combination of [[foreach]{}]{} and [[bigmemory]{}]{}, with a shared-memory data structure referenced by multiple processes, provides a framework with ease of development and efficiency of execution (both in speed and memory), as illustrated by @Kane:Emer:West:scal:2013. Package [[Rdsm]{}]{} provides facilities for distributed shared memory parallelism at the R level and, combined with [[bigmemory]{}]{}, enables parallel processing on massive, out-of-core matrices. The “Programming with Big Data in R” project (pbdR) enables high-level distributed data parallelism in R with easy utilization of large clusters with thousands of cores [@pbdR2012]. Big data are interpreted quite literally to mean that a dataset requires parallel processing either because it does not fit in the memory of a single machine or because its processing time needs to be made tolerable. The project focuses on distributed memory systems where data are distributed across processors and communications between processors are based on MPI. It consists of a collection of R packages in a hierarchy.
Package [[pbdMPI]{}]{} provides classes to directly interface with MPI to support the Single Program Multiple Data (SPMD) parallelism. Package [[pbdSLAP]{}]{} serves as a mechanism to utilize a subset of the scalable dense linear algebra functions in ScaLAPACK [@Blac:etal:scal:1997], redesigned in the SPMD style. Package [[pbdBASE]{}]{} contains a set of wrappers of low-level ScaLAPACK functions, upon which package [[pbdDMAT]{}]{} builds to provide distributed dense matrix computing while preserving the friendly and familiar R syntax for these computations. Demonstrations of how to use these and other packages from pbdR are available in package [[pbdDEMO]{}]{}. A recent, widely adopted open source framework for massive data storage and distributed computing is Hadoop [@Hadoop]. Its heart is an implementation of the MapReduce programming model first developed at Google [@Dean:Ghem:mapr:2008], which divides the data across distributed systems and computes for each group (the map step), and then recombines the results (the reduce step). It provides fault-tolerant and scalable storage of massive datasets across machines in a cluster [@Whit:hado:2011]. The model perfectly suits embarrassingly parallelizable jobs, and the distributed file system helps break the memory boundary. @McCa:West:para:2011 [ch.5–8] demonstrated three ways to combine R and Hadoop. The first is to submit R scripts directly to a Hadoop cluster, which gives the user the most control and the most power, but comes at the cost of a learning curve. The second is a pure R solution via package [[Rhipe]{}]{}, which hides the communications to Hadoop from users. The package (not on CRAN) is from the RHIPE project, which stands for R and Hadoop Integrated Programming Environment [@Guha:etal:larg:2012]. With [[Rhipe]{}]{}, data analysts only need to write R code for the map step and the reduce step [@Guha:etal:larg:2012], and get the power of Hadoop without leaving R.
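The map and reduce steps can be mimicked in a few lines (a toy in-process Python sketch of the programming model only; Hadoop would distribute both steps across machines and handle fault tolerance):

```python
# Toy sketch of the MapReduce model: map emits (key, value) pairs per record,
# a shuffle groups values by key, and reduce combines each group.
from collections import defaultdict

records = [("Mon", 12), ("Tue", 5), ("Mon", 30), ("Tue", 7), ("Mon", 3)]

def map_step(record):
    day, delay = record
    yield day, delay          # emit (key, value)

def reduce_step(key, values):
    return key, max(values)   # e.g., worst delay per day

groups = defaultdict(list)    # the "shuffle": group values by key
for rec in records:
    for k, v in map_step(rec):
        groups[k].append(v)

result = dict(reduce_step(k, vs) for k, vs in groups.items())
print(result)                 # {'Mon': 30, 'Tue': 7}
```

Because each key's group is reduced independently, the reduce step parallelizes as naturally as the map step.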
The third approach specifically targets the Elastic MapReduce (EMR) service at Amazon via CRAN package [[segue]{}]{} [@Rpkg:segue], which makes EMR as easy to use as a parallel backend for lapply-style operations. An alternative open source project that connects R and Hadoop is the RHadoop project, which is actively being developed by Revolution Analytics [@RHadoop]. This project is a collection of R packages that allow users to manage and analyze data with Hadoop: [[rhbase]{}]{} provides functions for database management of the HBase distributed database, [[rhdfs]{}]{} provides functions for the Hadoop distributed file system (HDFS), [[rmr]{}]{} provides R access to MapReduce functionality, [[plyrmr]{}]{} provides higher-level data processing for structured data, and the most recent addition, [[ravro]{}]{}, provides reading and writing functions for files in Avro format, an efficient data serialization system developed at Apache [@Avro]. Spark is a more recent cousin project of Hadoop that supports tools for big data related tasks [@Spark]. The functions of Hadoop and Spark are neither exactly the same nor mutuallyexclusive, and the two often work together. Hadoop has its own distributed storage system, which is fundamental for any big data computing framework, allowing vast datasets to be stored across the hard drives of a scalable computer cluster rather than on a huge costly hold-it-all device. Hadoop persists data back to the disk after a map or reduce action. In contrast, Spark does not have its own distributed file system and processes data in memory. The biggest difference is disk-based computing versus memory-based computing, which is why Spark can work 100 times faster than Hadoop for some applications when the data fit in memory. Applications such as machine learning or stream processing, where data are repeatedly queried, make Spark an ideal framework. For big data that do not fit in memory, Spark’s operators spill data to disk, allowing it to run well on data of any size.
For this purpose, Spark can be installed on top of Hadoop to take advantage of Hadoop’s HDFS. An R frontend to Spark is provided in package [[SparkR]{}]{} [@Rpkg:SparkR], which has recently become part of Apache Spark. By using Spark’s distributed computation engine, the package allows users to run large scale data analyses such as selection, filtering, and aggregation from R. @Kara:etal:2015 provide a summary of the state of the art of using Spark. As multicores have become the standard setup for computers today, it is desirable to automatically make use of the cores through implicit parallelism, without any explicit requests from the user. The experimental packages [[pnmath]{}]{} and [[pnmath0]{}]{} by Luke Tierney replace a number of internal vector operations in R with alternatives that can take advantage of multicores [@Tier:code:2009]. For a serial algorithm such as MCMC, it is desirable to parallelize the computational bottleneck if possible, but this generally involves learning new computing tools, and the debugging can be challenging. For instance, @Yan:etal:para:2007 used the parallel linear algebra package (PLAPACK) [@Geji:usin:1997] for the matrix operations (especially the Cholesky decomposition) in an MCMC algorithm for Bayesian spatiotemporal geostatistical models, but the scalability was only moderate. When random numbers are involved, as in the case of simulation, extra care is needed to make sure the parallelized jobs run independent (and preferably reproducible) random-number streams. Package [[rsprng]{}]{} [@Rpkg:rsprng] provides an interface to the Scalable Parallel Random Number Generators (SPRNG) [@Masc:Srin:algo:2000]. Package [[rlecuyer]{}]{} [@Rpkg:rlecuyer] provides an interface to the random number generator with multiple independent streams developed by @L'Ec:etal:2002, the ideas of which are also implemented in the base package [[parallel]{}]{}: independent streams are made by splitting a single stream at points a sufficiently large number of steps apart.
Package [[doRNG]{}]{} [@Rpkg:doRNG] provides functions to perform reproducible parallel foreach loops, independent of the parallel environment and associated backend. From a hardware perspective, many computers have mini clusters of graphics processing units (GPUs) that can help with computing bottlenecks. GPUs are dedicated numerical processors that were originally designed for rendering three-dimensional computer graphics. A GPU has hundreds of processor cores on a single chip and can be programmed to apply the same numerical operations to a large data array. @Such:etal:unde:2010 investigated the use of GPUs in massively parallel massive mixture modeling and showed better performance of GPUs than multicore CPUs, especially for larger samples. To reap the advantage, however, one needs to learn the related tools, such as the Compute Unified Device Architecture (CUDA), the Open Computing Language (OpenCL), and so on, which may be prohibitive. An R package, [[gputools]{}]{} [@Rpkg:gputools], provides interfaces to the NVIDIA CUDA toolkit and others. If one is willing to step out of the comfort zone of R and take full control of and responsibility for parallel computing, one may program with open source MPI or Open Multi-Processing (OpenMP). MPI is a language-independent communication system designed for programming on parallel computers, targeting high performance, scalability, and portability [@Pach:para:1997]. Most MPI implementations are available as libraries callable from C, C++, and Fortran, and from any language that can interface with such libraries, including R, Python, or Java. The MPI interface from R can be accessed with package [[Rmpi]{}]{} [@Yu:rmpi:2002], as mentioned earlier. Freely available implementations include OpenMPI (not to be confused with OpenMP) and MPICH, while others, such as Intel MPI, come with a license. OpenMP is an API that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran on most processor architectures and operating systems [@chapman2008using].
OpenMP is an add-on to compilers (e.g., GCC, the Intel compiler) that takes advantage of shared-memory systems such as multicore computers, where processors share the main memory. MPI targets both distributed and shared memory systems, while OpenMP targets only shared memory systems; MPI provides both process-based and thread-based approaches, while OpenMP provides only thread-based parallelism. OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for writing multi-threaded programs in C, C++, and Fortran [@Dagu:Enon:open:1998]. Debugging parallel programs can be very challenging.

Commercial Statistical Software {#sec:comm}
===============================

Revolution R Enterprise (RRE) is the core product of Revolution Analytics (formerly Revolution Computing), a company that provides R-based tools, support, and training. RRE focuses on big data, large scale multiprocessor (or high performance) computing, and multicore functionality. Massive datasets are handled via EMAs and parallel EMAs (PEMAs) when multiprocessors or multicores are available. The commercial package [[RevoScaleR]{}]{} [@Rpkg:RevoScaleR] breaks the memory boundary with a special data format that allows efficient storage and retrieval of data. Functions in the package (e.g., the GLM fitting function) know how to work on a massive dataset one chunk at a time. The computing power boundary is also addressed: functions in the package can exploit multicores or computer clusters. Packages from the aforementioned open source project RHadoop, developed by the company, provide support for Hadoop. Other components in RRE allow high speed connections to various types of data sources, and threading and inter-process communication for parallel and distributed computing. The same R code works on small and big data, and on workstations, servers, clusters, Hadoop, or in the cloud. The single workstation version of RRE is currently free for academic use and was used in the case study in Section \[sec:case\].
SAS, one of the most widely used commercial software systems for statistical analysis, provides big data support through SAS High Performance Analytics. Massive datasets are approached by grid computing, in-database processing, in-memory analytics, and connections to Hadoop. The SAS High Performance Analytics products cover statistics, econometrics, optimization, forecasting, data mining, and text mining, which, respectively, correspond to the SAS products STAT, ETS, OR, high-performance forecasting, enterprise miner, and text miner [@Cohe:Rodr:high:2013]. IBM SPSS, the Statistical Product and Services Solution, provides big data analytics through SPSS Modeler, SPSS Analytic Server, SPSS Collaboration and Deployment Services, and SPSS Analytic Catalyst [@SPSS]. SPSS Analytic Server is the foundation; it focuses on high performance analytics for data stored in Hadoop-based distributed systems. SPSS Modeler is the high-performance data mining workbench, utilizing SPSS Analytic Server to leverage big data in Hadoop environments. Analysts can define analyses in a familiar and accessible workbench to conduct analysis, modeling, and scoring over high volumes of varied data. SPSS Collaboration and Deployment Services helps manage analytical assets, automate processes, and efficiently share results widely and securely. SPSS Analytic Catalyst is the automation of analysis that makes analytics and data more accessible to users. MATLAB provides a number of tools to tackle the challenges of big data analytics [@Matlab]. Memory-mapped variables map a file or a portion of a file to a variable in RAM; disk variables give direct access to variables in files on disk; datastore allows access to data that do not fit into RAM. Their combination addresses the memory boundary. The computing power boundary is broken by intrinsic multicore math, GPU computing, parallel computing, cloud computing, and Hadoop support.
A Case Study {#sec:case}
============

The airline on-time performance data from the 2009 ASA Data Expo (<http://stat-computing.org/dataexpo/2009/the-data.html>) are used as a case study to demonstrate fitting a logistic model with a massive dataset that exceeds the RAM of a single computer. The data are publicly available and have been used for big data demonstrations by @Kane:Emer:West:scal:2013 and others. They consist of flight arrival and departure details for all commercial flights within the USA from October 1987 to April 2008. About 120 million flights were recorded with 29 variables. A compressed version of the pre-processed data set from the bigmemory project (<http://data.jstatsoft.org/v55/i14/Airline.tar.bz2>) is approximately 1.7GB, and it takes 12GB when uncompressed. The response of the logistic regression is late arrival, which was set to 1 if a flight was late by more than 15 minutes and 0 otherwise. Two binary covariates were created from the departure time: night (1 if departure occurred between 8pm and 5am, and 0 otherwise) and weekend (1 if departure occurred on a weekend, and 0 otherwise). Two continuous covariates were included: departure hour (DepHour, range 0 to 24) and distance from origin to destination (in 1000 miles). In the raw data, the departure time was an integer in the HHmm format; it was converted to minutes first to prepare for DepHour. Three methods are considered in the case study: 1) R with package [[bigmemory]{}]{}; 2) R with package [[ff]{}]{}; and 3) the academic, single workstation version of RRE. The default settings of [[ff]{}]{} were used. Before fitting the logistic regression, the 12GB raw data need to be read in from the csv format, and the new variables need to be generated. This leads to a total of $120,748,239$ observations with no missing data. The scripts for the three methods are in the supplementary materials for interested readers.
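The covariate construction described above amounts to simple arithmetic on the HHmm departure time; a sketch (in Python, with a hypothetical helper name):

```python
# Sketch of the covariate construction: the HHmm integer gives DepHour in
# [0, 24); night is 1 between 8pm and 5am; weekend is 1 on Saturdays and
# Sundays. The function name `derive` is made up for illustration.

def derive(dep_time_hhmm, day_of_week):
    hh, mm = divmod(dep_time_hhmm, 100)      # split HHmm into hours, minutes
    dep_hour = hh + mm / 60.0
    night = 1 if (dep_hour >= 20 or dep_hour < 5) else 0
    weekend = 1 if day_of_week in ("Sat", "Sun") else 0
    return dep_hour, night, weekend

print(derive(2130, "Sat"))   # (21.5, 1, 1): 9:30pm on a Saturday
print(derive(830, "Wed"))    # (8.5, 0, 0): 8:30am on a Wednesday
```

In the actual case study, this transformation is applied chunk by chunk while reading the csv data, so the derived columns never require a separate in-memory pass.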
Reading Transforming Fitting ------------------- --------- -------------- --------- [[bigmemory]{}]{} 968.6 105.5 1501.7 [[ff]{}]{} 1111.3 528.4 1988.0 851.7 107.5 189.4 : Timing results (in seconds) for reading in the whole 12GB data, transforming to create new variables, and fitting the logistic regression with three methods: [[bigmemory]{}]{}, [[ff]{}]{}, and .[]{data-label="tab:timing"} The scripts were executed in batch mode on a 8-core machine running CenOS (a free Linux distribution functionally compatible with Red Hat Enterprise Linux which is officially supported by ), with Intel Core i7 2.93GHz CPU, and 16GB memory. Table \[tab:timing\] summarizes the timing results of reading in the whole 12GB data, transforming to create new variables, and fitting the logistic regression with the three methods. The chunk sizes were set to be 500,000 observations for all three methods. For , this was set when reading in the data to the format; for the other two methods, this was set at the fitting stage using function . Under the current settings, has a clear advantage in fitting with only 8% of the time used by the other two approaches. This is a result of the joint force of its using all 8 cores implicitly and efficient storage and retrieval of the data; the version of the data is about 1/10 of the size of the external files saved by [[bigmemory]{}]{} or [[ff]{}]{}. Using [[bigmemory]{}]{} and using [[ff]{}]{} in had very similar performance in fitting the logistic regression, but the former took less time in reading, and significantly less time (only about 1/5) in transforming variables of the latter. The [[bigmemory]{}]{} method was quite close to the method in the reading and the transforming tasks. The [[ff]{}]{} method took longer in reading and transforming than the [[bigmemory]{}]{} method, possibly because it used much less memory. Estimate Std. 
                Estimate   Std. Error ($\times 10^4$)
  ------------- ---------- ----------------------------
  (Intercept)   $-$2.985   9.470
  DepHour       0.104      0.601
  Distance      0.235      4.032
  Night         $-$0.448   8.173
  Weekend       $-$0.177   5.412

  : Logistic regression results for late arrival.[]{data-label="tab:fit"}

The results of the logistic regression are identical from all methods, and are summarized in Table \[tab:fit\]. Flights with a later departure hour or a longer distance are more likely to be delayed. Night flights and weekend flights are less likely to be delayed. Given the huge sample size, all coefficients were highly significant. P-values can still be useful, however: a binary covariate with a very low event rate may still have an estimated coefficient with a not-so-low p-value [@Schifano2015], an effect only estimable with big data.

                      1      2      3     4     5     6     7     8
  ------------------- ------ ------ ----- ----- ----- ----- ----- -----
  [[bigmemory]{}]{}   22.1   11.2   7.8   6.9   6.2   6.3   6.4   6.8
  [[ff]{}]{}          21.4   11.0   7.1   6.7   5.8   5.9   6.1   6.8

  : Time results (in seconds) for parallel computing quantiles of departure delay for each day of the week with 1 to 8 cores using [[foreach]{}]{}.[]{data-label="tab:para"}

As an illustration of [[foreach]{}]{} for embarrassingly parallel computing, the example in @Kane:Emer:West:scal:2013 is expanded to include both [[bigmemory]{}]{} and [[ff]{}]{}. The task is to find three quantiles (0.5, 0.9, and 0.99) of departure delays for each day of the week; that is, 7 independent jobs can run on 7 cores separately. To make the task bigger, each job was set to run twice. The resulting 14 jobs were parallelized with on the same Linux machine using 1 to 8 cores for the sake of illustration. The script is included in the supplementary materials. The timing results are summarized in Table \[tab:para\]. There is little difference between the two implementations.
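Each of the 14 jobs above is an ordinary quantile computation on one day-of-week subset, which is why they parallelize with no coordination. As a sketch of the per-job work, the following Python function mirrors the linear-interpolation quantile that R's `quantile()` computes by default (type 7); it is an illustration, not the packages' code:

```python
def quantiles(xs, probs=(0.5, 0.9, 0.99)):
    """Empirical quantiles with linear interpolation between order
    statistics (the scheme R's quantile() uses by default, type 7)."""
    s = sorted(xs)
    out = []
    for p in probs:
        h = (len(s) - 1) * p       # fractional index into the sorted data
        lo = int(h)
        hi = min(lo + 1, len(s) - 1)
        out.append(s[lo] + (h - lo) * (s[hi] - s[lo]))
    return out
```

Since each job only needs its own subset, a scheduler can hand the 14 (subset, probs) pairs to workers independently; the communication cost discussed next comes from distributing the subsets and collecting the results, not from the quantile computation itself.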
When there is no communication overhead, with 14 jobs one would expect the run time to reduce to 1/2, 5/14, 4/14, 3/14, 3/14, 2/14, and 2/14, respectively, with 2, 3, 4, 5, 6, 7 and 8 cores. The impact of communication cost is obvious in Table \[tab:para\]: the observed time reduction comes close to this ideal expectation only when the number of cores is small. Discussion {#sec:disc} ========== This article presents a recent snapshot of statistical analysis with big data that exceed the memory and computing capacity of a single computer. Albeit under-appreciated by the general public or even the mainstream academic community, computational statisticians have made respectable progress in extending standard statistical analysis to big data, with the most notable achievements in the open source community. Packages [[bigmemory]{}]{} and [[ff]{}]{} make it possible in principle to implement any statistical analysis with their data structures. Nonetheless, for anything that has not already been implemented (e.g., survival analysis, generalized estimating equations, mixed effects models, etc.), one would need to implement an EMA version of the computation task, which may not be straightforward and may involve some steep learning curves. allows easy extension of algorithms that do not require multiple passes over the data, but such analyses are mostly descriptive. An example is visualization, an important tool in exploratory analysis. With big data, the bottleneck is the number of pixels on the screen. The bin-summarize-smooth framework for visualization of large data of @Wick:bin:2014 with package [[bigvis]{}]{} [@Rpkg:bigvis] may be adapted to work with . Big data present challenges much further beyond the territory of classic statistics, requiring a joint workforce with domain knowledge, computing skills, and statistical thinking [@Yu:let:2014].
Statisticians have much to contribute to both the intellectual vitality and the practical utility of big data, but will have to expand their comfort zone to engage with high-impact, real-world problems which are often less structured or ambiguous [@Jord:Lin:stat:2014]. Examples are to provide structure for poorly defined problems, or to develop methods/models for new types of data such as images or networks. As suggested by @Yu:let:2014, to play a critical role in the arena of big data or to own data science, statisticians need to work on real problems, and relevant methodology and theory will follow naturally. Acknowledgement {#acknowledgement .unnumbered} =============== The authors thank Stephen Archut, Fang Chen, and Joseph Rickert for the big data analytics information on , , and . An earlier version of the manuscript was presented at the “Statistical and Computational Theory and Methodology for Big Data Analysis” workshop in February, 2014, at the Banff International Research Station in Banff, AB, Canada. The discussions and comments from the workshop participants are gratefully acknowledged. Supplementary Materials {#supplementary-materials .unnumbered} ======================= Four scripts (and their outputs), along with a descriptive README file, are provided for the case study. The first three are the logistic regression with, respectively, combination of [[bigmemory]{}]{} with (), combination of [[ff]{}]{} with (), and (); their output files have extensions. The first two run with , while the third one needs . The fourth script is for the parallel computing with [[foreach]{}]{} combined with [[bigmemory]{}]{} and [[ff]{}]{}, respectively.
--- abstract: | We use the 2-loop term of the Kontsevich integral to show that there are (many) knots with trivial Alexander polynomial which don’t have a Seifert surface whose genus equals the rank of the Seifert form. This is one of the first applications of the Kontsevich integral to intrinsically $3$-dimensional questions in topology. Our examples contradict a lemma of Mike Freedman, and we explain what went wrong in his argument and why the mistake is irrelevant for topological knot concordance. address: - | Department of Mathematics\ University of Warwick\ Coventry, CV4 7AL, UK. - | Department of Mathematics\ University of California in San Diego\ 9500 Gilman Drive\ La Jolla, CA, 92093-0112, USA. author: - Stavros Garoufalidis - Peter Teichner date: ' October 7, 2003 First edition: May 31, 2002.' title: On Knots with trivial Alexander polynomial --- [^1] A question about classical knots ================================ Our starting point is a wrong lemma of Mike Freedman in [@F1 Lemma 2], dating back before his proof of the $4$-dimensional topological Poincaré conjecture. To formulate the question, we need the following definition. A knot in 3-space has [*minimal Seifert rank*]{} if it has a Seifert surface whose genus equals the rank of the Seifert form. Since the Seifert form minus its transpose gives the (nonsingular) intersection form on the Seifert surface, it follows that the genus is indeed the smallest possible rank of a Seifert form. The formula which computes the Alexander polynomial in terms of the Seifert form shows that knots with minimal Seifert rank have trivial Alexander polynomial. Freedman’s wrong lemma claims that the converse is also true. However, in the argument he overlooks the problem that S-equivalence does [*not*]{} preserve the condition of minimal Seifert rank. It turns out that not just the argument, but also the statement of the lemma is wrong.
This has been overlooked for more than 20 years, maybe because none of the classical knot invariants can distinguish the subtle difference between trivial Alexander polynomial and minimal Seifert rank. In the last decade, knot theory was overwhelmed by a plethora of new “quantum” invariants, most notably the HOMFLY polynomial (specializing to the Alexander and the Jones polynomials), and the Kontsevich integral. Despite their rich structure, it is not clear how strong these invariants are for solving open problems in low dimensional topology. It is the purpose of this paper to provide one such application. There are knots with trivial Alexander polynomial which don’t have minimal Seifert rank. More precisely, the 2-loop part of the Kontsevich integral induces an epimorphism $\overline{Q}$ from the monoid of knots with trivial Alexander polynomial, onto an [*infinitely generated*]{} abelian group, such that $\overline{Q}$ vanishes on knots with minimal Seifert rank. The easiest counterexample is shown in Figure \[fig.example\], drawn using surgery on a clasper. Surgery on a clasper is a refined form of Dehn surgery (along an embedded trivalent graph, rather than an embedded link) which we explain in Section \[sec.claspers\]. Clasper surgery is an elegant way of drawing knots that amplifies the important features of our example while suppressing irrelevant information (such as the large number of crossings of the resulting knot). For example, in Figure \[fig.example\], if one pulls the central edge of the clasper out of the visible Seifert surface, one obtains an S-equivalence to a nontrivial knot with minimal Seifert rank. $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/example.eps,width=1in}} \hspace{-1.9mm}\end{array}}$$ All of the above notions make sense for knots in homology spheres. Our proof of Theorem \[thm.main\] works in that setting, too.
Since [@F1 Lemma 2] was the starting point of what eventually became Freedman’s theorem that all knots with trivial Alexander polynomial are topologically slice, we should make sure that the above counterexamples to his lemma don’t cause any problems in this important theorem. Fortunately, an argument independent of the wrong lemma can be found in [@F2 Thm. 7], see also [@FQ 11.7B]. However, it uses unnecessarily the surgery exact sequence and some facts from $L$-theory. In an appendix, we shall give a more direct proof that Alexander polynomial 1 knots are topologically slice. We use no machinery, except for a single application of Freedman’s main disk embedding theorem [@F2] in $D^4$. To satisfy the assumptions of this theorem, we employ a triangular base change for the intersection form of the complement of a Seifert surface in $D^4$, which works for all Alexander polynomial 1 knots. By Theorem \[thm.main\], this base change does [*not*]{} work on the level of Seifert forms, as Freedman possibly tried to anticipate. A relevant quantum invariant ============================ The typical list of knot invariants that might find its way into a text book or survey talk on [*classical*]{} knot theory, would contain the Alexander polynomial, (twisted) signatures, (twisted) Arf invariants, and maybe knot determinants. It turns out that all of these invariants can be computed from the homology of the infinite cyclic covering of the knot complement. In particular, they all vanish if the Alexander polynomial is trivial. This condition also implies that certain “noncommutative” knot invariants vanish, namely all those calculated from the homology of [*solvable*]{} coverings of the knot complement, like the Casson-Gordon invariants [@CG] or the von Neumann signatures of [@COT]. In fact, the latter are concordance invariants and, as discussed above, all knots with trivial Alexander polynomial are topologically slice. 
Thus it looks fairly difficult to study knots with trivial Alexander polynomial using classical invariants. Nevertheless, there are very natural topological questions about such knots like the one explained in the previous section. We do not know a classical treatment of that question, so we turn to quantum invariants. One might want to use the Jones polynomial, which often distinguishes knots with trivial Alexander polynomial. However, it is not clear which knots it distinguishes, and which values it realizes, so the Jones polynomial is of no help to this problem. Thus, we are looking for a quantum invariant that relates well to classical topology, has good realization properties, and is one step beyond the Alexander polynomial. In a development starting with the Melvin-Morton-Rozansky conjecture and going all the way to the recent work of [@GR] and [@GK1], the Kontsevich integral has been reorganized in a rational form $\Zrat$ which is closer to the algebraic topology of knots. It is now a theorem (a restatement of the MMR Conjecture) that the “1-loop” part of the Kontsevich integral gives the same information as the Alexander polynomial [@BG; @KSA]. The quantum invariant in Theorem \[thm.main\] is the “2-loop” part $Q$ of the [*rational invariant*]{} $\Zrat$ of [@GK1]. We consider $Q$ as an invariant of Alexander polynomial 1 knots $K$ in integral homology spheres $M^3$, and summarize its properties: - $Q$ takes values in the abelian group $$\lth:= \frac{\BZ[t_1^{\pm 1},t_2^{\pm 1},t_3^{\pm 1}]}{( t_1t_2t_3-1,\quad\Sym_3\times\Sym_2) }$$ The second relations are given by the symmetric groups $\Sym_3$ which acts by permuting the $t_i$, and $\Sym_2$ which inverts the $t_i$ simultaneously. 
- Under connected sum and orientation reversal, $Q$ behaves as follows: $$\begin{aligned} Q(M\# M',K \# K') &= Q(M,K) + Q(M',K')\\ Q(M,-K) &= Q(M,K) =-Q(-M,K)\end{aligned}$$ - If one applies the augmentation map $$\e:\lth\to \BZ, \quad t_i\mapsto 1,$$ then $Q(M,K)$ is mapped to the Casson invariant $\lambda(M)$, normalized by $\l(S^3_{\text{Right Trefoil},+1})=1$. - $Q$ has a simple behavior under surgery on [*null claspers*]{}, see Section \[sec.Q\]. All these properties are proven in [@GR] and in [@GK1]. Given a homology sphere $M^3$, the image of $Q$ on knots in $M$ with trivial Alexander polynomial is the subspace $\e^{-1}(\lambda(M))$ of $\lth$. The realization in the above proposition is concrete, not abstract. In fact, to realize the subgroup $\e^{-1}(\lambda(M))$ one only needs (connected sums of) knots which are obtained as follows: Pick a standard Seifert surface $\Sigma$ of genus one for the unknot in $M$, and do surgery along a clasper $G$ with one loop and two leaves which are meridians to the bands of $\Sigma$, just like in Figure \[fig.example\]. The loop of $G$ may intersect $\Sigma$ and these intersections create the interesting examples. Note that all of these knots are ribbon, which unfortunately implies that the invariant $Q$ does [*not*]{} factor through knot concordance, even though it vanishes on knots of the form $K\# -K$. Together with the following finiteness result, the above realization result proves Theorem \[thm.main\], even for knots in a fixed homology sphere. The value of $Q$ on knots with minimal Seifert rank is the subgroup of $\lth$, (finitely) generated by the three elements $$(t_1-1),\quad (t_1-1)(t_2^{-1}-1), \quad (t_1-1)(t_2-1)(t_3^{-1}-1).$$ This holds for knots in 3-space, and one only has to add $\lambda(M)$ to all three elements to obtain the values of $Q$ for knots in a homology sphere $M$.
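Note that the three generators are consistent with the augmentation property of $Q$ stated above: since $\e$ sets every $t_i=1$, each generator contains a vanishing factor and is annihilated,
$$\e(t_1-1)=0, \qquad \e\big((t_1-1)(t_2^{-1}-1)\big)=0, \qquad \e\big((t_1-1)(t_2-1)(t_3^{-1}-1)\big)=0,$$
in agreement with $\e(Q(S^3,K))=\lambda(S^3)=0$ for knots in the 3-sphere.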
If a knot $K$ in $S^3$ has minimal Seifert rank, then $Q(S^3,K)$ can be computed in terms of three Vassiliev invariants of degree $3,5,5$. The $Q$ invariant can in fact be calculated on many classes of examples. One such computation was done in [@Ga]: The (untwisted) Whitehead double of a knot $K$ has minimal Seifert rank and $K\mapsto Q(S^3,\Wh(K))$ is a nontrivial Vassiliev invariant of degree $2$. Note that $K$ has minimal Seifert rank if and only if it bounds a certain grope of class 3. More precisely, the bottom surface of this grope is just the Seifert surface, and the second stages are embedded disjointly from the Seifert surface. However, they are allowed to intersect each other. So this condition is quite different from the notion of a “grope cobordism” introduced in [@CT]. In a forthcoming paper, we will study related questions for boundary links. This is made possible by the rational version of the Kontsevich integral for such links recently defined in [@GK1]. The analogues of knots with trivial Alexander polynomial are called [*good boundary links*]{}. In [@FQ 11.7C] this term was used for boundary links whose free cover has trivial homology. Unfortunately, the term was also used in [@F1] for a class of boundary links which should rather be called [*boundary links of minimal Seifert rank*]{}. This class of links is relevant because they form the atomic surgery problems for topological $4$-manifolds, see Remark \[rem.atomic\]. By Theorem \[thm.main\] the two definitions of good boundary links in the literature actually differ substantially (even for knots). One way to resolve the “Schlamassel” would be to drop this term altogether. S-equivalence in homology spheres ================================= We briefly recall some basic notions for knots in homology spheres. We decided to include the proofs because they are short and might not be well known for homology spheres, but we claim no originality. Let $K$ be a knot in a homology sphere $M^3$.
By looking at the inverse image of a regular value under a map $M\sminus K\to S^1$, whose homotopy class generates $$[M\sminus K,S^1] \cong H^1(M\sminus K;\BZ) \cong H_1(K;\BZ) \cong \BZ \quad \text{ (Alexander duality in $M$)}$$ one constructs a [*Seifert surface*]{} $\Sigma$ for $K$. It is a connected oriented surface embedded in $M$ with boundary $K$. Note that a priori the resulting surface is not connected, but one just ignores the closed components. By the usual discussion about twistings near $K$, one sees that a collar of $\Sigma$ always defines the linking number zero pushoff of $K$. To discuss uniqueness of Seifert surfaces, assume that $\Sigma_0$ and $\Sigma_1$ are both connected oriented surfaces in $M$ with boundary $K$. After a finite sequence of “additions of tubes”, i.e. ambient 0-surgeries, $\Sigma_0$ and $\Sigma_1$ become isotopic. Consider the following closed surface in the product $M \times I$ (where $I=[0,1]$): $$\Sigma_0\cup (K \times I) \cup \Sigma_1 \, \subset M \times I$$ As above, relative Alexander duality shows that this surface bounds a connected oriented $3$-manifold $W^3$, embedded in $M \times I$. By general position, we may assume that the projection $p:M \times I\to I$ restricts to a Morse function on $W$. Moreover, the usual dimension counts show that after an ambient isotopy of $W$ in $M \times I$ one can arrange for $p:W\to I$ to be an ordered Morse function, in the sense that the indices of the critical points appear in the same order as their values under $p$. This can be done relative to $K \times I\subset W$ since $p$ has no critical points there. Consider a regular value $a\in I$ for $p$ between the index 1 and index 2 critical points. Then $\Sigma:=p^{-1}(a) \subset M \times \{a\} = M$ is a Seifert surface for $K$. By Morse theory, $\Sigma$ is obtained from $\Sigma_0$ by - A finite sequence of small $2$-spheres $S_i$ in $M$ being born, disjoint from $\Sigma_0$. These correspond to the index 0 critical points of $p$.
- A finite sequence of tubes $T_k$, connecting the $S_i$ to (each other and) $\Sigma_0$. These correspond to the index 1 critical points of $p$. Since $W$ is connected, we know that the resulting surface $\Sigma$ must be connected. In case there are no index 0 critical points, it is easy to see that $\S$ is obtained from $\S_0$ by additions of tubes. We will now reduce the general case to this case. This reduction is straightforward if the first tubes $T_i$ that are born have exactly one end on $S_i$, where $i$ runs through all index 0 critical points. Then a sequence of applications of the [*lamp cord trick*]{} (in other words, a sequence of Morse cancellations) would show that up to isotopy one can ignore these pairs of critical points, which include all index 0 critical points. To deal with the general case, consider the level just after all $S_i$ were born and add “artificial” thin tubes (in the complement of the expected $T_k$) to obtain a connected surface. By the lamp cord trick, this surface is isotopic to $\Sigma_0$, and the $T_k$ are now tubes on $\Sigma_0$, producing a connected surface $\Sigma_0'$. Since by construction the tubes $T_k$ do not go through the artificial tubes, we can cut the artificial tubes to move from $\Sigma_0'$ back to $\Sigma$ (through index 2 critical points). We can treat $\Sigma_1$ exactly as above, by turning the Morse function upside down, replacing index 3 by index 0, and index 2 by index 1 critical points. The result is a surface $\Sigma_1'$, obtained from $\Sigma_1$ by adding tubes, and such that $\Sigma$ is obtained from $\Sigma_1'$ by cutting other tubes. Collecting the above information, we now have an ambient Morse function with only critical points of index 1 and 2, connecting $\Sigma_0$ and $\Sigma_1$ (rel $K$), and a middle surface $\S$ which is tube equivalent to $\S_0$ and $\S_1$. The result follows.
The above proof motivates the definition of S-equivalence, which is the algebraic analogue, on the level of Seifert forms, of the geometric addition of tubes. Given a Seifert surface $\Sigma$ for $K$ in $M$, one defines the [*Seifert form*]{} $$S_\Sigma: H_1\Sigma \times H_1\Sigma \to \BZ$$ by the formula $S_\Sigma(a,b):=\lk(a,b\down)$. These are the usual linking numbers for circles in $M$ and $b\down$ is the circle $b$ on $\Sigma$, pushed slightly off the Seifert surface (in a direction given by the orientations). The downarrow reminds us that in the case of $a$ and $b$ being the short and long curve on a tube, we are pushing $b$ [*into*]{} the tube, and hence the resulting linking number is one. It should be clear what it means to “add a tube” to the Seifert form $S_\Sigma$: The homology increases by two free generators $s$ and $l$ (for “short” and “long” curve on the tube), and the linking numbers behave as follows: $$\lk(s,s\down)=\lk(l,l\down)= \lk(l,s\down)=\lk(s,a\down)=0,\quad\lk(s,l\down)=1, \quad\forall a\in H_1\Sigma.$$ Note that there is no restriction on the linking numbers of $l$ with curves on $\Sigma$, reflecting the fact that the tube can wind around $\Sigma$ in an arbitrary way. Observing that isotopy of Seifert surfaces gives isomorphisms of their Seifert forms, we are led to the following algebraic notion. It abstracts the necessary equivalence relation on Seifert forms coming from the non-uniqueness of the Seifert surface. Two Seifert surfaces (for possibly distinct knots) are called [*S-equivalent*]{} if their Seifert forms become isomorphic after a finite sequence of (algebraic) additions of tubes. Geometric basis for Seifert surfaces ==================================== It is convenient to discuss Seifert forms in terms of their corresponding matrices. So for a given basis of $H_1\Sigma$, denote by $SM_\Sigma$ the matrix of linking numbers describing the Seifert form $S_\Sigma$.
For example, the addition of a tube has the following effect on a Seifert matrix $SM$: $$SM \mapsto \left( \begin{matrix} SM & 0 & \rho \\ 0 & 0 & 1 \\ \rho^T & 0 & 0\\ \end{matrix} \right)$$ Here we have used the short and long curves on the tube as the last two basis vectors (in that order). $\rho$ is the column of linking numbers of the long curve with the basis elements of $H_1\Sigma$ and $\rho^T$ is its transposed row. It is clear that in general this operation can destroy the condition of having minimal Seifert rank as defined in Definition \[def.minrank\]. An important invariant of S-equivalence is the [*Alexander polynomial*]{}, defined by $$\lbl{eq.delta} \Delta_K(t):=\det( t^{1/2}\cdot SM-t^{-1/2} SM^T)$$ for any Seifert matrix $SM$ for $K$. One can check that this is unchanged under S-equivalence; it lies in $\BZ[t^{\pm 1}]$ and satisfies the symmetry relations $\Delta_K(t^{-1})=\Delta_K(t)$ and $\Delta_K(1)=1$. Let $\Sigma$ be a Seifert surface of genus $g$. The following basis of $H_1\Sigma$ will be useful. - A [*geometric basis*]{} is a set of embedded simple closed curves $\{s_1,\dots,s_g, \ell_1,\dots,\ell_g\}$ on $\Sigma$ with the following geometric intersections $$s_i\cap s_j =\emptyset= \ell_i\cap \ell_j, \text{ and } s_i\cap \ell_j =\delta_{i,j}$$ Note that the Seifert matrix $SM_\Sigma$ for a geometric basis always satisfies $$SM_\Sigma - SM_\Sigma^T = \mat{0}{{{1\!\!1}}}{-{{1\!\!1}}}{0}.$$ - A [*trivial Alexander basis*]{} is a geometric basis such that the corresponding Seifert matrix can be written in terms of four blocks of $g \times g$-matrices as follows: $$\mat{0}{{{1\!\!1}}+U}{U^T}{V}$$ Here $U$ is an upper triangular matrix (with zeros on and below the diagonal), $U^T$ is its transpose, and $V$ is a symmetric matrix with zeros on the diagonal.
- A [*minimal Seifert*]{} basis is a trivial Alexander basis such that the matrices $U$ and $V$ are zero, so the Seifert matrix looks as simple as could be: $$\mat{0}{{{1\!\!1}}}{0}{0}$$ By starting with a disk, and then adding tubes according to the matrices $U$ and $V$, it is clear that any matrix for a trivial Alexander basis can occur as the Seifert matrix for the unknot. The curves $s_i$ above are the short curves on the tubes, and $\ell_j$ are the long curves. The matrix $U$ must be upper triangular because the long curves can only link those short curves that are already present. The following lemma explains our choice of notation above: Any Seifert surface has a geometric basis. Moreover, - A knot has trivial Alexander polynomial if and only if there is a Seifert surface with a trivial Alexander basis. - A knot has minimal Seifert rank if and only if it has a Seifert surface with a minimal Seifert basis. By the classification of surfaces, they always have a geometric basis. If a knot has a trivial Alexander basis, then an elementary computation using Equation implies that it has trivial Alexander polynomial. Finally, the Seifert matrix for a minimal Seifert basis obviously has minimal rank. So we are left with showing the two converses of the statements in our lemma. Start with a knot with trivial Alexander polynomial. Then by Trotter’s theorem [@Tr] it is S-equivalent to the unknot, and hence its Seifert form is obtained from the empty form by a sequence of algebraic additions of tubes. Then an easy induction implies that the resulting Seifert matrix $SM_\Sigma$ is as claimed, so we are left with showing that the corresponding basis can be chosen to be geometric on $\Sigma$. But since $SM_\Sigma - SM_\Sigma^T$ is the standard (hyperbolic) form, we get a symplectic isomorphism of $H_1\Sigma$ which sends the given basis into a standard (geometric) one.
Since the mapping class group realizes any such symplectic isomorphism, we see that the given basis can be realized by a geometric basis. Finally, consider a Seifert surface with minimal Seifert rank. By assumption, there is a basis of $H_1\Sigma$ so that the Seifert matrix looks like $$SM_\Sigma=\mat{0}{A}{0}{B}$$ Since $\Delta(1)=1$, Equation implies that $A$ must be invertible, and hence there is a base change so that the Seifert matrix has the desired form $$SM_\Sigma=\mat{0}{{{1\!\!1}}}{0}{0}$$ Just as above one shows that this matrix is also realized by a geometric basis. Every knot in $S^3$ with minimal Seifert rank $g$ can be constructed from a standard genus $g$ Seifert surface of the unknot, by tying the $2g$ bands into a 0-framed string link with trivial linking numbers: $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/standardsurface4.eps,width=2.5in}} \hspace{-1.9mm}\end{array}}$$ Clasper Surgery =============== As we mentioned in Section \[sec.question\], we can construct examples of knots that satisfy Theorem \[thm.main\] using [*surgery on claspers*]{}. Since claspers play a key role in geometric constructions, as well as in realization of quantum invariants, we include a brief discussion here. For a reference on claspers[^2] and their associated surgery, we refer the reader to [@Gu2; @H] and also [@CT; @GGP]. [*Surgery*]{} is an operation of cutting, twisting and pasting within the category of smooth manifolds. A low dimensional example of surgery is the well-known [*Dehn surgery*]{}, where we start from a framed link $L$ in a 3-manifold $M$, we cut out a tubular neighborhood of $L$, twist the boundary using the framing, and glue back. The result is a 3-dimensional manifold $M_L$. Clasper surgery is entirely analogous to Dehn surgery, except that it operates on claspers rather than links. A clasper is a thickening of a trivalent graph, and it has a preferred set of loops, called the leaves.
The degree of a clasper is the number of trivalent vertices (excluding those at the leaves). With our conventions, the smallest clasper is a Y-clasper (which has degree one and three leaves), so we explicitly exclude struts (which would be of degree zero with two leaves). A clasper of degree 1 is an embedding $G: N \to M$ of a regular neighborhood $N$ of the graph $\Ga$ (with 4 trivalent vertices and 6 edges) $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/yvaria.eps,width=3in}} \hspace{-1.9mm}\end{array}}$$ into a 3-manifold $M$. Surgery on $G$ can be described by removing the genus 3 handlebody $G(N)$ from $M$, and regluing by a certain diffeomorphism of its boundary (which acts trivially on the homology of the boundary). We will denote the result of surgery by $M_G$. To explain the regluing diffeomorphism, we describe surgery on $G$ by surgery on the following framed six component link $L$ in $M$: $L$ consists of a $0$-framed Borromean ring and an arbitrarily framed three component link, the so-called [*leaves*]{} of $G$, see the figure above. The framings of the leaves reflect the prescribed neighborhood $G(N)$ of $\Ga$ in $M$. If one of the leaves is $0$-framed and bounds an embedded disk disjoint from the rest of $G$, then surgery on $G$ does not change the 3-manifold $M$, because the gluing diffeomorphism extends to $G(N)$. In terms of the surgery on $L$ this is explained by a sequence of Kirby moves from $L$ to the empty link (giving a diffeomorphism $M_G \cong M$). However, if a second link $L'$ in $M \smallsetminus G(N)$ intersects the disk bounding the 0-framed leaf of $L$ then the pairs $(M,L')$ and $(M_G,L')$ might not be diffeomorphic. This is how claspers act on knots or links in a fixed $3$-manifold $M$, the point of view which is most relevant to this paper.
A particular case of surgery on a clasper of degree 1 (sometimes called a Y-move) looks locally as follows: $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/borro2.eps,width=3.5in}} \hspace{-1.9mm}\end{array}}$$ In general, surgery on a clasper $G$ of degree $n$ is defined in terms of simultaneous surgery on $n$ claspers $G_1, \dots, G_n$ of degree 1. The $G_i$ are obtained from $G$ by breaking its edges and inserting 0-framed Hopf linked leaves as follows: $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/edgecut.eps,width=2in}} \hspace{-1.9mm}\end{array}}$$ In particular, consider the clasper $G$ of degree 2 in Figure \[fig.example\], which has two leaves and two edges. We can insert two pairs of Hopf links in the edges of $G$ to form two claspers $G_1$ and $G_2$ of degree 1, and describe the resulting clasper surgery on $G_1$ and $G_2$ by using twice the above figure on each of the leaves of $G$. Draw the knot which is described by surgery on a clasper of degree 2 in Figure \[fig.example\]. It should be clear from the drawing why it is easier to describe knots by clasper surgery on the unknot, rather than by drawing them explicitly. Moreover, as we will see shortly, quantum invariants behave well under clasper surgery. The $Q$ invariant ================= A brief review of the $\Zrat$ invariant --------------------------------------- The quantum invariant we want to use for Theorem \[thm.main\] is the Euler-degree $2$ part of the rational invariant $\Zrat$ of [@GK1]. In this section we will give a brief review of the full $\Zrat$ invariant. Hopefully, this will underline the general ideas more clearly, and will be a useful link with our forthcoming work. $\Zrat$ is a rather complicated object; however it simplifies when evaluated on Alexander polynomial $1$ knots, as was explained in [@GK1 Remark 1.6].
In particular, it is a map of monoids (taking connected sum to multiplication) $$\Zrat: \text{Alexander polynomial 1 knots} \longto \A(\La)$$ where the range is a new algebra of diagrams with beads defined as follows. We abbreviate the ring of Laurent polynomials in $t$ as $\La:=\BZ[t^{\pm 1}]$. $\A(\La)$ is the completed $\BQ$-vector space generated by pairs $(G,c)$, where $G$ is a trivalent graph, with oriented edges and vertices, and $c:\mathrm{Edges}(G)\to \La$ is a $\La$-coloring of $G$, modulo the relations: $\AS$, $\IHX$, Orientation Reversal, Linearity, Holonomy and Graph Automorphisms, see Figure \[relations4\] below. $\A(\La)$ is graded by the [*Euler degree*]{} (that is, the number of vertices of graphs) and the completion is with respect to this grading. $\A(\La)$ is a commutative algebra with multiplication given by the disjoint union of graphs. $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/relations4.eps,width=4.5in}} \hspace{-1.9mm}\end{array}}$$ Notice that a connected trivalent graph $G$ has $2n$ vertices, $3n$ edges, and its Euler degree equals $-2\chi(G)$, where $\chi(G)$ is the [*Euler characteristic*]{} of $G$. This explains the name “Euler degree”. Where is the $\Zrat$ invariant coming from? There is an important [*hair*]{} map $$\hair: \A(\La) \longto \A(\ast)$$ which is defined by replacing a bead $t$ by an exponential of hair: $$\strutb{}{}{t} \mapsto \sum_{n=0}^\infty \frac{1}{n!} \, \, {\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/attacchn.eps,width=.7in}} \hspace{-1.9mm}\end{array}}$$ Here, $\A(\ast)$ is the completed (with respect to the [*Vassiliev degree*]{}, that is half the number of vertices) $\BQ$-vector space spanned by vertex-oriented unitrivalent graphs, modulo the $\AS$ and $\IHX$ relations.
It was shown in [@GK1] that when evaluated on knots of Alexander polynomial 1, the Kontsevich integral $Z$ is determined by the rational invariant $\Zrat$ by: $$\lbl{eq.Zhair} Z=\hair \circ \Zrat$$ Thus, in some sense $\Zrat$ is a [*rational lift*]{} of the Kontsevich integral. Note that although the Hair map above is not 1-1 [@P], the invariants $Z$ and $\Zrat$ might still contain the same information. The existence of the $\Zrat$ invariant was predicted by Rozansky, [@R], who constructed a rational lift of the colored Jones function, i.e., for the image of the Kontsevich integral on the level of the $\mathfrak{sl}_2$ Lie algebra. The $\Zrat$ invariant was constructed in [@GK1]. How can one compute the $\Zrat$ invariant (and therefore, also the Kontsevich integral) on knots with trivial Alexander polynomial? This is a difficult question; however $\Zrat$ is a graded object, and in each degree it is a [*finite type invariant*]{} in an appropriate sense. In order to explain this, we need to recall the [*null move*]{} of [@GR], which is defined in terms of surgery on a special type of clasper. Consider a knot $K$ in a homology sphere $M$ and a clasper $G \sub M\sminus K$ whose leaves are null homologous knots in the knot complement $X=M\sminus K$. We will call such claspers [*null*]{} and will denote the result of the corresponding surgery by $(M,K)_G$. Surgery on null claspers preserves the set of Alexander polynomial 1 knots. Moreover, by results of [@Ma] and [@MN] one can untie every Alexander polynomial 1 knot via surgery on some null clasper, see [@GR Lemma 1.3]. As usual in the world of finite type invariants, if $G=\{G_1,\dots,G_n\}$ is a collection of null claspers, we set $$[(M,K),G]:= \sum_{I \subset \{1,\dots,n\}} (-1)^{|I|} (M,K)_{G_I}$$ where $|I|$ denotes the number of elements of $I$ and $(M,K)_{G_I}$ stands for the result of simultaneous surgery on $G_i$ for all $i \in I$. A [*finite type invariant of null-type $k$*]{} by definition vanishes on all such alternating sums with $ k < \deg(G):=\sum_{i=1}^n \deg(G_i). 
$ ([@GK1]) $\Zrat_{2n}$ is a finite type invariant of null-type $2n$. Furthermore, the degree $2n$ term (or [*symbol*]{}) of $\Zrat_{2n}$ can be computed in terms of the equivariant linking numbers of the leaves of $G$, as we explain next. Fix an Alexander polynomial 1 knot $(M,K)$, and consider a [*null homologous link*]{} $C \sub X$ of two ordered components, where $X=M\sminus K$. The lift $\ti C$ of $C$ to the $\BZ$-cover $\ti X$ of $X$ is a link. Since $H_1(\ti X)=0$ (due to our assumption that $\Delta(M,K)=1$) and $H_2(\ti X)=0$ (true for $\BZ$-covers of knot complements) it makes sense to consider the linking number of $\ti C$. Fix a choice of lifts $\ti C_i$ for the components of $C$. The equivariant linking number is the finite sum $$\lkZ(C_1,C_2)=\sum_{n \in \BZ} \lk(\ti C_1, t^n \, \ti C_2) \, t^n \, \in\BZ[t^{\pm 1}]=\La .$$ Shifting the lifts $\ti C_i$ by $n_i\in\BZ$ multiplies this expression by $t^{n_1-n_2}$. There is a way to fix this ambiguity by considering an arc-basing of $C$, that is a choice of disjoint embedded arcs $\ga$ in $M\sminus (K \cup C)$ from a base point to each of the components of $C$. In that case, we can choose a lift of $C \cup \ga$ to $\ti X$ and define the equivariant linking number $\lkZ(C_1,C_2)$. The result is independent of the lift of $C \cup \ga$, but of course depends on the arc-basing $\ga$. It will be useful for computations to describe an alternative way of fixing the ambiguity in the definition of equivariant linking numbers. Given $(M,K)$ consider a Seifert surface $\Sigma$ for $(M,K)$, and a link $C$ of two ordered components in $M\sminus \S$. We will call such links [*$\Sigma$-null*]{}. Notice that a $\S$-null link is $(M,K)$-null, and conversely, every $(M,K)$-null link is $\S$-null for some Seifert surface $\S$ of $(M,K)$. Given a $\S$-null link $C$ of two ordered components, one can construct the $\BZ$-cover $\ti X$ by cutting $X$ along $\Sigma$, and then putting $\BZ$ copies of this fundamental domain together to obtain $\ti X$. 
It is then obvious that there are canonical lifts of $\Sigma$-null links which lie in one fundamental domain and using them, one can define the equivariant linking number of $C$ without ambiguity. This definition of equivariant linking number agrees with the previous one if we choose basing arcs which are disjoint from $\S$. Consider a standard Seifert surface $\Sigma$ for the unknot $\O$. Let $C_i$ be two meridians of the bands of $\Sigma$; thus $(C_1,C_2)$ is $\Sigma$-null. If these bands are not dual, then $(\O,C_1,C_2)$ is an unlink and hence $\lkZ(C_1,C_2)=0$. If the bands are dual, then this 3-component link is the Borromean rings. Recall that the Borromean rings are the Hopf link with one component Bing doubled (and the other one being $\O$). Then one can pull apart that link, in the complement of $\O$, by introducing two intersections (of opposite sign) between $C_1$ and $C_2$, differing by the meridian $t$ to $\O$. This shows that in this case $$\lkZ(C_1,C_2)= t-1$$ In order to give a formula for the symbol of $\Zrat_{2n}$, we need to recall the useful notion of a complete contraction of an $(M,K)$-null clasper $G$ of degree $2n$, [@GR Sec.3]. Let $G^{break}=\{G_1,\dots, G_{2n}\}$ denote the collection of degree $1$ claspers $G_i$ which are obtained by inserting a Hopf link in the edges of $G$. Choose arcs from a fixed base point to the trivalent vertex of each $G_i$, which allows us to define the equivariant linking numbers of the leaves of $G^{break}$. Let $G^{nl}=\{G^{nl}_1,\dots, G^{nl}_{2n}\}$ denote the collection of abstract unitrivalent graphs obtained by removing the leaves of the $G_i$ (and leaving one leg, or univalent vertex, for each leaf behind). 
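The value $t-1$ can be read off directly from the definition of $\lkZ$. The following computation is our addition and its overall sign depends on orientation conventions not fixed in the text; it records the two contributions described above:

```latex
% Pulling C_1 off C_2 in the complement of \O creates two intersection
% points of opposite sign, differing by the deck transformation t.
% Choosing orientations so that the lifts satisfy
%   lk(\ti C_1, \ti C_2) = -1,  lk(\ti C_1, t\,\ti C_2) = +1,
% and lk(\ti C_1, t^n \ti C_2) = 0 for all other n, one gets
\lkZ(C_1,C_2) \;=\; \sum_{n\in\BZ} \lk(\ti C_1, t^n\,\ti C_2)\, t^n
\;=\; (-1)\cdot t^0 + (+1)\cdot t^1 \;=\; t-1.
```

With the opposite orientation conventions one obtains $1-t$, which agrees with $t-1$ up to the symmetry of the equivariant linking number.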
Then the [*complete contraction*]{} $\la G\ra\in\A(\La)$ of $G$ is defined to be the sum over all ways of gluing pairwise the legs of $G^{nl}$, with the resulting edges of each summand labelled by elements of $\La$ as follows: pick orientations of the edges of $G^{nl}$ such that pairs of legs that are glued are oriented consistently. If two legs $l$ and $l'$ are glued, with the orientation giving the order, then we attach the bead $\lkZ(l,l')$ on the edge created by the gluing. The result of a complete contraction of a null clasper $G$ is a well-defined element of $\A(\La)$. Changing the edge orientations is taken care of by the symmetry of the equivariant linking number as well as the orientation reversal relations. Changing the arcs is taken care by the holonomy relations in $\A(\La)$. Then the complete contraction $\la G\ra\in\A(\La)$ of a single clasper $G$ with $\Sigma$-null leaves is easily checked to be the sum over all ways of gluing pairwise the legs of $G^{nl}$, with the resulting edges of each summand labelled by elements of $\La$ as follows: First pick orientations of the edges of $G^{nl}$ such that pairs of legs that are glued are oriented consistently. If two legs $l$ and $l'$ are glued, with the orientation giving the order, then we attach the bead $\lkZ(l,l')$ on the edge created by the gluing. In addition, each internal edge $e$ of $G^{nl}$ is labelled by $t^n$, where $n\in\BZ$ is the intersection number of $e$ with the Seifert surface $\Sigma$. One can check directly that this way of calculating a complete contraction of a clasper $G$ with $\Sigma$-null leaves is a well-defined element of $\A(\La)$: Changing the edge orientations is taken care of by the symmetry of the equivariant linking number as well as the orientation reversal relations. The holonomy relations in $\A(\La)$ correspond beautifully to Figure \[fig.surfacehol\] in which a trivalent vertex of $G$ is pushed through $\Sigma$. 
$${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/surfacehol.eps,width=4in}} \hspace{-1.9mm}\end{array}}$$ Finally, we can state the main result on calculating the invariant $\Zrat$. ([@GK1 Thm.4]) If $(M,K)$ is a knot with trivial Alexander polynomial and $G$ is a collection of $(M,K)$-null claspers of degree $2n$, then $$\Zrat_{2n}([(M,K),G])= \la G \ra \in \A_{2n}(\La)$$ A review of the $Q$ invariant ----------------------------- We will be interested in $Q=\Zrat_2$, the loop-degree 2 part of $\Zrat$. It turns out that $Q$ takes values in a lattice $\A_{2,\BZ}(\La)$, that is the abelian subgroup of $\A_2(\La)$ generated by integer multiples of graphs with beads. The next lemma (taken from [@GK2 Lemma 5.9]) explains the definition of $\Lath$. There is an isomorphism of abelian groups: $$\lbl{eq.la} \Lath \longrightarrow \A_{2,\BZ}(\La) \hspace{0.5cm} \text{given by:} \hspace{0.5cm} \a_1 \, \a_2 \, \a_3 \mapsto {\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/Theta.eps,width=0.8in}} \hspace{-1.9mm}\end{array}}.$$ Since $\Aut(\Theta) \cong \Sym_3 \times \Sym_2$, it is easy to see that the above map is well-defined. There are two trivalent graphs of degree $2$, namely $\Theta$ and $\eyes$. Using the Holonomy Relation, we can assume that the labeling of the middle edge of $\eyes$ is $1$. In that case, the $\IHX$ relation implies that $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/theyes.eps,width=2in}} \hspace{-1.9mm}\end{array}}$$ This shows that the map in question is onto. It is also easy to see that it is a monomorphism. Let us define the [*reduced*]{} groups $$\ti\A(\La)=\mathrm{Ker}(\A(\La)\to\A(\phi))$$ induced by the augmentation map $\e:\La\to\BZ$. Let $\tiLath:=\mathrm{Ker}(\e: \Lath \to \BZ)$. 
The proof of the above lemma implies that there is an isomorphism: $$\tiLath \cong \ti\A_{2,\BZ}(\La).$$ Realization and finiteness -------------------------- Let us first assume that the ambient 3-manifold $M=S^3$. It is easy to see that $\ti \Lath$ is generated by $(t_1-1)t_2^n t_3^m$ for $n,m \in \BZ$, so we only need to realize these values. Consider a standard genus one Seifert surface $\Sigma$ of an unknot with bands $\{\a,\b \}$ and the clasper $G$ $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/2wheels.eps,width=0.5in}} \hspace{-1.9mm}\end{array}}$$ of degree $2$ (with two leaves shown as ellipses above). Choose an embedding of $G$ into $S^3 \smallsetminus \O$ in such a way that the two leaves are $0$-framed meridians of the two bands of $\Sigma$ and the two internal edges of $G$ intersect $\Sigma$ algebraically $n$ and $m$ times, respectively. Then $G$ is a $\Sigma$-null clasper and by Theorem \[thm.GKfti2\], together with Exercise \[ex.levine\], we get $$Q(S^3,\O_G)=-Q([(S^3,\O),G])= (1-t_1)t_2^n t_3^m \in \tiLath.$$ The realization result follows for $M=S^3$. For the case of a general homology sphere $M$, use the behavior of $Q$ under connected sums. To show that the constructed knots are ribbon, we refer to [@GL Lem.2.1, Thm.5], or [@CT Thm.4]. The next lemma gives a clasper construction of all minimal Seifert rank knots. We first introduce a useful definition. Consider a surface $\Sigma \subset S^3$ and a clasper $G \subset S^3\sminus \pt \Sigma$. We say that $G$ is [*$\Sigma$-simple*]{} if the leaves of $G$ are $0$-framed meridians of the bands of $\Sigma$ and the edges of $G$ are disjoint from $\Sigma$. Every knot in $S^3$ with minimal Seifert rank can be constructed from a standard Seifert surface $\Sigma$ of the unknot, by surgery on a disjoint collection of $\Sigma$-simple Y-claspers. 
The result follows by Lemma \[cor.Matveev\] and the fact, proven by Murakami-Nakanishi [@MN], that every string-link with trivial linking numbers can be untied by a sequence of Borromean moves. In terms of $\O$, these Borromean moves are $\Sigma$-simple Y-clasper surgeries (with the leaves being $0$-framed meridians to the bands of $\Sigma$). [*Proof of Proposition \[prop.finite\].*]{} (Finiteness) Consider a knot $K$ in $S^3$ with minimal Seifert rank. By Lemma \[lem.Yconstruct\] it is obtained from a standard Seifert surface $\Sigma$ of an unknot $\O$ by surgery on a disjoint collection $G$ of $\Sigma$-simple Y-claspers. The fact that $Q$ is an invariant of type $2$ implies that $$Q(S^3,K)=-Q((S^3,\O)-(S^3,\O)_G)=-\sum_{G' \sub G} Q([(S^3,\O),G']) +\sum_{G'' \sub G} Q([(S^3,\O),G''])$$ where the summation is over all claspers $G'$ and $G''$ of degree $1$ and $2$ respectively. The $Q([(S^3,\O),G''])$ terms can be computed by complete contractions and using Example \[ex.levine\], it follows that they contribute only summands of the form $(t_i-1)$. Next we simplify the remaining terms, which are given by $\Sigma$-simple Y-claspers $G'\sub G$. Note that we can work modulo $\Sigma$-simple claspers of degree $>1$ by the above argument. Using the Sliding Lemma ([@GR Lem.2.5]) we can move around all edges and finally put $G'$ into a standard position as in Figure \[fig.remain\] below. $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/remain.eps,width=1in}} \hspace{-1.9mm}\end{array}}$$ We are reduced to $\Sigma$ of genus one because if the 3 leaves of $G'$ are meridians to 3 distinct bands of $\Sigma$, the unknot $\O$ would slip off the clasper altogether, i.e., surgery on the simplified $G'$ does not alter $\O$. This means that we are left with a family of 4 examples, given by the various possibilities of the half-twists in the 3 edges of the clasper in Figure \[fig.remain\]. 
Let $\a$ and $\b$ denote the two bands of the standard genus 1 surface $\S$, and let $m_{\a}, m_{\b}$ (resp. $\ell_{\a}, \ell_{\b}$) denote the knots which are meridians (resp. longitudes) of the bands. Let $G'$ denote the $\S$-simple clasper of degree 1 as in Figure \[fig.remain\]. It has 3 leaves $m_{\a}, m_{\a}$ and $\ell_{\b}$. We have $$[(S^3,\O),G']=[(S^3,\O),G'']+[(S^3,\O),G''']$$ modulo terms of degree 2, where $G''$ is a $\S$-simple clasper with leaves $m_{\a}, m_{\a}, \ell_{\a}$ and $G'''$ is obtained from $G''$ by replacing the edge of $\ell_{\a}$ by one that intersects $\S$ once. (of the claim) Observe that $m_{\b}$ is isotopic to $\ell_{\a}$ by an isotopy rel $\S$. Use this isotopy to move the leaf $\ell_{\b}$ of $G'$ near the $\a$ handle, and use the Cutting a Leaf lemma ([@GR Lem.2.4]) to conclude the proof. Going back to the proof of Proposition \[prop.finite\], we may apply the Cutting a Leaf lemma once again to replace $G''$ by a $\S$-simple clasper with leaves two copies of $m_{\a}$ together with a meridian of one copy of $m_{\a}$. For this clasper, the surface $\S$ can slide off, and as a result surgery gives back the unknot. Work similarly for $G'''$, and conclude that $Q([(S^3,\O),G'])$ lies in the subgroup of $\Lath$ which is generated by the elements $$(t^{\e_1}_1-1),\quad (t^{\e_1}_1-1)(t_2^{\e_2}-1), \quad (t^{\e_1}_1-1) (t^{\e_2}_2-1)(t_3^{\e_3}-1)$$ for all $\e_i = \pm 1$. Using the relations in $\Lath$, it is easy to show that this subgroup is generated by the three elements as claimed in Proposition \[prop.finite\]. This concludes the proposition for knots in $S^3$. In the case of a knot $K$ with minimal Seifert rank in a general homology sphere $M$, we may untie it by surgery on a collection of $\Sigma$-simple Y-claspers, $\Sigma$ a standard Seifert surface for the unknot $\O$. That is, we may assume that $(M,K)=(S^3,\O)_G$ for some $\Sigma$-null clasper $G$ whose leaves are meridians of the bands of $\S$ and have framing $0$ or $\pm 1$. 
We can follow the previous proof to conclude our result. As we discussed previously, the rational invariant $\Zrat$ determines the Kontsevich integral via Equation . It follows that $\hair \circ Q$ is a power series of Vassiliev invariants. Although the $\hair$ map is not 1-1 in general, it is injective on diagrams with two loops, thus $\hair \circ Q$ determines $Q$. Consider the image of $t_1-1$, $(t_1-1)(t_2^{-1}-1)$ and $(t_1-1)(t_2-1)(t_3^{-1}-1)$ under the $\hair$ map in $\A(\ast)$. It follows that the Vassiliev invariants of degree $3,5$ and $5$ which separate the uni-trivalent graphs $${\begin{array}{c} \hspace{-1.3mm} \raisebox{-4pt}{\epsfig{figure=draws/vassiliev.eps,width=2in}} \hspace{-1.9mm}\end{array}}$$ determine the value of $Q$ on knots with minimal Seifert rank. Knots with trivial Alexander polynomial are topologically slice =============================================================== A complete argument for this fact can be found in [@F2 Thm.7], see also [@FQ 11.7B]. However, that argument unnecessarily uses the surgery exact sequence for the trivial as well as infinite cyclic fundamental group. Moreover, one needs to know Wall’s surgery groups $L_i(\BZ[\BZ])$ for $i=4,5$. We shall give a direct argument in the spirit of [@F1] but without assuming that the knot has minimal Seifert rank (which Freedman did assume indirectly). The simple new ingredient is the triangular base change, Lemma \[lem.triangular\]. Note that at the time of writing [@F1], the topological disk embedding theorem was not known, so the outcome of the constructions below was much weaker than an actual topological slice. The direct argument uses a single application of Freedman’s main disk embedding theorem [@F2]. In [@F2] it is not stated in its most general form which we need here, so we really use the disk embedding theorem [@FQ 5.1B]. So let’s first recall this basic theorem. 
It works in any $4$-manifold with [*good*]{} fundamental group, an assumption which to this day is not known to be really necessary. In any case, cyclic groups are known to be good, which is all we need in this appendix. Note that the second assumption, on dual $2$-spheres, is well known to be necessary. Without this assumption, the proof below would imply that every “algebraically slice” knot, i.e., a knot whose Seifert form has a Lagrangian, is topologically slice. This contradicts for example the invariants of [@CG]. A more direct reason that this assumption is necessary was recently given in [@ST]: In the absence of dual $2$-spheres, there are nontrivial secondary invariants (in two copies of the group ring modulo certain relations), which are obstructions to a disk being homotopic to an embedding. (Disk embedding theorem [@FQ 5.1B]) Let $\Delta_j:(D^2,S^1)\to (N^4,\partial N)$ be continuous maps of disks which are embeddings on the boundary, and assume that all intersection and self-intersection numbers vanish in $\BZ[\pi_1N]$. If $\pi_1N$ is good and there exist algebraically dual $2$-spheres, then there is a regular homotopy (rel. boundary) which takes the $\Delta_j$ to disjoint (topologically flat) embeddings. The assumption on dual $2$-spheres (which is an algebraic condition) means that there are framed immersions $f_i:S^2\to N$ such that the intersection numbers in $\BZ[\pi_1N]$ satisfy $$\lambda(f_i,\Delta_j)=\delta_{i,j}$$ The following simple observation turns out to be crucial for Alexander polynomial 1 knots. There exist algebraically dual $2$-spheres for $\Delta_i$ if and only if there exist framed immersions $g_i:S^2\to N$ with $$\lambda(g_i,\Delta_i)=1 \text{ and } \lambda(g_i,\Delta_j)=0 \text{ for } i>j.$$ So the matrix of intersection numbers of $g_i$ and $\Delta_j$ needs to have zeros only below the diagonal. 
Define $f_1:=g_1$, and then inductively $$f_i:=g_i - \sum_{k<i} \lambda(g_i,\Delta_k) f_k.$$ Then one easily checks that $\lambda(f_i,\Delta_j)=\delta_{i,j}$. The disk embedding theorem is proven by an application of another embedding theorem [@FQ 5.1A], to the Whitney disks pairing the intersections among the $\Delta_i$. Thus [@FQ Theorem 5.1A] might be considered as more basic. It sounds very similar to [@FQ Theorem 5.1B], except that the assumptions on trivial intersection and self-intersection numbers are moved from the $\Delta_i$ to the dual $2$-spheres. Hence one loses the information about the regular homotopy class of $\Delta_i$. In most applications, one wants this homotopy information, hence we have stated Theorem 5.1B as the basic disk embedding theorem. However, in the application below we might as well have used 5.1A directly, by interchanging the roles of $s_i$ and $\ell_i$. The following proof will be given for knots (and slices) in $(D^4,S^3)$ but it works just as well in $(C^4,M^3)$ where $M$ is any homology sphere and $C$ is [*the*]{} contractible topological $4$-manifold with boundary $M$. Since the knot $K$ has trivial Alexander polynomial, Lemma \[lem.basis\] shows that we can choose a Seifert surface $\Sigma_1$ with a trivial Alexander basis $\{s_1,\dots,s_g,\ell_1,\dots,\ell_g\}$. Pick generically immersed disks $\Delta(s_j)$ (respectively $\Delta(\ell_j)$) in $D^4$ which bound $s_j\down$ (respectively $\ell_j$). So these disks are disjoint on the boundary, and the intersection numbers satisfy $$\Delta(s_i)\cdot\Delta(s_j)=\lk(s_i\down,s_j\down)=\lk(s_i\down,s_j)=0 \quad\text{ and }\quad \Delta(s_i)\cdot\Delta(\ell_j)=\lk(s_i\down,\ell_j).$$ By Definition \[def.basis\], the latter is a triangular matrix, which will turn out to be the crucial fact. Now we “push” the Seifert surface $\Sigma_1$ slightly into $D^4$ to obtain a surface $\Sigma \subset D^4$, and call $N$ the complement of (an open neighborhood of) $\Sigma$ in $D^4$. 
The basic idea of the proof is to use the disk embedding theorem in $N$ to show that $\Sigma$ can be ambiently surgered into a disk which will be a slice disk for our knot $K$. To understand the $4$-manifold $N$ better, note that by Alexander duality $$H_1N \cong H^2(\Sigma,\pt \Sigma) \cong\BZ \text{ and } H_2N \cong H^1(\Sigma,\pt \Sigma) \cong\BZ^{2g}.$$ Moreover, a Morse function on $N$ is given by restricting the radius function on $D^4$. Reading from the center of $D^4$ outward, this Morse function has one critical point of index 0, one of index 1 (the minimum of $\Sigma$), and $2g$ critical points of index 2, one for each band of $\Sigma$. Together with the above homology information, this implies that $N$ is homotopy equivalent to a wedge of a circle and $2g$ $2$-spheres. To make the construction of $N$ more precise, we prefer to add an exterior collar $(S^3 \times [1,1.5], K\times [1,1.5])$ to $D^4$, i.e. we work with the knot $K$ in the 4-disk $D_{1.5}$ of radius $1.5$. Then the pushed in Seifert surface $\Sigma\subset D_{1.5}$ is just $(K\times [1,1.5])\cup \Sigma_1$. The normal bundle of $\Sigma_1$ in $D_{1.5}$ can then be canonically decomposed as $$\nu(\Sigma_1,D_{1.5}) \cong \nu(S^3,D_{1.5}) \times \nu(\Sigma_1,S^3) =: \BR_x \times \BR_y$$ Since $N^4$ is the complement of an open thickening of $\Sigma$ in $D_{1.5}$, we may assume that for points on $\Sigma_1$ the normal coordinates $x$ vary in the open interval $(0.9,1.1)$, and $y$ in $(-\epsilon,\epsilon)$. Here $\epsilon>0$ is normalized so that for a curve $\alpha= \alpha \times 1 \times 0$ on $\Sigma_1$ one has $$\alpha \times 1 \times -\epsilon = \alpha\down \quad \text { and } \quad \alpha \times 1 \times \epsilon = \alpha\up.$$ Note that by construction, the disks $\Delta(s_j)$ lie in $N$ and have their boundary $s_i\down$ in $\partial N$ and hence one can attempt to apply the disk embedding theorem to these disks. 
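As a consistency check (our addition, not part of the original argument), the Morse-theoretic handle count matches the homology obtained from Alexander duality:

```latex
% Handles of N read off from the radius function: one 0-handle,
% one 1-handle (the minimum of \Sigma), and 2g 2-handles, so
\chi(N) \;=\; 1 - 1 + 2g \;=\; 2g,
% which agrees with the Euler characteristic of a wedge of a
% circle and 2g 2-spheres:
\qquad
\chi\bigl(S^1 \vee \textstyle\bigvee^{2g} S^2\bigr)
\;=\; \rank H_0 - \rank H_1 + \rank H_2 \;=\; 1 - 1 + 2g.
```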
If we can do this successfully, then the $\Delta(s_j)$ may be replaced by disjoint embeddings and hence we can surger $\Sigma$ into a slice disk for our knot $K$. Let’s check the assumptions in the disk embedding theorem: As mentioned above, $\pi_1N\cong \BZ$ is a good group. By construction, the (self-) intersections among the $\Delta(s_j)$ vanish algebraically, even in the group ring $\BZ[\pi_1N]$, because these disks lie in a simply connected part of $N$. Finally, we need to check that the $\Delta(s_j)$ have algebraically dual $2$-spheres. Note that this must be the place where the assumption on the Alexander polynomial is really used, since so far we have only used that $K$ is “algebraically slice”. We start with $2$-dimensional tori $\T_i$ which are the boundaries of small normal bundles of $\Sigma$ in $D_{1.5}$, restricted to the curves $\ell_i$ in our trivial Alexander basis of $\Sigma_1$. More precisely, $$\T_i:=\ell_i \times S^1_t \text{ where } S^1_t:=[0.8,1.2] \times \{-2 \epsilon,2 \epsilon\} \cup \{0.8,1.2\} \times [-2 \epsilon,2 \epsilon]$$ in our normal coordinates introduced above. Note that $S^1_t$ is a (square shaped) meridian to $\Sigma$ and freely generates $\pi_1N$. By construction, these $\T_i$ lie in our $4$-manifold $N$. Moreover, they are disjointly embedded and dual to $\Delta(s_j)$ in the sense that the [*geometric*]{} intersections are $$\T_i\cap \Delta(s_j) = (\ell_i\cap s_j) \times (0.8 \times - \epsilon) = \delta_{i,j}.$$ Hence the $\T_i$ satisfy all properties of dual $2$-spheres, except that they are not $2$-spheres! However, we can use our disks $\Delta(\ell_i)$ with boundary $\ell_i$ as follows. First remove collars $\ell_i \times (0.8,1]$ from these disks (without changing their name) so that $\Delta(\ell_i)$ have boundary equal to the “long curve” $\ell_i\times 0.8$ on $\T_i$. Using two parallel copies of $\Delta(\ell_i)$ we can surger the $\T_i$ into $2$-spheres $g_i$. 
These are framed because of our assumption that the $\ell_i$ are “untwisted”, i.e. that $\lk(\ell_i,\ell_i\down)=0$ (which is used only modulo 2). The equivariant intersection numbers are $$\lambda(g_i,\Delta(s_j))=\delta_{i,j} + \Delta(\ell_i)\cdot\Delta(s_j)(1-t)=\delta_{i,j} +\lk(\ell_i,s_j\down)(1-t) \quad \in\quad\BZ[\pi_1N]=\BZ[t^{\pm 1}]$$ because the single intersection point of $\Delta(s_i)$ with $\T_i$ remains and any geometric intersection point between $\Delta(\ell_i)$ and $\Delta(s_j)$ is now turned into exactly two (oppositely oriented) intersections of $g_i$ with $\Delta(s_j)$. These differ by the group element $t$ going around the short curve $S^1_t$ of $\T_i$. By our assumption on the linking numbers, the resulting $2$-spheres $g_i$ satisfy the triangular condition from Lemma \[lem.triangular\] and can hence be turned into dual spheres for $\Delta(s_j)$. Thus we have checked all assumptions in the disk embedding theorem, and hence we may indeed surger $\Sigma$ to a slice disk for $K$ as planned. Recall that the topological surgery and s-cobordism theorems in dimension 4 (for all fundamental groups) are equivalent to certain “atomic” links being free slice [@FQ Ch. 12]. These atomic links are all boundary links with minimal Seifert rank in the appropriate sense. In particular, if the disk embedding theorem above was true for free fundamental groups, then the proof above (without needing our triangular base change) would show how to find free slices for all the atomic links. This shows how one reduces the whole theory to the disk embedding theorem for free fundamental groups. [\[EMSS\]]{} D. Bar-Natan, S. Garoufalidis, [*On the Melvin-Morton-Rozansky conjecture*]{}, Inventiones [**125**]{} (1996) 103–133. A. J. Casson and C. McA. Gordon, [*On slice knots in dimension three*]{}, Proc. Symp. in Pure Math. XXX, part 2, 39-53, 1978. J. Conant and P. Teichner, [*Grope cobordism of classical knots*]{}, preprint 2000, [math.GT/0012118]{}. T. Cochran, K. 
Orr and P. Teichner, [*Knot concordance, Whitney towers and $L^2$ signatures*]{}, To appear in the Annals of Math. [math.GT/9908117]{}. M. Freedman, [*A surgery sequence in dimension four: the relations with knot concordance*]{}, Inventiones [**68**]{} (1982) 195–226. M. Freedman, [*The Disk theorem for four dimensional manifolds*]{}, Proceedings of the ICM in Warsaw 1983, 647–663. [to3em]{}and F. Quinn, [*Topology of 4-manifolds*]{}, Princeton University Press, Princeton NJ 1990. S. Garoufalidis, [*Whitehead doubling persists*]{}, preprint 2000, [math.GT/0003189]{}. [to3em]{}, M. Goussarov and M. Polyak, [*Calculus of clovers and finite type invariants of 3-manifolds*]{}, Geometry and Topology, [**5**]{} (2001) 75–108. [to3em]{}and L. Rozansky, [*The loop expansion of the Kontsevich integral, the null-move and $S$-equivalence*]{}, preprint [math.GT/0003187]{}, to appear in Topology. [to3em]{}and A. Kricker, [*A rational noncommutative invariant of boundary links*]{}, preprint 2001, [math.GT/0105028]{}. [to3em]{}and [to3em]{}, [*Finite type invariants of cyclic branched covers*]{}, preprint 2001, [math.GT/0107220]{}. [to3em]{}and J. Levine, [*Concordance and 1-loop clovers*]{}, Algebraic and Geometric Topology, [**1**]{} (2001) 687–697. M. Goussarov, [*Finite type invariants and $n$-equivalence of 3-manifolds*]{}, C. R. Acad. Sci. Paris Ser. I. Math. [**329**]{} (1999) 517–522. M. Goussarov, [*Knotted graphs and a geometrical technique of n-equivalence*]{}, St. Petersburg Math. J. [**12-4**]{} (2001). K. Habiro, [*Claspers and finite type invariants of links*]{}, Geometry and Topology, [**4**]{} (2000) 1–83. A. Kricker, B. Spence, I. Aitchinson, [*Cabling the Vassiliev invariants*]{}, J. Knot Theory and its Rami. [**6**]{} (1997) 327–358. J. Levine, [*Knot modules*]{}, Transactions AMS [**229**]{} (1977) 1–51. S. V. Matveev, [*Generalized surgery of three-dimensional manifolds and representations of homology spheres*]{}, Math. Notices Acad. Sci. 
USSR, [**42:2**]{} (1987) 651–656. H. Murakami and Y. Nakanishi, [*On a certain move generating link homology*]{}, Math. Annalen [**284**]{} (1989) 75–89. B. Patureau-Mirand, [*Non-Injectivity of the “hair” map*]{}, preprint 2002, [math.GT/0202065]{}. L. Rozansky, [*A rationality conjecture about Kontsevich integral of knots and its implications to the structure of the colored Jones polynomial*]{}, preprint [math.GT/0106097]{}. R. Schneiderman and P. Teichner, [*Higher order intersection numbers of 2-spheres in 4-manifolds*]{}, Algebraic & Geometric Topology [**1**]{} (2000) 1–29. H. Trotter, [*On $S$-equivalence of Seifert matrices*]{}, Inventiones [**20**]{} (1973) 173–207. [^1]: The authors are partially supported by NSF grants DMS-02-03129 and DMS-00-72775 respectively. The second author was also supported by the Max-Planck Gesellschaft. This and related preprints can also be obtained at [http://www.math.gatech.edu/$\sim$stavros]{} and [http://math.ucsd.edu/$\sim$teichner]{} 1991 [*Mathematics Classification.*]{} Primary 57N10. Secondary 57M25. Alexander polynomial, knot, Seifert surface, Kontsevich integral, concordance, slice, clasper. [^2]: By clasper we mean precisely the object called [*clover*]{} in [@GGP]. For the sake of Peace in the World, after the Kyoto agreement of September 2001 at RIMS, we decided to follow this terminology.
--- abstract: 'Weyl denominator identity for the basic simple Lie superalgebras was formulated by V. Kac and M. Wakimoto and was proven by them for the defect one case. In this paper we prove the identity for the rest of the cases.' address: 'Dept. of Mathematics, The Weizmann Institute of Science, Rehovot 76100, Israel' author: - Maria Gorelik title: 'Weyl denominator identity for finite-dimensional Lie superalgebras' --- [^1] Introduction {#intro} ============ The basic simple Lie superalgebras are finite-dimensional simple Lie superalgebras, which have a reductive even part and admit an even non-degenerate invariant bilinear form. These algebras were classified by V. Kac in [@Ksuper] and the list (excluding the Lie algebra case) consists of four series: $A(m,n), B(m,n), C(m), D(m,n)$ and the exceptional algebras $D(2,1,a), F(4), G(3)$. Let $\fg$ be a basic simple Lie superalgebra with a fixed triangular decomposition $\fg=\fn_-\oplus\fh\oplus\fn_+$, and let $\Delta_+=\Delta_{+,0}\coprod \Delta_{+,1}$ be the corresponding set of positive roots. The Weyl denominator associated to the above data is $$R:=\frac{\prod_{\alpha\in\Delta_{+,0}}(1-e^{-\alpha})} {\prod_{\alpha\in\Delta_{+,1}}(1+e^{-\alpha})}.$$ If $\fg$ is a finite-dimensional simple Lie algebra (i.e., $\Delta_1=\emptyset$), then the Weyl denominator is given by the Weyl denominator identity $$Re^{\rho}=\sum_{w\in W} \sgn(w)e^{w\rho},$$ where $\rho$ is the half-sum of the positive roots, $W$ is the Weyl group, i.e. the subgroup of $GL(\fh^*)$ generated by the reflections with respect to the roots, and $\sgn(w)\in\{\pm 1\}$ is the sign of $w\in W$. This identity may be viewed as the character of the trivial representation of the corresponding Lie algebra. The Weyl denominator identities for superalgebras were formulated and partially proven (for $A(m-1,n-1), B(m,n), D(m,n)$ with $\min(m,n)=1$ and for $C(n), D(2,1,a), F(4), G(3)$) by V. Kac and M. Wakimoto in [@KW94]. 
In order to state the Weyl denominator identity for basic simple Lie superalgebras we need the following notation. Let $\Delta^{\#}$ be the “largest” component of $\Delta_0$, see \[Delfin\] for the definition. Let $W$ be the Weyl group of $\fg_0$, i.e. the subgroup of $GL(\fh^*)$ generated by the reflections with respect to the even roots $\Delta_0$, and let $\sgn(w)$ be the sign of $w$. One has $W=W^{\#}\times W_2$, where $W^{\#}$ is the Weyl group of the root system $\Delta^{\#}$, i.e. the subgroup of $W$ generated by the reflections with respect to the roots from $\Delta^{\#}$, and $W_2$ is the Weyl group of $\Delta_0\setminus\Delta^{\#}$. Set $\rho_0:=\sum_{\alpha\in\Delta_{+,0}}\alpha/2,\ \ \rho_1:=\sum_{\alpha\in\Delta_{+,1}}\alpha/2,\ \ \rho:=\rho_0-\rho_1$. A subset $\Pi$ of $\Delta_+$ is called a [*set of simple roots*]{} if the elements of $\Pi$ are linearly independent and $\Delta_+\subset \sum_{\alpha\in \Pi}\mathbb{Z}_{\geq 0}\alpha$. For each functional $f:V\to\mathbb{R}$ which does not vanish on $\Delta$, the corresponding set of positive roots $\Delta_+(f):=\{\alpha\in\Delta|\ f(\alpha)>0\}$ contains a unique system of simple roots, which we denote by $\Pi(f)$. A subset $S$ of $\Delta$ is called [*maximal isotropic*]{} if the elements of $S$ form a basis of a maximal isotropic space in $V$. By [@KW94], $\Delta$ contains a maximal isotropic subset and each maximal isotropic subset is a subset of a set of simple roots (for a certain functional $f$). Fix a maximal isotropic subset $S\subset \Delta$, choose a set of simple roots $\Pi$ containing $S$, and choose a functional $f:V\to \mathbb{R}$ in such a way that $\Pi=\Pi(f)$. Let $R$ be the Weyl denominator for this choice of $f$. The following Weyl denominator identity was suggested by V. Kac and M. Wakimoto in [@KW94]: $$\label{denomKW} Re^{\rho}=\sum_{w\in W^{\#}} \sgn(w) w\bigl(\frac{e^{\rho}}{\prod_{\beta\in S}(1+e^{-\beta})}\bigr).$$ If $S$ is empty (i.e., $(-,-)$ is positive or negative definite) the denominator identity takes the form $Re^{\rho}=\sum_{w\in W} \sgn(w)e^{w\rho}$.
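The smallest defect-one instance of (\[denomKW\]) is $\fgl(2|1)$: with $\Pi=\{\vareps_1-\vareps_2,\vareps_2-\delta\}$ and $S=\{\vareps_2-\delta\}$ one has $\rho=\delta-\vareps_2$, $\Delta^{\#}=A_1$ and $W^{\#}=S_2$ permuting $\vareps_1,\vareps_2$. Both sides are then rational functions in $a=e^{\vareps_1},\ b=e^{\vareps_2},\ c=e^{\delta}$ (this realization and the names are ours), and both reduce to $c(a-b)/((a+c)(b+c))$; since both sides are rational functions of low degree, they can be compared at exact rational points. A small sanity check, added here as an illustration:

```python
from fractions import Fraction

def lhs(a, b, c):
    # R * e^rho for gl(2|1): R = (1 - e^{-(eps1-eps2)}) /
    # ((1 + e^{-(eps1-delta)}) (1 + e^{-(eps2-delta)})), rho = delta - eps2
    R = (1 - b / a) / ((1 + c / a) * (1 + c / b))
    return R * (c / b)

def rhs(a, b, c):
    # sum over W# = S2 (swapping eps1 <-> eps2) of
    # sgn(w) * w( e^rho / (1 + e^{-beta}) ), with beta = eps2 - delta
    return (c / b) / (1 + c / b) - (c / a) / (1 + c / a)

for a, b, c in [(2, 3, 5), (7, 11, 13), (1, 4, 9)]:
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    assert lhs(a, b, c) == rhs(a, b, c)
```

Both sides indeed simplify to $c(a-b)/((a+c)(b+c))$, which is a one-line algebraic verification of the defect-one case proven in [@KW94].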
In this case either $\Delta_1$ is empty (i.e., $\fg$ is a Lie algebra), or $\fg=\mathfrak{osp}(1,2l)$ (type $B(0,l)$). The Weyl denominator identity for the case $\mathfrak{osp}(1,2l)$ was proven in [@K77]; for the case when $S$ has cardinality one the identity was proven in [@KW94]. The Weyl denominator identity for the root system of a Lie (super)algebra $\fg$ can again be naturally interpreted as the character of a one-dimensional representation of $\fg$. The proofs in the abovementioned cases ([@K77],[@KW94]) are based on an analysis of the highest weights of irreducible subquotients of the Verma module $M(0)$ over $\fg$. In this paper we give a proof of the Weyl denominator identity (\[denomKW\]) for the case when $S$ has cardinality greater than one. A similar proof works for the case when the cardinality of $S$ is one. Unfortunately, our proof does not use representation theory, but requires an analysis of the root systems. The proof is based on a case-by-case verification of the following facts: \(i) the monomials appearing in the right-hand side of (\[denomKW\]) are of the form $e^{\rho-\nu}$, where $\nu\in Q^+:=\sum_{\alpha\in\Pi} \mathbb{Z}_{\geq 0}\alpha$, and the coefficient of $e^{\rho}$ is one; \(ii) the right-hand side of (\[denomKW\]) is $W$-skew-invariant (i.e., $w\in W$ acts by multiplication by $\sgn(w)$). Taking into account that for any $\lambda\in V$ the stabilizer of $\lambda$ in $W$ is either trivial or contains a reflection, and that if the stabilizer is trivial and $W\lambda\subset (\rho_0-\sum_{\alpha\in\Delta_+}\mathbb{Z}_{\geq 0}\alpha)$, then $\lambda=\rho_0$, we easily deduce the identity (\[denomKW\]) from (i), (ii). I. M. Musson informed us that he has an unpublished proof of the Weyl denominator identity for basic simple Lie superalgebras. The Weyl denominator identity for the affinization of a simple finite-dimensional Lie superalgebra with non-zero Killing form was also formulated by V. Kac and M.
Wakimoto in [@KW94] and was proven for the defect one case. We prove this identity in [@G]. [*Acknowledgments.*]{} I am very grateful to V. Kac for his patience and useful comments. I would like to thank D. Novikov and A. Novikov for their support. The algebra $\cR$ ================= In this section we introduce the algebra $\cR$. Since the main technical difficulty in our proof of the denominator identity comes from the existence of different triangular decompositions, we illustrate how the proof works when there is only one triangular decomposition (see \[liecase\],\[qn\]). Notation {#notat} -------- Denote by $Q$ the root lattice of $\fg$ and by $Q^+$ the positive part of the root lattice: $Q^+:= \sum_{\alpha\in\Pi} \mathbb{Z}_{\geq 0}\alpha$; we introduce the following partial order on $V$: $$\mu\leq\nu\ \text{ if } (\nu-\mu)\in\sum_{\alpha\in\Delta_+} \mathbb{R}_{\geq 0}\alpha.$$ Introduce the height function $\htt:Q^+\to\mathbb{Z}_{\geq 0}$ by $$\ \htt (\sum_{\alpha\in\Pi}m_{\alpha}\alpha):=\sum_{\alpha\in\Pi}m_{\alpha}.$$ We use the following notation: for $X\subset \mathbb{R}, Y\subset V$ we set $XY:=\{xy|x\in X, y\in Y\}$; for instance, $Q^+:= \mathbb{Z}_{\geq 0}\Pi$. Note that $\Delta_0$ is the root system of the reductive Lie algebra $\fg_0$. In particular, all isotropic roots are odd. Both sets $\Delta_0, \Delta_1$ are $W$-stable: $W\Delta_i=\Delta_i$. The algebra $\cR$ {#supp} ----------------- Denote by $\mathbb{Q}[e^{\nu},\ \nu\in V]$ the algebra of polynomials in $e^{\nu},\nu\in V$. Let $\cR$ be the algebra of rational functions of the form $$\frac{X}{\prod_{\alpha\in\Delta_+} (1+a_{\alpha}e^{-\alpha})^{m_{\alpha}}},$$ where $X\in \mathbb{Q}[e^{\nu},\ \nu\in V]$ and $a_{\alpha}\in \mathbb{Q}$, $m_{\alpha}\in\mathbb{Z}_{\geq 0}$. Clearly, $\cR$ contains the rational functions of the form $\frac{X}{\prod_{\alpha\in\Delta}(1+a_{\alpha}e^{-\alpha})^{m_{\alpha}}}$, where $X,a_{\alpha},m_{\alpha}$ as above.
The group $W$ acts on $\cR$ by automorphisms, mapping $e^{\nu}$ to $e^{w\nu}$. Say that $P\in\cR$ is [*$W$-invariant*]{} (resp., [*$W$-skew-invariant*]{}) if $wP=P$ (resp., $wP=\sgn(w)P$) for every $w\in W$. ### For a sum $Y:=\sum b_{\mu} e^{\mu}, b_{\mu}\in\mathbb{Q}$ introduce the [*support*]{} of $Y$ by the formula $$\supp(Y):=\{\mu|\ b_{\mu}\not=0\}.$$ Any element of $\cR$ can be uniquely expanded in the form $$\frac{\sum_{i=1}^m a_ie^{\nu_i}}{\prod_{\alpha\in\Delta_+} (1+a_{\alpha}e^{-\alpha})^{m_{\alpha}}}=\sum_{i=1}^m \sum_{\mu\in Q^+} b_{i,\mu}e^{\nu_i-\mu},\ b_{i,\mu}\in\mathbb{Q}.$$ For $Y\in\cR$ denote by $\supp(Y)$ the support of its expansion; by above, $\supp(Y)$ lies in a finite union of cones of the form $\nu-Q^+$. ### {#regorb} We call $\lambda\in V$ [*regular*]{} if $\Stab_W \lambda=\{\id\}$; we call the orbit $W\lambda$ regular if $\lambda$ is regular (so the orbit consists of regular points). It is well-known that for $\lambda\in V$ the stabilizer $\Stab_W \lambda$ is either trivial or contains a reflection (see \[cor1\] (ii)). Therefore the stabilizer of a non-regular point $\lambda\in V$ contains a reflection. As a result, the space of $W$-skew-invariant elements of $\mathbb{Q}[e^{\nu},\ \nu\in V]$ is spanned by $\sum_{w\in W} \sgn(w) e^{w\lambda}$, where $\lambda\in V$ is regular. In particular, the support of a $W$-skew-invariant element of $\mathbb{Q}[e^{\nu},\ \nu\in V]$ is a union of regular $W$-orbits. Lie algebra case {#liecase} ---------------- The denominator identity for a Lie algebra is $$Re^{\rho}=\prod_{\alpha\in\Delta_+} (1-e^{-\alpha})e^{\rho} =\sum_{w\in W}\sgn(w) e^{w\rho}.$$ There are several proofs of this identity. The proof which we are going to generalize is the following. Observe that $Re^{\rho}\in\mathbb{Q}[e^{\nu},\ \nu\in V]$ and $\supp(Re^{\rho})\subset (\rho-Q^+)$.
Moreover, $Re^{\rho}=\prod_{\alpha\in\Delta_+} (e^{\alpha/2}-e^{-\alpha/2})$ is $W$-skew-invariant, so $\supp(Re^{\rho})$ is a union of regular orbits lying in $(\rho-Q^+)$. However, $W\rho$ is the only regular orbit lying entirely in $(\rho-Q^+)$, see \[cor1\] (iii). Hence $Re^{\rho}$ is proportional to $\sum_{w\in W}\sgn(w) e^{w\rho}$. Since the coefficient of $e^{\rho}$ in the expression $Re^{\rho}$ is $1$, the coefficient of proportionality is $1$. Case $Q(n)$ {#qn} ----------- For the case $\fg:=Q(n)$ one has $\fg_0=\fgl(n)$, $W=S_n$, $\Delta_{+,0}=\Delta_{+,1}=\{\vareps_i-\vareps_j\}_{1\leq i<j\leq n}$ and so $\rho_0=\rho_1,\ \rho=0$. The Weyl denominator is $R=\prod_{\alpha\in\Delta_{+,0}}\frac{1-e^{-\alpha}} {1+e^{-\alpha}}$. For each $S\subset\Delta_{+,1}$ define $$A(S):=\{w\in S_n|\ wS\subset\Delta_{+,0}\}, \ \ a(S):=\sum_{w\in A(S)} \sgn(w).$$ ### {#section-1} For each $S\subset\Delta_{+,1}$ one has $$a(S)R=\sum_{w\in S_n} \sgn(w)\frac{1}{\prod_{\beta\in S} (1+e^{-w\beta})}.$$ Observe that the support of both sides of the formula lies in $-Q^+$ and that the coefficients of $1=e^0$ in both sides are equal to $a(S)$. Multiplying both sides of the formula by the $W$-invariant expression $\prod_{\alpha\in\Delta_{+,1}}(1+e^{-\alpha})e^{\rho_1}= \prod_{\alpha\in\Delta_{+,1}}(e^{\alpha/2}+e^{-\alpha/2})$ we obtain $$\begin{array}{ll} Y:&=\prod_{\alpha\in\Delta_{+,0}}(1-e^{-\alpha})e^{\rho_0}- \prod_{\alpha\in\Delta_{+,1}}(1+e^{-\alpha})e^{\rho_1} \sum_{w\in S_n} \sgn(w)\frac{1}{\prod_{\beta\in S} (1+e^{-w\beta})}\\ &=\prod_{\alpha\in\Delta_{+,0}}(1-e^{-\alpha})e^{\rho_0}- \sum_{w\in S_n} \sgn(w)w\bigl(\prod_{\alpha\in\Delta_{+,0}\setminus S} (1+e^{-\alpha})e^{\rho_0}\bigr). \end{array}$$ Since $\prod_{\alpha\in\Delta_{+,0}}(1-e^{-\alpha})e^{\rho_0}= \prod_{\alpha\in\Delta_{+,0}}(e^{\alpha/2}-e^{-\alpha/2})$ is $W$-skew-invariant, $Y$ is also $W$-skew-invariant. Clearly, $Y\in\mathbb{Q}[e^{\nu},\ \nu\in V]$, so, by \[regorb\], $\supp(Y)$ is a union of regular $W$-orbits.
By above, $\supp(Y)\subset (\rho_0-Q^+)\setminus\{\rho_0\}$. However, by \[cor1\] (iii), any regular $W$-orbit intersects $\rho_0+Q^+$. Hence $Y=0$ as required. ### {#section-2} In order to obtain a formula for the Weyl denominator $R$, we choose $S$ such that $a(S)\not=0$. Taking $S=\{\vareps_1-\vareps_n,\vareps_2-\vareps_{n-1},\ldots, \vareps_{[\frac{n}{2}]}-\vareps_{n+1-[\frac{n}{2}]}\}$ and using \[lemQ\], we obtain the following formula $$R=\frac{1}{[n/2]!}\sum_{w\in S_n} \sgn(w)\frac{1}{\prod_{\beta\in S} (1+e^{-w\beta})},$$ which appears in [@KW94] (7.1) (up to a constant factor). Note that such an $S$ has minimal cardinality: if the cardinality of $S$ is less than $[\frac{n}{2}]$, then $a(S)=0$. Indeed, if the cardinality of $S$ is less than $[\frac{n}{2}]$, then there is a root $\vareps_i-\vareps_j$, which does not belong to the span of $S$, and thus $s_{\vareps_i-\vareps_j}A(S)=A(S)$; since $\sgn(w)+\sgn(s_{\vareps_i-\vareps_j}w)=0$, this forces $a(S)=0$. ### {#section-3} [lemQ]{} Set $A:=\{\sigma\in S_n|\ \forall i\leq \frac{n}{2}\ \ \ \sigma(i)>\sigma(n+1-i)\}$. Then $$\sum_{\sigma\in A} \sgn(\sigma)=[n/2]!$$ For each $\sigma\in A$ let $P(\sigma)$ be the set of pairs $\{(\sigma(1),\sigma(n));(\sigma(2),\sigma(n-1));\ldots\}$, that is $P(\sigma):=\{(\sigma(j),\sigma(n+1-j))\}_{j=1}^{[\frac{n}{2}]}$. Let $B:=\{\sigma\in A|\ P(\sigma)= \{(j,j+1)\}_{j=1}^{[\frac{n}{2}]}\}$. Define an involution $f$ on the set $A\setminus B$ as follows: for $\sigma\in A\setminus B$ set $f(\sigma):=(i,i+1)\circ\sigma$, where $i$ is minimal such that $(i,i+1)\not\in P(\sigma)$. Since $\sgn(\sigma)+\sgn(f(\sigma))=0$, we get $\sum_{\sigma\in A\setminus B} \sgn(\sigma)=0$. One readily sees that $B$ has $[n/2]!$ elements and that $\sgn(\sigma)=1$ for each $\sigma\in B$. Hence $\sum_{\sigma\in A} \sgn(\sigma)=[n/2]!$ as required.
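Both the lemma and the smallest instance of the displayed formula can be verified by brute force over $S_n$. The following check, added here as an illustration (names are ad hoc; the lemma is tested for $n=3,4$, and the $Q(2)$ formula is tested after the substitution $t=e^{-(\vareps_1-\vareps_2)}$):

```python
from itertools import permutations
from fractions import Fraction

def sign(perm):
    # sign of a permutation via its inversion count
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def signed_count(n):
    # signed cardinality of A = { sigma in S_n : sigma(i) > sigma(n+1-i)
    # for all i <= n/2 }, written 0-indexed below
    return sum(sign(p) for p in permutations(range(1, n + 1))
               if all(p[i] > p[n - 1 - i] for i in range(n // 2)))

# the lemma for n = 3, 4: the signed count equals [n/2]!
assert signed_count(3) == 1
assert signed_count(4) == 2

# Q(2): R = (1-t)/(1+t) with t = e^{-(eps1-eps2)}; the displayed formula
# gives R = 1/(1+t) - 1/(1+1/t)  (identity term minus transposition term)
t = Fraction(3, 7)
assert (1 - t) / (1 + t) == 1 / (1 + t) - 1 / (1 + 1 / t)
```

The $Q(2)$ equality holds identically in $t$, as a one-line computation with common denominators shows.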
Notation {#notation} ======== Let $\fg=\fg_0\oplus\fg_1$ be a basic Lie superalgebra with a fixed triangular decomposition of the even part: $\fg_0=\fn_{-,0}\oplus\fh\oplus\fn_{+,0}$. For $A(m-1,n-1)$-type we put $\fg=\fgl(m|n)$ (one readily sees that the denominator identities for $\fgl(m|n)$ imply the denominator identities for $\fsl(m|n), \fpsl(n|n)$). Let $\Delta_0$ (resp., $\Delta_1$) be the set of even (resp., odd) roots of $\fg$. Set $V=\fh^*_{\mathbb{R}}$ (so $V=\spn\Delta$ for $\fg\not=A(m,n)$ and $V=\mathbb{R}\spn\Delta\oplus\mathbb{R}$ for $\fg=A(m,n)$). Denote by $(-,-)$ a non-degenerate symmetric bilinear form on $V$, induced by a non-degenerate invariant bilinear form on $\fg$. Retain the notation of Section \[intro\] and define the Weyl denominator. The dimension of a maximal isotropic space in $V=\fh^*_{\mathbb{R}}$ is called the [*defect*]{} of $\fg$. If $\fg$ is a Lie algebra or $\fg=\mathfrak{osp}(1,2l)$ (type $B(0,l)$) then the defect of $\fg$ is zero; the defect of $A(m-1,n-1), B(m,n), D(m,n)$ is equal to $\min(m,n)$; for $C(n)$ and the exceptional Lie superalgebras the defect is equal to one. Notice that the cardinality of a maximal isotropic set $S$ is equal to the defect of $\fg$. Admissible pairs {#adm} ---------------- The set of positive even roots $\Delta_{+,0}$ is determined by the triangular decomposition $\fg_0=\fn_{-,0}\oplus\fh\oplus\fn_{+,0}$ (i.e., $\Delta_{+,0}$ is the set of weights of $\fn_{+,0}$). Recall that for each maximal isotropic set $S$ there exists a set of simple roots containing $S$. We call a pair $(S,\Pi)$ [*admissible*]{} if $S\subset\Pi$ is a maximal isotropic set of roots and $\Pi$ is a set of simple roots such that the corresponding set of positive even roots coincides with $\Delta_{+,0}$: $$(S,\Pi)\text{ is admissible if } S\subset\Pi\ \&\ \Delta_+(\Pi)\cap\Delta_0=\Delta_{+,0}.$$ For a fixed set of simple roots $\Pi$ we retain notation of \[notat\] and \[supp\].
The set $\Delta^{\#}$ {#Delfin} --------------------- Let $\Delta_1,\Delta_2$ be two finite irreducible root systems; we say that $\Delta_1$ is “larger” than $\Delta_2$ if either the rank of $\Delta_1$ is greater than the rank of $\Delta_2$, or the ranks are equal and $\Delta_1\subset\Delta_2$. If the defect of $\fg$ is greater than one, then the root system $\Delta_0$ is a disjoint union of two irreducible root systems. We denote by $\Delta^{\#}$ the irreducible component, which is not the smallest one, i.e. $\Delta_0=\Delta^{\#}\coprod\Delta_2$, where $\Delta^{\#}$ is not smaller than $\Delta_2$, see the following table: $$\begin{array}{l|cc|ccc|cc} \Delta & \multicolumn{2}{c|}{A(m-1,n-1)} & \multicolumn{3}{c|}{B(m,n)} & \multicolumn{2}{c}{D(m,n)}\\ & m>n & m\leq n & m>n & m<n & m=n & m>n & m\leq n\\ \hline \Delta^{\#} & A_{m-1} & A_{n-1} & B_m & C_n & B_m\text{ or }C_m & D_m & C_n \end{array}$$ The notion of $\Delta^{\#}$ in [@KW94] coincides with the above one, except for the case $B(m,m)$, where we allow both choices $B_m$ and $C_m$, whereas in [@KW94] $\Delta^{\#}$ is of the type $C_m$. Notice that $\fg_0=\fs_1\times \fs_2$, where $\fs_1,\fs_2$ are reductive Lie algebras and $\Delta^{\#},\Delta_0\setminus\Delta^{\#}$ are roots systems of $\fs_1, \fs_2$ respectively. We normalize $(-,-)$ in such a way that $\Delta^{\#}:=\{\alpha\in\Delta_0|\ (\alpha,\alpha)>0\}$; then $\Delta_0\setminus\Delta^{\#}=\{\alpha\in\Delta_0|\ (\alpha,\alpha)<0\}$. Outline of the proof ==================== {#X} Let $\fg$ be one of the Lie superalgebras $A(m-1,n-1), B(m,n), D(m,n), m,n>0$. Expansion of the right-hand side of (\[denomKW\]) --------------------------------------- Let $(S,\Pi)$ be an admissible pair.
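For reference, the table above can be encoded as a small lookup function; this is an illustration added here, with ad hoc names, where the arguments $m,n$ are those of $A(m-1,n-1)$, $B(m,n)$, $D(m,n)$:

```python
def delta_sharp(family, m, n):
    """The 'largest' component Delta^# of Delta_0, per the table above.

    family is 'A', 'B' or 'D' (for A(m-1,n-1), B(m,n), D(m,n));
    for B(m,m) both answers are allowed, so a pair is returned.
    """
    if family == 'A':
        return f'A{m-1}' if m > n else f'A{n-1}'
    if family == 'B':
        if m > n:
            return f'B{m}'
        if m < n:
            return f'C{n}'
        return (f'B{m}', f'C{m}')  # B(m,m): both choices allowed
    if family == 'D':
        return f'D{m}' if m > n else f'C{n}'
    raise ValueError(family)

assert delta_sharp('A', 3, 2) == 'A2'
assert delta_sharp('B', 2, 5) == 'C5'
assert delta_sharp('D', 4, 4) == 'C4'
```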
Set $$\label{defX} X:=\sum_{w\in W^{\#}}\sgn(w)w\bigl(\frac{e^{\rho}}{\prod_{\beta\in S} (1+e^{-\beta})}\bigr)$$ and rewrite the denominator identity (\[denomKW\]) as $Re^{\rho}=X$. Expanding $X$ we obtain $$\label{expX} \begin{array}{l} X=\sum_{w\in W^{\#}}\sum_{\mu\in\mathbb{Z}_{\geq 0}S} \sgn(w)(-1)^{\htt \mu} e^{\varphi(w)-|w|\mu+w\rho}, \end{array}$$ where $$\varphi(w):=\!\!\!\!\sum_{\beta\in S: w\beta<0}\!\! \!\!w\beta\in -Q^+$$ and $|w|$ is a linear map $\mathbb{Z}_{\geq 0}S\to Q^+$ defined on $\beta\in S$ by the formula $$|w|\beta=\left\{ \begin{array}{ll}w\beta & \text{ for } w\beta>0,\\ -w\beta & \text{ for } w\beta<0.\end{array}\right.$$ Main steps ---------- The proof has the following steps: \(i) We introduce certain operations on the admissible pairs $(S,\Pi)$ and show that these operations preserve the expressions $X$ and $Re^{\rho}$. Consider the equivalence relation on the set of admissible pairs $(S,\Pi)$ generated by these operations. We will show that there are two equivalence classes for $D(m,n), m>n$ and one equivalence class for other cases. \(ii) We check that $\supp(X)\subset (\rho-Q^+)$ and that the coefficient of $e^{\rho}$ in $X$ is $1$ for a certain choice of $(S,\Pi)$ (for $D(m,n), m>n$ we check this for $(S,\Pi)$ and $(S',\Pi)$, which are representatives of the equivalence classes). \(iii) We show that $X$ is $W$-skew-invariant for a certain choice of $(S,\Pi)$ (for $D(m,n), m>n$ we show this for $(S,\Pi)$ and $(S',\Pi)$, which are representatives of the equivalence classes). For $\fg$ of $A(n-1,n-1)$ type we change (ii) to (ii’): (ii’) We check, for a certain choice of $(S,\Pi)$, that $\supp(X)\subset (\rho-Q^+)$ and that for $\xi:=\sum_{\beta\in S}\beta$ the coefficients of $e^{\rho-s\xi}$ in $X$ and in $Re^{\rho}$ are equal for each $s\in\mathbb{Z}_{\geq 0}$. The choices of $(S,\Pi)$ in (ii), (iii) are the same only in the $A(m,n)$ case.
Why (i)–(iii) imply (\[denomKW\]) {#stepsfin} ----------------------- Let us show that (i)–(iii) imply the denominator identity $X=Re^{\rho}$. Indeed, assume that $X-Re^{\rho}\not=0$. Since $WS\subset \Delta_{1}$, $X-Re^{\rho}$ is a rational function with the denominator of the form $\prod_{\beta\in \Delta_{1}^+}(1+e^{-\beta})$; we write $$X-Re^{\rho}=\frac{Y} {\prod_{\beta\in \Delta_{1}^+}(1+e^{-\beta})}= \frac{Ye^{\rho_1}} {\prod_{\beta\in \Delta_1^+}(e^{\beta/2}+e^{-\beta/2})},$$ where $Y\in\mathbb{Q}[e^{\nu},\nu\in V]$. One has $$Re^{\rho}=\frac{\prod_{\alpha\in\Delta_0^+} (e^{\alpha/2}-e^{-\alpha/2})} {\prod_{\alpha\in\Delta_1^+} (e^{\alpha/2}+e^{-\alpha/2})}$$ and the latter expression is $W$-skew-invariant, since its numerator is $W$-skew-invariant and its denominator is $W$-invariant. Combining (i) and (iii), we obtain that $X-Re^{\rho}$ is $W$-skew-invariant. Thus $Ye^{\rho_1}$ is a $W$-skew-invariant element of $\mathbb{Q}[e^{\nu},\nu\in V]$ and so $\supp(Ye^{\rho_1})$ is a union of regular orbits. Observe that $\supp (Re^{\rho})\subset (\rho-Q^+)$ and that the coefficient of $e^{\rho}$ in $Re^{\rho}$ is $1$. Using (i), (ii) we get $\supp (X-Re^{\rho})\subset (\rho-Q^+)\setminus \{\rho\}$. Note that the sets of maximal elements in $\supp (Y)$ and in $\supp (X-Re^{\rho})$ coincide. Thus $\supp Y\subset (\rho-Q^+)\setminus \{\rho\}$ that is $\supp (Ye^{\rho_1})\subset (\rho_0-Q^+)\setminus \{\rho_0\}$. Hence $\supp(Ye^{\rho_1})$ is a union of regular orbits lying in $(\rho_0-Q^+)\setminus \{\rho_0\}$. By \[regorbit\], for $\fg\not=\fgl(n|n)$, the set $(\rho_0-Q^+)\setminus \{\rho_0\}$ does not contain a regular $W$-orbit, a contradiction. Let $\fg=\fgl(n|n)$. Choose $\Pi$ as in \[rootsys\]. By \[regorbit\] the regular orbits in $(\rho_0-Q^+)$ are of the form $W(\rho_0-s\xi)$ with $s\in\mathbb{Z}_{\geq 0},\ \xi=\sum_{\beta\in S}\beta$. One has $W\xi=\xi$ and $w\rho_0\leq \rho_0$, so $\rho_0-s\xi$ is the maximal element in its $W$-orbit.
Thus a maximal element in $\supp Ye^{\rho_1}$ is of the form $\rho_0-s\xi$, so a maximal element in $\supp Y$ is $\rho-s\xi$. Then, by above, $\rho-s\xi\in \supp (X-Re^{\rho})$, which contradicts (ii’). Regular orbits ============== {#section-4} Let $\fg$ be a reductive finite-dimensional Lie algebra, let $W$ be its Weyl group, let $\Pi$ be its set of simple roots and let $\Pi^{\vee}$ be the set of simple coroots. For $\rho$, defined as above, one has $\langle \rho,\alpha^{\vee}\rangle=1$ for each $\alpha\in\Pi$. Set $$Q_{\mathbb{Q}}=\sum_{\alpha\in\Pi} \mathbb{Q}\alpha,\ \ Q_{\mathbb{Q}}^+:=\sum_{\alpha\in\Pi} \mathbb{Q}_{\geq 0}\alpha.$$ As above, we define a partial order on $\fh^*_{\mathbb{R}}$ by the formula $\mu\leq\nu\ \text{ if } (\nu-\mu)\in\sum_{\alpha\in\Delta_+} \mathbb{R}_{\geq 0}\alpha$. Let $P\subset \fh^*_{\mathbb{R}}$ be the weight lattice of $\fg$, i.e. $\nu\in P$ iff $\langle \nu,\alpha^{\vee}\rangle\in\mathbb{Z}$ for any $\alpha\in\Pi$, and let $P^+$ be the positive part of $P$, i.e. $\nu\in P^+$ iff $\langle \nu,\alpha^{\vee}\rangle\in\mathbb{Z}_{\geq 0}$ for any $\alpha\in\Pi$. One has $P\subset Q_{\mathbb{Q}}$. ### {#section-5} [cor1]{} (i) $P=WP^+$. \(ii) For any $\lambda\in\fh^*_{\mathbb{R}}$ the stabilizer of $\lambda$ in $W$ is either trivial or contains a reflection. \(iii) A regular orbit in $P$ intersects with the set $\rho+Q^+_{\mathbb{Q}}$. The group $W$ is finite. Take $\lambda\in\fh^*_{\mathbb{R}}$ and let $\lambda'=w\lambda$ be a maximal element in the orbit $W\lambda$. Since $\lambda'$ is maximal, $\langle \lambda',\alpha^{\vee}\rangle\geq 0$ for each $\alpha\in\Pi$. For $\lambda\in P$ one has $\lambda'\in P$, so $\langle \lambda',\alpha^{\vee}\rangle\in\mathbb{Z}_{\geq 0}$ for each $\alpha\in\Pi$, that is $\lambda'\in P^+$, hence (i). For (ii) note that, if $\langle \lambda',\alpha^{\vee}\rangle=0$ for some $\alpha\in\Pi$, then $s_{\alpha}\in \Stab_{W}\lambda'$, so $s_{w^{-1}\alpha}\in \Stab_{W}\lambda$.
Assume that $\langle \lambda',\alpha^{\vee}\rangle>0$ for all $\alpha\in\Pi$. Take $y\in W, y\not=\id$ and write $y=y's_{\alpha}$ for $\alpha\in\Pi$, where the length of $y'\in W$ is less than the length of $y$. Then $y'\alpha\in\Delta_+$ (see, for instance, [@Jbook], A.1.1). One has $y'\lambda'-y's_{\alpha}\lambda'= \langle \lambda',\alpha^{\vee}\rangle (y'\alpha)>0$ so $y'\lambda'>y's_{\alpha}\lambda'$. Now (ii) follows by induction on the length of $y$. For (iii) assume that $\lambda'$ is regular, that is $\langle \lambda',\alpha^{\vee}\rangle>0$ for all $\alpha\in\Pi$. Since $\lambda'\in P$, one has $\langle \lambda',\alpha^{\vee}\rangle\in\mathbb{Z}_{\geq 1}$ so $\langle \lambda'-\rho,\alpha^{\vee}\rangle\geq 0$ for all $\alpha\in\Pi$. Write $\lambda'-\rho=\sum_{\beta\in\Pi} x_{\beta}\beta$. For a vector $(y_{\beta})_{\beta\in \Pi}$ write $y\geq 0$ if $y_{\beta}\geq 0$ for each $\beta$. The condition $\langle \lambda'-\rho,\alpha^{\vee}\rangle\geq 0$ for all $\alpha\in\Pi$ means that $Ax\geq 0$, where $x=(x_{\beta})_{\beta\in\Pi}$ and $A=(\langle\alpha^{\vee},\beta\rangle)_{\alpha,\beta\in \Pi}$ is the Cartan matrix of $\fg$. From [@Kbook], 4.3 it follows that $Ax\geq 0$ forces $x\geq 0$. Hence $\lambda'-\rho\in Q^+_{\mathbb{Q}}$ as required. {#regorbit} Now let $\fg$ be a basic simple Lie superalgebra, $Q$ be its root lattice and $Q^+$ be the positive part of $Q$. ### {#section-6} [cor2]{} Let $\fg$ be a basic simple Lie superalgebra and $\fg\not=C(n), A(m,n)$. A regular orbit in the root lattice $Q$ intersects with the set $\rho_0+\mathbb{Q}_{\geq 0}\Delta_{+,0}$. For $\fg\not=C(n), A(m,n)$ one has $\mathbb{Q}\Delta_0=\mathbb{Q}\Delta$ and the $\fg$-root lattice $Q$ is a subset of the weight lattice of $\fg_0$. Thus the assertion follows from \[cor1\] (iii).
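The key step “$Ax\geq 0$ forces $x\geq 0$” can be checked concretely for a small case: for $A_2$ the inverse Cartan matrix $A^{-1}=\frac13\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has non-negative entries, so $Ax\geq 0$ gives $x=A^{-1}(Ax)\geq 0$. A finite sanity check (an added sketch with ad hoc names, not part of the original proof):

```python
from fractions import Fraction

# Cartan matrix of A_2 and its inverse
A = [[2, -1], [-1, 2]]
Ainv = [[Fraction(2, 3), Fraction(1, 3)], [Fraction(1, 3), Fraction(2, 3)]]

# A * Ainv = identity
for i in range(2):
    for j in range(2):
        assert sum(A[i][k] * Ainv[k][j] for k in range(2)) == (1 if i == j else 0)

def root_coords(c):
    # simple-root coordinates of a weight given by its values
    # c = (<lam, a1^vee>, <lam, a2^vee>) on the simple coroots
    return [sum(Ainv[i][j] * c[j] for j in range(2)) for i in range(2)]

# every regular dominant weight lam (i.e. <lam, ai^vee> >= 1) satisfies
# lam - rho in Q^+_Q : its simple-root coordinates are non-negative
for a in range(1, 6):
    for b in range(1, 6):
        x = root_coords([a - 1, b - 1])  # lam - rho evaluated on the coroots
        assert all(xi >= 0 for xi in x)
```

The same argument works for any Cartan matrix of finite type, since their inverses have non-negative entries.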
### Case $\fg=\fgl(m|n)$ For $\fgl(m|n)$ one has $$\mathbb{Q}\Delta=\mathbb{Q}\Delta_{0}\oplus \mathbb{Q}\xi,\ \ \text{ where }\ \xi:=\sum\vareps_i-\frac{m}{n}\sum\delta_j.$$ Choose a set of simple roots as in \[rootsys\]. Let $\fg=\fgl(m|n)$. \(i) For $m\not=n$, $W\rho_0$ is the only regular orbit lying entirely in $(\rho_0-Q^+)$. \(ii) For $m=n$ the regular orbits lying entirely in $(\rho_0-Q^+)$ are of the form $W(\rho_0-s\xi), s\geq 0$, where $\xi =\sum\vareps_i-\sum\delta_j$. Let $\iota: \mathbb{Q}\Delta\to\mathbb{Q}\Delta_{0}$ be the projection along $\xi$ (i.e., $\Ker\iota=\mathbb{Q}\xi$). Since $\xi$ is $W$-invariant, $w\iota(\lambda)=\iota(w\lambda)$. Let $W\lambda\subset(\rho_0-Q^+)$ be a regular orbit. Since $\iota(Q)$ lies in the weight lattice of $\fg_0$, $\iota(W\lambda)$ intersects with $\rho_0+\mathbb{Q}_{\geq 0}\Delta_{+,0}$, by \[cor1\] (iii). Thus $W\lambda$ contains a point of the form $\rho_0+\nu+q\xi$, where $\nu\in\mathbb{Q}_{\geq 0}\Delta_{+,0}$ and $q\in\mathbb{Q}$. By above, $\nu+q\xi\in -Q^+$ so $q\xi\in -\mathbb{Q}_{\geq 0}\Delta_+$. Consider the case $m>n$. In this case, for any $\mu\in\mathbb{Q}_{\geq 0}\Delta_+$ one has $(\mu,\vareps_1)\cdot (\mu,\vareps_m)\leq 0$. Since $(\xi,\vareps_1)=(\xi,\vareps_m)\not=0$, the inclusion $q\xi\in -\mathbb{Q}_{\geq 0}\Delta_+$ implies $q=0$. Then $\nu\in\mathbb{Q}_{\geq 0}\Delta_{+,0}$ and $\nu=\nu+q\xi\in -Q^+$ so $\nu=0$. Therefore $W\lambda=W\rho_0$. Hence $W\rho_0$ is the only regular orbit lying entirely in $\rho_0-Q^+$. Since $\fgl(m|n)\cong\fgl(n|m)$, this establishes (i). Consider the case $m=n$. Set $\beta_i:=\vareps_i-\delta_i, \beta'_i:=\delta_i-\vareps_{i+1}$. One has $\xi=\sum_{i=1}^n\beta_i$ and $\Pi=\{\beta_i\}_{i=1}^n\cup\{\beta'_i\}_{i=1}^{n-1}$. The simple roots of $\Delta_{+,0}$ are $\vareps_i-\vareps_{i+1}= \beta_i+\beta'_i$ and $\delta_i-\delta_{i+1}=\beta'_i+\beta_{i+1}$.
Thus $\nu\in\mathbb{Q}_{\geq 0}\Delta_0^+$ takes the form $\nu=\sum_{i=1}^{n-1}b_i(\beta_i+\beta_i')+c_i(\beta'_i+\beta_{i+1})$ with $b_i,c_i\in\mathbb{Q}_{\geq 0}$. By above, $\nu':=-(\nu+q\xi)\in Q^+$. Therefore $\nu+\nu'\in\mathbb{Q}\sum_{i=1}^n\beta_i$ and $\nu'\in Q^+$. One readily sees that this implies $b_i=c_i=0$ that is $\nu=0$. One has $Q^+\cap \mathbb{Q}\xi= \mathbb{Z}_{\geq 0}\xi$. Hence a regular orbit in $\rho_0-Q^+$ intersects with the set $\rho_0-\mathbb{Z}_{\geq 0}\xi$ as required. ### Case $C(n)$ {#orbCn} Take $\Pi=\{\vareps_1-\vareps_2,\vareps_2-\vareps_3,\ldots, \vareps_n-\delta_1,\vareps_n+\delta_1\}$. One has $\mathbb{Q}\Delta=\mathbb{Q}\Delta_{0}\oplus \mathbb{Q}\delta_1$. We claim that $W\rho_0$ is the only regular orbit lying entirely in $(\rho_0-Q^+)$. Indeed, take a regular orbit lying entirely in $(\rho_0-Q^+)$. Combining the fact that $W\delta_1=\delta_1$ and \[cor1\] (iii), we see that this orbit contains a point of the form $\rho_0+\nu+q\delta_1$, where $\nu\in\mathbb{Q}_{\geq 0}\Delta_0^+$ and $q\in\mathbb{Q}$. Since $-(\nu+q\delta_1)\in Q^+$, one has $q\delta_1\in -\mathbb{Q}_{\geq 0}Q^+$. However, $2\delta_1$ is the difference of two simple roots ($2\delta_1=(\vareps_n+\delta_1)-(\vareps_n-\delta_1)$) so $\mathbb{Q}\delta_1\cap (-\mathbb{Q}_{\geq 0}Q^+)=\{0\}$ that is $q=0$. The conditions $\nu\in\mathbb{Q}_{\geq 0}\Delta_0^+$, $-(\nu+q\delta_1)\in Q^+$ give $\nu=0$. The claim follows. Step (i) {#sect(i)} ======== Consider the following operations on the admissible pairs $(S,\Pi)$. First type operations are the odd reflections $(S,\Pi)\mapsto (s_{\beta}S,s_{\beta}\Pi)= ((S\setminus\{\beta\})\cup\{-\beta\},s_{\beta}\Pi)$ with respect to an element $\beta\in S$ (see \[odd\]). By \[odd\], these odd reflections preserve the expressions $X, Re^{\rho}$. Second type operations are the operations $(S,\Pi)\mapsto (S',\Pi)$ described in \[lemnewS\], where it is shown that these operations also preserve the expressions $X, Re^{\rho}$.
Consider the equivalence relation on the set of admissible pairs $(S,\Pi)$ generated by these operations. In \[fPi\] we will show that there are two equivalence classes for $D(m,n), m>n$ and one equivalence class for other cases. In \[fPi2\] we will show that if $(S,\Pi), (S,\Pi')$ are admissible pairs, then $\Pi=\Pi'$ (on the other hand, there are admissible pairs $(S,\Pi), (S',\Pi)$ with $S\not=S'$, see \[lemnewS\]). Notation {#notation-1} -------- Let us introduce the following operator $F:\cR\to\cR$ $$F(Y):=\sum_{w\in W^{\#}}\sgn(w) wY.$$ Clearly, $F(wY)=\sgn(w) F(Y)$ for $w\in W^{\#}$, so $F(Y)=0$ if $wY=Y$ for some $w\in W^{\#}$ with $\sgn(w)=-1$. For an admissible pair $(S,\Pi)$ introduce $$Y(S, \Pi):=\frac{e^{\rho(\Pi)}}{\prod_{\beta\in S}(1+e^{-\beta})},\ \ \ \ X(S,\Pi):=F(Y(S,\Pi)),$$ where $\rho(\Pi)$ is the element $\rho$ defined for the given $\Pi$. Note that $X=X(S,\Pi)$ for the corresponding pair $(S,\Pi)$. Odd reflections {#odd} --------------- Recall the notion of odd reflections, see [@S]. Let $\Pi$ be a set of simple roots and $\Delta_+(\Pi)$ be the corresponding set of positive roots. Fix a simple isotropic root $\beta\in\Pi$ and set $$s_{\beta}(\Delta_+):=(\Delta_+(\Pi)\setminus\{\beta\})\cup\{-\beta\}.$$ For each $P\subset \Pi$ set $s_{\beta}(P):=\{s_{\beta}(\alpha)|\ \alpha\in P\}$, where $$\text{ for }\alpha\in\Pi\ \ \ \ s_{\beta}(\alpha):=\left\{ \begin{array}{ll} -\alpha & \text{ if } \alpha=\beta,\\ \alpha & \text{ if } (\alpha,\beta)=0,\ \alpha\not=\beta,\\ \alpha+\beta & \text{ if } (\alpha,\beta)\not=0. \end{array}\right.$$ By [@S], $s_{\beta}(\Delta_+)$ is a set of positive roots (i.e., $s_{\beta}(\Delta_+)=\Delta_+(f)$ for some functional $f$) and the corresponding set of simple roots is $s_{\beta}(\Pi)$. Clearly, $\rho(s_{\beta}(\Pi))=\rho(\Pi)+\beta$. Let $(S,\Pi)$ be an admissible pair. Take $\beta\in S$. Then for any $\beta'\in S\setminus\{\beta\}$ one has $s_{\beta}(\beta')=\beta'$ so $s_{\beta}(S)=(S\setminus\{\beta\})\cup\{-\beta\}$.
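The reflection rule and the shift $\rho(s_{\beta}(\Pi))=\rho(\Pi)+\beta$ can be checked on the smallest example $\fgl(2|1)$, with roots written as vectors in the basis $(\vareps_1,\vareps_2,\delta)$ and the invariant form $\mathrm{diag}(+1,+1,-1)$; this realization and all names below are ours, added as an illustration:

```python
from fractions import Fraction

def form(x, y):
    # (eps_i, eps_j) = delta_ij, (delta, delta) = -1, (eps_i, delta) = 0
    return x[0]*y[0] + x[1]*y[1] - x[2]*y[2]

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def odd_reflection(beta, Pi):
    # the rule s_beta(alpha) above, for a simple isotropic beta in Pi
    assert form(beta, beta) == 0 and beta in Pi
    out = []
    for alpha in Pi:
        if alpha == beta:
            out.append(tuple(-a for a in beta))
        elif form(alpha, beta) == 0:
            out.append(alpha)
        else:
            out.append(add(alpha, beta))
    return out

def rho(pos_even, pos_odd):
    # rho = rho_0 - rho_1 from explicit lists of positive roots
    h = Fraction(1, 2)
    r = (Fraction(0),) * 3
    for a in pos_even:
        r = add(r, tuple(h * c for c in a))
    for a in pos_odd:
        r = add(r, tuple(-h * c for c in a))
    return r

alpha, beta = (1, -1, 0), (0, 1, -1)        # Pi = {eps1-eps2, eps2-delta}
Pi2 = odd_reflection(beta, [alpha, beta])   # -> {eps1-delta, delta-eps2}
assert Pi2 == [(1, 0, -1), (0, -1, 1)]

rho1 = rho([alpha], [beta, add(alpha, beta)])   # rho for Pi
rho2 = rho([alpha], [(1, 0, -1), (0, -1, 1)])   # rho for s_beta(Pi)
assert add(rho1, beta) == rho2                  # rho shifts by beta
```

Here the positive-root lists for each $\Pi$ were written out by hand; the shift $\rho\mapsto\rho+\beta$ is visible because the odd reflection removes $\beta$ from $\Delta_{+,1}$ and adds $-\beta$, changing $\rho_1$ by $-\beta$.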
Clearly, the pair $(s_{\beta}(S),s_{\beta}(\Pi))$ is admissible. Since $\rho(s_{\beta}(\Pi))=\rho(\Pi)+\beta$, one has $Y(S,\Pi)=Y(s_{\beta}(S),s_{\beta}(\Pi))$. {#section-7} [lemnewS]{} Assume that $\gamma,\gamma'$ are isotropic roots such that $$\gamma\in S,\ \gamma'\in \Pi,\ \gamma+\gamma'\in\Delta^{\#},\ \ (\gamma', \beta)=0 \text{ for each }\beta\in S\setminus\{\gamma\}.$$ Then the pair $(S',\Pi)$, where $S':=(S\cup\{\gamma'\})\setminus\{\gamma\}$, is admissible and $X(S,\Pi)=X(S',\Pi)$. It is clear that the pair $(S',\Pi)$ is admissible. Set $\alpha:=\gamma+\gamma'$ and let $s_{\alpha}\in W^{\#}$ be the reflection with respect to the root $\alpha$. Our assumptions imply that $$\label{salli} s_{\alpha}\rho=\rho;\ \ \ s_{\alpha}\gamma'=-\gamma;\ \ \ s_{\alpha}\beta=\beta \text{ for } \beta\in S\setminus\{\gamma\}.$$ Therefore $s_{\alpha}(Y(S',\Pi))=Y(S,\Pi)e^{-\gamma}$ that is $F(Y(S',\Pi))=F\bigl(s_{\alpha}(Y(S,\Pi)e^{-\gamma})\bigr)$. Since $F\circ s_{\alpha}=-F$, the required formula $X(S,\Pi)=X(S',\Pi)$ is equivalent to the equality $F\bigl(Y(S,\Pi)(1+e^{-\gamma})\bigr)=0$, which follows from the fact that the expression $$Y(S,\Pi)(1+e^{-\gamma})=\frac{e^{\rho}} {\prod_{\beta\in S\setminus\{\gamma\}}(1+e^{-\beta})}$$ is, by (\[salli\]), $s_{\alpha}$-invariant. Equivalence classes {#fPi} ------------------- We consider the types $A(m-1,n-1), B(m,n), D(m,n)$ with all possible $m,n$. 
We express the roots in terms of linear functions $\xi_1,\ldots,\xi_{m+n}$, see [@Ksuper], such that $$\begin{array}{l|l|l|l} \Delta & A(m-1,n-1) & B(m,n) & D(m,n)\\ \hline \Delta_{+,0} & U & U'\cup\{\xi_i\}_{i=1}^m\cup \{2\xi_i\}_{i=m+1}^{m+n} & U'\cup \{2\xi_i\}_{i=m+1}^{m+n}\\ \Delta_1 & \{\pm(\xi_i-\xi_j)\}_{1\leq i\leq m<j\leq m+n} & \{\pm\xi_i\pm\xi_j;\ \pm \xi_j\}_{1\leq i\leq m<j\leq m+n} & \{\pm\xi_i\pm\xi_j\}_{1\leq i\leq m<j\leq m+n} \end{array}$$ and $$\begin{array}{l} U:=\{\xi_i-\xi_j|\ 1\leq i<j\leq m\ \text{ or }\ m+1\leq i<j\leq m+n\},\\ U':=\{\xi_i\pm\xi_j|\ 1\leq i<j\leq m\ \text{ or }\ m+1\leq i<j\leq m+n\}. \end{array}$$ ### {#section-8} For a system of simple roots $\Pi\subset \spn\{\xi_i\}_{i=1}^{m+n}$ take an element $f_{\Pi}\in \spn\{\xi_i^*\}_{i=1}^{m+n}$ such that $\langle f_{\Pi},\alpha\rangle=1$ for each $\alpha\in\Pi$ (the existence of $f_{\Pi}$ follows from linear independence of the elements in $\Pi$). Note that $f_{\Pi}$ is unique if $\fg$ is not of the type $A(m-1,n-1)$; for $A(m-1,n-1)$ we fix $f_{\Pi}$ by the additional condition $\min_{1\leq i\leq m+n}\langle f_{\Pi},\xi_i\rangle=1$. In the notation of Sect. \[intro\] one has $\Pi=\Pi(f_{\Pi})$. We will use the following properties of $f_{\Pi}$: \(i) $\langle f_{\Pi},\alpha\rangle\in\mathbb{Z}\setminus\{0\}$ for all $\alpha\in\Delta$; \(ii) if $\alpha\in\Delta^+$, then $\langle f_{\Pi},\alpha\rangle\geq 1$; \(iii) a root $\alpha\in\Delta$ is simple iff $\langle f_{\Pi},\alpha\rangle=1$. Write $f_{\Pi}=\sum_{i=1}^{m+n} x_i\xi_i$.
By (i), the $x_i$s are pairwise different; by (ii), $x_i\pm x_{i+1}>0$ for $i\not=m$ ($1\leq i\leq m+n-1$); by (iii), a root $\xi_i-\xi_j$ is simple iff $x_i-x_j=1$. ### {#xis} For type $A(m-1,n-1)$ all roots are of the form $\xi_i-\xi_j$. In particular, $\{x_i\}_{i=1}^{m+n}=\{1,\ldots, m+n\}$. Consider the case $B(m,n)$. In this case for each $i$ one has $\xi_i\in\Delta_{+}$ so $x_i\geq 1$. Therefore, by (iii), a simple root cannot be of the form $\pm(\xi_i+\xi_j)$ so $\Pi=\{\xi_{i_1}-\xi_{i_2},\xi_{i_2}-\xi_{i_3},\ldots, \xi_{i_{m+n-1}}-\xi_{i_{m+n}},\xi_{i_{m+n}}\}$ and $\{x_i\}_{i=1}^{m+n}=\{1,\ldots, m+n\}$. ### {#section-9} Consider the case $D(m,n)$. Using (i), (ii) and the fact that $2\xi_{m+n}\in\Delta_{+,0}$, we conclude $$\begin{array}{l} \forall j\ 2x_j\in\mathbb{Z};\ \ \ \ \ \forall i\not=j\ x_i\pm x_j\in\mathbb{Z}\setminus\{0\};\\ x_i-x_j\geq j-i\text{ for }i<j\leq m\text{ or } m<i<j;\\ x_j>0 \text{ for }j>m;\ \ \ x_j+x_i>0\text{ for }i<j\leq m. \end{array}$$ Recall that, if $-(\xi_p+\xi_q)\in \Pi\cap\Delta_1$, then $\Pi':=s_{-\xi_p-\xi_q}(\Pi)$ contains $\xi_p+\xi_q$. Let us [*assume that* ]{} $\xi_p+\xi_q\in\Pi\cap\Delta_1, p<q$, that is $x_p+x_q=1,\ p\leq m<q$. If $p<m$, then $x_p+x_q>\pm (x_m+x_q)$ (because $x_p\pm x_m, 2x_q>0$), so $x_p+x_q>|x_m+x_q|\geq 1$, a contradiction. Hence $p=m$, in particular, $$\label{xm} \pm (\xi_p+\xi_q)\in\Pi\cap\Delta_1,\ p<q\ \Longrightarrow\ p=m.$$ Since $2x_q\in\mathbb{Z}_{>0},\ x_p+x_q=1$ and $x_q\not=x_m$, one has $x_m\leq 0$. Therefore the assumption implies $$x_m+x_q=1 \ \& \ q>m \ \&\ x_m\leq 0.$$ Since $\xi_{m-1}-\xi_m\in\Delta^+$ there exists a simple root of the form $\pm\xi_s-\xi_m$ ($s\not=m$). First, consider the case when $\xi_s-\xi_m\in\Pi$, that is $x_s-x_m=1$. Since $x_m\leq 0$, one has $x_s+x_m\leq x_s-x_m=1$ so $x_s+x_m<0$, that is $-(\xi_s+\xi_m)\in \Delta_+$. Therefore $s>m$ and $x_q+x_s=2$, because $x_q+x_m=x_s-x_m=1$.
Since $x_{m+n-i}\geq 1/2+i$ for $i<n$, we conclude that either $x_q=3/2,x_m=-1/2,x_s=1/2$, which contradicts $x_m+x_s\not=0$, or $q=s=m+n, x_{m+n}=1, x_m=0$. Hence $\xi_s-\xi_m\in\Pi$ implies $q=m+n, x_{m+n}=1, x_m=0$ and $\xi_{m+n}\pm\xi_m\in\Pi$. Now consider the case when $-\xi_s-\xi_m\in\Pi$. Then $s>m$ and $x_m+x_q=-x_m-x_s=1$ so $x_q-x_s=2$. Since $x_q-x_{q+i}\geq i$ one has $s=q+1$ or $s=q+2$. If $s=q+2$, then we have $2=x_q-x_{q+2} =(x_q-x_{q+1})+(x_{q+1}-x_{q+2})$ that is $x_q-x_{q+1}=1$ so $x_m+x_{q+1}=0$, a contradiction. Hence $s=q+1$ that is $-x_m-x_{q+1}=1$. For $i<m$ one has $0<x_i+x_m=-x_{q+1}-1+x_i$ so $1<x_i-x_{q+1}$. Therefore for $i<m,t\geq q+1$ one has $x_i+x_t>x_i-x_t>1$, so the roots $\pm\xi_i\pm\xi_t$ are not simple. Assume that $m\geq n$ and $(S,\Pi)$ is an admissible pair. Then for each $m<t\leq m+n$ there exists $i_t<m$ such that one of the roots $\pm\xi_t\pm\xi_{i_t}$ is simple (and lies in $S$) and $i_t\not=i_p$ for $t\not=p$. By above, $i_{q+1}=m$ and there is no suitable $i_t$ for $t>q+1$. Hence $q+1=m+n$ and $(-\xi_m-\xi_{m+n})\in S$, that is $\xi_m+\xi_{m+n}\in s_{-\xi_m-\xi_{m+n}} S$. ### {#section-10} We conclude that if $(S,\Pi)$ is an admissible pair for $\fg=D(m,n)$, then one of the following possibilities holds: either all elements of $S$ are of the form $\xi_i-\xi_j$, or all elements of $S$ except one are of the form $\xi_i-\xi_j$, and this exceptional one is $\beta$, where \(1) $\beta=\xi_m+\xi_{m+n}$ and $\xi_{m+n}-\xi_m\in\Pi$; \(2) $m<n$ and $\beta=\xi_m+\xi_{s}, m<s<m+n$, $-\xi_{s+1}-\xi_m\in\Pi$; \(3) $\beta:=-(\xi_m+\xi_s)$, the pair $(s_{\beta}S,s_{\beta}\Pi)$ is one of those described in (1)-(2). ### {#section-11} Consider the case $D(m,n),\ n\geq m$. Then $\Delta^{\#}=\{\xi_i\pm\xi_j;2\xi_i\}_{i=m+1}^{m+n}$. Let $(S,\Pi)$ be an admissible pair.
If $\xi_m+\xi_s\in S$ for $s<m+n$, then, by above, $-(\xi_m+\xi_{s+1})\in\Pi$ and the pair $(S,\Pi)$ is equivalent to the pair $((S\setminus\{\xi_m+\xi_s\})\cup\{-\xi_{m}-\xi_{s+1}\},\Pi)$, which is equivalent to the pair $((S\setminus\{\xi_m+\xi_s\}) \cup\{\xi_{m}+\xi_{s+1}\},s_{-\xi_{m}-\xi_{s+1}}\Pi)$. Thus a pair $(S,\Pi)$ with $\xi_m+\xi_s\in S$ is equivalent to a pair $(S',\Pi')$ with $\xi_m+\xi_{m+n}\in S'$. If $\xi_m+\xi_{m+n}\in S$, then, by above, $\xi_{m+n}-\xi_m\in \Pi$ and the pair $(S,\Pi)$ is equivalent to the pair $((S\setminus\{\xi_{m+n}+\xi_m\})\cup\{\xi_{m+n}-\xi_m\},\Pi)$. We conclude that any pair $(S,\Pi)$ is equivalent to a pair $(S',\Pi')$, where $S'=\{\xi_i-\xi_{i_j}\}_{i=1}^m$. Consider the case $D(m,n),\ m>n$. Then $\Delta^{\#}=\{\xi_i\pm\xi_j\}_{i=1}^{m}$. By above, any pair $(S,\Pi)$ is equivalent either to a pair $(S',\Pi')$, where $S'=\{\xi_i-\xi_{i_j}\}_{i=1}^n$, or to a pair $(S',\Pi')$, where $S'=\{\xi_i-\xi_{i_j}\}_{i=1}^{n-1}\cup \{\xi_{m+n}+\xi_m\}$ and $\xi_{m+n}-\xi_m\in \Pi'$. ### {#section-12} Let $(S,\Pi)$ be an admissible pair. We conclude that any pair $(S,\Pi)$ is equivalent to a pair $(S',\Pi')$, where either $S'=\{\xi_i-\xi_{j_i}\}_{i=1}^{\min(m,n)}$, or, for $D(m,n),\ m>n$, $S'=\{\xi_i-\xi_{i_j}\}_{i=1}^{n-1}\cup \{\xi_{m+n}+\xi_m\}$ and $\xi_{m+n}-\xi_m\in \Pi'$. {#fPi2} Fix a set of simple roots of $\fg$ and construct $f_{\Pi}$ as in \[fPi\]. We mark the points $x_i$ on the real line by $a$’s and $b$’s in one of the following ways: \(M) mark $x_i$ by $a$ (resp., by $b$) if $1\leq i\leq m$ (resp., if $m<i\leq m+n$); \(N) mark $x_i$ by $b$ (resp., by $a$) if $1\leq i\leq m$ (resp., if $m<i\leq m+n$). We use the marking (M) if $\Delta^{\#}$ lies in the span of $\{\xi_i\}_{i=1}^m$ and the marking (N) if $\Delta^{\#}$ lies in the span of $\{\xi_i\}_{i=m+1}^{m+n}$. Note that in all cases the number of $a$s is not smaller than the number of $b$s.
We fix $\Delta^{\#}$ and an admissible pair $(S,\Pi)$ such that $S=\{\xi_i-\xi_{j_i}\}_{i=1}^{\min(m,n)}$, or for $D(m,n), m>n$, $S=\{\xi_i-\xi_{i_j}\}_{i=1}^{n-1}\cup \{\xi_{m+n}+\xi_m\}$. If $\xi_i-\xi_j\in S$ (resp., $\xi_i+\xi_j\in S$) we draw a bow $\smile$ (resp., $\frown$) between the points $x_i$ and $x_j$. Observe that the points connected by a bow are neighbours and they are marked by different letters ($a$ and $b$). We say that a marked point is a vertex if it is a vertex of a bow. Note that the bows do not have common vertices and that all points marked by $b$ are vertices. From now on we consider diagrams, which are sequences of $a$s and $b$s endowed with bows (we do not care about the values of $x_i$). For example, for $\fg=A(4,1)$ and $\Pi=\{\xi_1-\xi_2;\xi_2-\xi_6; \xi_6-\xi_7;\xi_7-\xi_3;\xi_3-\xi_4;\xi_4-\xi_5\}$ we choose $f=\xi_5^*+2\xi_4^*+3\xi_3^*+4\xi_7^*+5\xi_6^*+6\xi_2^*+7\xi_1^*$ and taking $S=\{\xi_2-\xi_6;\xi_7-\xi_3\}$ we obtain the diagram $aaa\smile bb\smile a a$; for $\fg=B(2,2)$ and $\Pi=\{\xi_1-\xi_3;\xi_3-\xi_2;\xi_2-\xi_4;\xi_4\}$ we choose $f=\xi_4^*+2\xi_2^*+3\xi_3^*+4\xi_1^*$ and taking $S=\{\xi_1-\xi_3;\xi_2-\xi_4\}$ we obtain the diagram $a\smile ba\smile b$ for the marking (N), and $b\smile ab\smile a$ for the marking (M). Observe that a diagram containing $\frown$ appears only in the case $D(m,n),\ m>n$, and such a diagram starts from $a\frown b$, because $\xi_{m+n}+\xi_m\in S$ forces $\xi_{m+n}-\xi_m\in\Pi$, which implies $x_m=0, x_{m+n}=1, x_i>0$ for all $i\not=m$.
For an odd simple root $\beta$ one has $s_{\beta}(\Delta_+)=(\Delta_+\setminus\{\beta\})\cup\{-\beta\}$, so the order of $x_i$s for $s_{\xi_p-\xi_q}(\Delta_+)$ is obtained from the order of $x_i$s for $\Delta_+$ by the interchange of $x_p$ and $x_q$ (if $\xi_p-\xi_q\in \Pi\cap\Delta_1$). Therefore the odd reflection with respect to $\xi_p-\xi_q\in S$ corresponds to the following operation with the diagram: we interchange the vertices (i.e., the marks $a,b$) of the corresponding bow: $$...a\smile b...\ \mapsto ...b\smile a...;\ \ \ \ \ ...b\smile a...\ \mapsto ...a\smile b...$$ If the diagram has a part $a\smile ba$ (resp. $ab\smile a$), where the last (resp. the first) sign $a$ is not a vertex, and $x_i,x_j,x_k$ are the corresponding points, then the quadruple $(S,\Pi), \gamma:=\xi_i-\xi_j,\gamma':=\xi_j-\xi_k$ (resp. $\gamma':=\xi_i-\xi_j,\gamma:=\xi_j-\xi_k$) satisfies the assumptions of . The operation $(S,\Pi)\mapsto (S',\Pi)$, where $S':=(S\setminus\{\gamma\})\cup \{\gamma'\}$ corresponds to the following operation with our diagram: $a\smile ba\mapsto ab\smile a$ (resp. $ab\smile a\mapsto a\smile ba$). Hence we can perform the operation of the second type if $a,b,a$ are neighbouring points, $b$ is connected by $\smile$ with one of the $a$s and the other $a$ is not a vertex; in this case, we remove the bow and connect $b$ with the other $a$: $$...ab\smile a...\ \mapsto ...a\smile ba...;\ \ \ \ \ ...a\smile b a...\ \mapsto ...a b\smile a....$$ Since both our operations are involutions, we can consider the orbit of a given diagram with respect to the action of the group generated by these operations. Let us show that all diagrams without $\frown$ lie in the same orbit. Indeed, using the operations $a\smile b\mapsto b\smile a$ and $ab\smile a\mapsto a\smile ba$ we put $b$ to the first place so our new diagram starts from $b\smile a$. Then we do the same with the rest of the diagram and so on.
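The two operations and the normalization just described are easy to experiment with on a computer. The sketch below is our own illustration (the encoding of a diagram as a tuple of letters together with a set of adjacent bows is an assumption of the sketch); it enumerates all $\smile$-diagrams for small $m,n$ and checks that the two moves connect them:

```python
from itertools import permutations, product

def moves(state):
    """One-step moves: odd reflections (swap the two letters of a bow)
    and second-type operations (slide a bow past a free outer 'a')."""
    letters, bows = state
    n = len(letters)
    vertices = {v for bow in bows for v in bow}
    for (i, j) in bows:                       # a~b <-> b~a
        new = list(letters)
        new[i], new[j] = new[j], new[i]
        yield (tuple(new), bows)
    for (i, j) in bows:                       # a~ba <-> ab~a
        if letters[i] == 'a' and j + 1 < n and letters[j + 1] == 'a' \
                and j + 1 not in vertices:
            yield (letters, bows - {(i, j)} | {(j, j + 1)})
        if letters[j] == 'a' and i - 1 >= 0 and letters[i - 1] == 'a' \
                and i - 1 not in vertices:
            yield (letters, bows - {(i, j)} | {(i - 1, i)})

def orbit(state):
    seen, todo = {state}, [state]
    while todo:
        for t in moves(todo.pop()):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def all_diagrams(m, n):
    """All diagrams with m a's and n b's: every b bowed to an adjacent a,
    bows pairwise disjoint (only smile-type bows are encoded here)."""
    out = set()
    for letters in set(permutations('a' * m + 'b' * n)):
        bpos = [i for i, c in enumerate(letters) if c == 'b']
        for choice in product((-1, 1), repeat=n):
            bows = set()
            for b, d in zip(bpos, choice):
                a = b + d
                if 0 <= a < m + n and letters[a] == 'a':
                    bows.add((min(a, b), max(a, b)))
            used = [v for bow in bows for v in bow]
            if len(bows) == n and len(used) == len(set(used)):
                out.add((letters, frozenset(bows)))
    return out

diagrams = all_diagrams(3, 2)
components = 0
remaining = set(diagrams)
while remaining:
    components += 1
    remaining -= orbit(next(iter(remaining)))
print(components)   # 1: all smile-diagrams lie in a single orbit
```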
Finally, we obtain the diagram of the form $b\smile ab\smile a\ldots b\smile a\ldots a$. By the same argument, all the diagrams starting from $b\frown a$ lie in the same orbit. Hence, for $\fg\not=D(m,n), m>n$ all diagrams lie in the same orbit, and for $\fg=D(m,n), m>n$ there are two orbits: the diagrams with $\frown$ and the diagrams without $\frown$. ### {#PiPi'} We claim that if $(S,\Pi), (S,\Pi')$ are admissible pairs, then $\Pi=\Pi'$. It is clear that if the claim is valid for some $S$, then it is valid for all $S'$ such that $(S,\Pi)$ is equivalent to $(S',\Pi')$. Thus it is enough to verify the claim for any representative of the orbit. Each $S$ determines the diagram and the diagram determines the order of $x_i$’s (for instance, the diagram $b\smile ab\smile a\ldots b\smile aa\ldots a$ gives $x_{m+n}<x_m<x_{m+n-1}<x_{m-1}<\ldots<x_{m+1}<x_{m-n+1}<x_{m-n}<\ldots<x_1$ for the marking (M) and $x_m<x_{m+n}<x_{m-1}<x_{m+n-1}<\ldots<x_1<x_{n+1}<x_{n}<\ldots<x_{m+1}$ for the marking (N)). For $A(m-1,n-1), B(m,n)$ one has $\{x_i\}_{i=1}^{m+n}=\{1,\ldots,m+n\}$, by \[xis\], so each diagram determines $\Pi$. For $D(m,n),\ m>n$ the above diagram means that $-\xi_{m+n}+\xi_m,-\xi_m+\xi_{m+n-1},\ldots\in \Pi$. Since $2\xi_{m+n}\in\Delta_+$, we conclude that $2\xi_{m+n}\in\Pi$ that is $x_{m+n}=\frac{1}{2}$ and $\{x_i\}_{i=1}^{m+n}=\{\frac{1}{2},\ldots,m+n-\frac{1}{2}\}$. Hence the above diagram determines $\Pi$. For $D(m,n), m\leq n$ the same reasoning shows that the diagram $a\smile bb\smile a\ldots b\smile aa\ldots a$ determines $\Pi$. It remains to consider the case $D(m,n),\ m>n$ and the diagram $a\frown b b\smile ab\smile a\ldots b\smile aa\ldots a$. In this case, $\xi_m+\xi_{m+n}\in S$ so, by above, $x_m=0$ and $\{x_i\}_{i=1}^{m+n}=\{0,\ldots,m+n-1\}$. Thus the diagram determines $\Pi$. ### {#section-14} Consider the case $\fg\not=D(m,n), m>n$. Fix an admissible pair $(S,\Pi)$.
By \[fPi2\], any admissible pair $(S',\Pi')$ is equivalent to an admissible pair $(S,\Pi'')$; by \[PiPi’\] one has $\Pi=\Pi''$ so $(S',\Pi')$ is equivalent to $(S,\Pi)$. Consider the case $\fg=D(m,n), m>n$. Fix admissible pairs $(S,\Pi), \ (S',\Pi')$, where $S$ consists of the roots of the form $\xi_i-\xi_j$ and $S'$ contains $\xi_m+\xi_{m+n}$. Arguing as above, we conclude that any admissible pair $(S'',\Pi'')$ is equivalent either to $(S,\Pi)$ or to $(S',\Pi')$. {#section-15} [cornews]{} (i) If $(S,\Pi),\ (S,\Pi')$ are admissible pairs, then $\Pi=\Pi'$. \(ii) For $\fg\not=D(m,n), m>n$ there is one equivalence class of the pairs $(S,\Pi)$ and the left-hand (resp., right-hand) side of (\[denomKW\]) is the same for all admissible pairs $(S,\Pi)$. \(iii) For $\fg=D(m,n), m>n$ there are two equivalence classes of the pairs $(S,\Pi)$. In the first class $S$ consists of the elements of the form $\pm(\xi_i-\xi_j)$ and in the second class $S$ contains a unique element of the form $\pm(\xi_i+\xi_j)$. The left-hand side (resp., right-hand) of (\[denomKW\]) is the same for all admissible pairs $(S,\Pi)$ belonging to the same class. Steps (ii), (ii’) ================= {#section-16} Assume that $$\label{alpha>0}\begin{array}{l} \forall\alpha\in\Pi\ \ (\alpha,\alpha)\geq 0;\\ W^{\#}\text{ is generated by the set of reflections } \{s_{\alpha}|\ (\alpha,\alpha)>0\}. \end{array}$$ We start from the following lemma. ### {#section-17} [lem1]{} One has \(i) $\rho\geq w\rho$ for all $w\in W^{\#}$; \(ii) the stabilizer of $\rho$ in $W^{\#}$ is generated by the set $\{s_{\alpha}|\ (\alpha,\alpha)>0,\ (\alpha,\rho)=0\}$. Since $(\alpha,\rho)=\frac{1}{2}(\alpha,\alpha)\geq 0$ for all $\alpha\in\Pi$, one has $(\beta,\rho)\geq 0$ for all $\beta\in\Delta_+$. Take $w\in W^{\#}, w\not=\id$. One has $w=w's_{\beta}$, where $w'\in W^{\#},\beta\in\Delta_+$ are such that the length of $w$ is greater than the length of $w'$ and $w'\beta\in\Delta_+$ (see, for example, [@Jbook], A.1). 
One has $$\rho-w\rho=\rho-w'\rho+\frac{2(\rho,\beta)} {(\beta,\beta)}\cdot(w'\beta).$$ Now (i) follows by induction on the length of $w$, since $(\rho,\beta)\geq 0,(\beta,\beta)>0,\ w'\beta\in\Delta_+$. For (ii), note that $\rho\geq w'\rho$ by (i), thus $w\rho=\rho$ forces $\rho=w'\rho$ and $(\rho,\beta)=0$. Hence (ii) also follows by induction on the length of $w$. ### {#supprho} Retain notation of (\[expX\]). For $w\in W^{\#}$ one has $-\varphi(w),|w|\mu\in Q^+$, by definition, and $w\rho\leq \rho$, by . Therefore $\varphi(w)-|w|\mu+w\rho\leq \rho$ and the equality means that $\mu=0,\ w\rho=\rho$ and $\varphi(w)=0$, that is $wS\subset\Delta_+$. The above inequality gives $\supp(X)\subset (\rho-Q^+)$. Moreover, by above, the coefficient of $e^{\rho}$ in the expansion of $X$ is equal to $$\sum_{w\in W^{\#}: w\rho=\rho,\ wS\subset\Delta_+} \sgn(w).$$ Root systems {#Rootsy} ------------ Recall that we consider all choices of $\Delta_+$ with a fixed $\Delta_{+,0}$. From now on we [*assume that $m\geq n$*]{} (we consider the types $A(m,n)$, $B(m,n)$, $B(n,m)$, $D(m,n)$, $D(n,m)$) and we embed our root systems in the standard lattices spanned by $\{\vareps_i,\delta_j: 1\leq i\leq m, 1\leq j\leq n\}$ chosen in such a way that $\Delta^{\#}=\Delta_0\cap\spn\{\vareps_i\}_{i=1}^m$. 
More precisely, for $A(m,n), m\geq n$ we take $$\Delta_{+,0}=\{\vareps_i-\vareps_{i'};\delta_j-\delta_{j'}| 1\leq i<i'\leq m, 1\leq j<j'\leq n\},\ \ \ \Delta_1=\{\pm(\vareps_i-\delta_j)\};$$ for other cases we put $$U':=\{\vareps_i\pm\vareps_{i'};\delta_j\pm\delta_{j'}| 1\leq i<i'\leq m, 1\leq j<j'\leq n\}$$ and then $$\begin{array}{lll} \Delta_{+,0}=U'\cup \{\vareps_i\}_{i=1}^m\cup\{2\delta_j\}_{j=1}^n, & \Delta_1=\{\pm\vareps_i\pm\delta_j,\pm\delta_j\} & \text{ for }B(m,n), m>n \\ & & \text{ and }B(n,n), \Delta^{\#}=B(n);\\ \Delta_{+,0}=U'\cup \{2\vareps_i\}_{i=1}^m\cup\{\delta_j\}_{j=1}^n, & \Delta_1=\{\pm\vareps_i\pm\delta_j,\pm\vareps_i\} & \text{ for }B(n,m), m>n \\ & & \text{ and }B(n,n), \Delta^{\#}=C(n);\\ \Delta_{+,0}=U'\cup \{2\delta_j\}_{j=1}^n, & \Delta_1=\{\pm\vareps_i\pm\delta_j\} & \text{ for }D(m,n), m>n; \\ \Delta_{+,0}=U'\cup \{2\vareps_i\}_{i=1}^m, & \Delta_1=\{\pm\vareps_i\pm\delta_j\} & \text{ for }D(n,m), m\geq n. \end{array}$$ We normalize the form $(-,-)$ by the condition $(\vareps_i,\vareps_j)= -(\delta_i,\delta_j)=\delta_{ij}$. One has $\Delta^{\#}=\Delta_0\cap\spn\{\vareps_i\}_{i=1}^m= \{\alpha\in\Delta_0|\ (\alpha,\alpha)>0\}$. Choice of $(S,\Pi)$ {#rootsys} ------------------- For the case $B(n,n)$ set $$S:=\{\delta_i-\vareps_i\},\ \ \Pi:=\{\delta_1-\vareps_1,\vareps_1-\delta_2, \delta_2-\vareps_2,\ldots, \delta_n-\vareps_n,\vareps_n\},$$ (the root $\vareps_n$ may be even or odd, depending on the choice of $\Delta^{\#}$). 
For other cases set $$S:=\{\vareps_i-\delta_i\}_{i=1}^n.$$ In order to describe $\Pi$ introduce $$P:=\{\vareps_1-\delta_1, \delta_1-\vareps_2,\vareps_2-\delta_2,\delta_2-\vareps_3, \ldots, \vareps_n-\delta_n,\delta_{n}-\vareps_{n+1}, \vareps_{n+1}-\vareps_{n+2},\ldots, \vareps_{m-1}-\vareps_m\}$$ and set $$\begin{array}{lllll} \Delta & A(m-1,n-1) & B(m,n),\ B(n,m),\ m>n & D(n,m),\ m>n & D(m,n),\ m>n\\ \hline \Pi & P & P\cup \{\vareps_m\} & P\cup \{2\vareps_m\} & P\cup\{\vareps_{m-1}+\vareps_{m}\} \end{array}$$ and $$\Pi:=\{\vareps_1-\delta_1, \delta_1-\vareps_2,\vareps_2-\delta_2,\delta_2-\vareps_3, \ldots, \vareps_n-\delta_n,\vareps_n+\delta_n\}\ \ \text{ for } D(n,n).$$ Note that the assumptions (\[alpha>0\]) hold. Step (ii): the case $(\vareps_m+\delta_n)\not\in\Pi$ ---------------------------------------------------- By \[supprho\], $\supp(X)\subset (\rho-Q^+)$ and the coefficient of $e^{\rho}$ in the expansion of $X$ is $\sum_{w\in W^{\#}: w\rho=\rho,\ wS\subset\Delta_+} \sgn(w)$. Thus it is enough to show that $$\label{step4} w\in W^{\#} \text{ s.t. } wS\subset\Delta_+,\ w\rho=\rho\ \Longrightarrow\ w=\id.$$ ### {#Sk} Since $(\alpha,\alpha)\geq 0$ for each $\alpha\in\Pi$, one has $(\rho,\beta)=0$ for $\beta\in\Delta$ iff $\beta$ is a linear combination of isotropic simple roots. Consider the case $\fg\not=D(n,n)$ (one has $\rho=0$ for $D(n,n)$). In this case $(\rho,\beta)=0$ for $\beta\in\Delta^{\#}_+$ forces $\beta=\vareps_i-\vareps_j$ for $i<j\leq \min(m,n+1)$. From  we conclude $$\label{eqlem1} \text{ for } \fg\not=D(n,n)\ \ \ \Stab_{W^{\#}}\rho=\left\{ \begin{array}{ll} S_n, &\text{ if } m=n,\\ S_{n+1}, &\text{ if } m>n, \end{array} \right.$$ where $S_k\subset W^{\#}$ is the symmetric group, consisting of the permutations of $\vareps_1,\ldots, \vareps_k$ (that is for $w\in S_k$ one has $w\vareps_i=\vareps_{j_i}$ and $j_i=i$ for $i>k$). ### {#section-18} Take $w\in \Stab_{W^{\#}}\rho$ such that $wS\subset\Delta_+$. Let us show that $w=\id$.
Consider the case when $\fg\not=B(n,n), D(n,n)$. Then $S=\{\vareps_i-\delta_i\}_{i=1}^n$. Combining (\[eqlem1\]) and the fact that $\vareps_j-\delta_i\in\Delta_+$ iff $j\leq i$, we conclude that $w\vareps_{i}=\vareps_{j_i}$ for $j_i\leq i$ if $i\leq n$ and $j_i=i$ for $i>n+1$. Hence $w=\id$ as required. For $\fg=D(n,n)$ one has $S=\{\vareps_i-\delta_i\}_{i=1}^n$. Since $-\vareps_i-\delta_j\not\in \Delta_+$ for all $i,j$, the condition $wS\subset\Delta_+$ forces $w\in S_n$ (see \[Sk\] for notation). Repeating the above argument, we obtain $w=\id$. For $\fg=B(n,n)$ one has $S=\{\delta_i-\vareps_i\}_{i=1}^n$. Combining (\[eqlem1\]) and the fact that $\delta_i-\vareps_j\in\Delta_+$ iff $j\geq i$, we obtain for all $i=1,\ldots,n$ that $w\vareps_{i}=\vareps_{j_i}$ for some $j_i\geq i$. Hence $w=\id$ as required. This establishes (\[step4\]) and (ii) for the case $(\vareps_m+\delta_n)\not\in\Pi$. Step (ii): the case $(\vareps_m+\delta_n)\in\Pi$ ------------------------------------------------ Consider the case $D(m,n),\ m>n, (\vareps_m+\delta_n)\in\Pi$. We retain notation of \[Rootsy\] and choose a new pair $(S,\Pi)$: $S =\{\vareps_i-\delta_i\}_{i=1}^{n-1}\cup\{\vareps_m+\delta_n\}$ and $$\Pi:=\{\vareps_1-\delta_1,\delta_1-\vareps_2,\ldots, \delta_{n-1}-\vareps_{n}\}\cup\{\vareps_{i}-\vareps_{i+1}\}_{i=n}^{m-2} \cup\{\vareps_{m-1}-\delta_n,\delta_n- \vareps_m,\delta_n+\vareps_m\}.$$ The assumptions (\[alpha>0\]) are satisfied. By \[supprho\], the coefficient of $e^{\rho}$ in the expansion of $X$ is equal to $\sum_{w\in A} \sgn(w)$, where $A:=\{w\in \Stab_{W^{\#}}\rho|\ wS\subset\Delta_+\}$. Let us show that $A=\{ \id,s_{\vareps_{m-1} -\vareps_m}, s_{\vareps_{m-1}-\vareps_m}s_{\vareps_{m-1} +\vareps_m}\}$; this implies that the coefficient of $e^{\rho}$ in the expansion of $X$ is equal to $1$. Take $w\in W^{\#}$ such that $wS\subset\Delta_+$.
Note that $-\vareps_j-\delta_i\not\in\Delta_+$ for all $i,j$ and for $i<n$ one has $\vareps_j-\delta_i\in\Delta_+$ iff $j\leq i$. The assumption $wS\subset\Delta_+$ means that $w\vareps_{i}-\delta_i\in\Delta_+$ for all $i<n$ and $w\vareps_{m}+\delta_n\in\Delta_+$. For $i<n$ this gives $w\vareps_{i}=\vareps_{j_i}$ for some $j_i\leq i$. Hence $w\vareps_{i}=\vareps_i$ for $i=1,\ldots,n-1$. The remaining condition $w\vareps_{m}+\delta_n\in\Delta_+$ means that $w\vareps_{m}=\vareps_{j_m}$ or $w\vareps_{m}=-\vareps_{m}$. For $m=n+1$ one has $\rho=0$, so $w\in A$ iff $w\in W^{\#},\ wS\subset\Delta_+$. Thus, by above, $w\in A$ iff $w\vareps_{i}=\vareps_i$ for $i<n=m-1$ and $w\vareps_{m}\in\{\pm \vareps_{m},\vareps_{m-1}\}$. Take $m>n+1$. The roots $\beta\in\Delta^{\#}_+$ such that $(\rho,\beta)=0$ are of the form $\beta=\vareps_i-\vareps_j$ for $i<j\leq n$ or $\beta=\vareps_{m-1}\pm\vareps_m$. From  we conclude that the subgroup $\Stab_{W^{\#}}\rho$ is a product of $S_n$ defined in \[Sk\] and the group generated by the reflections $s_{\vareps_{m-1}\pm\vareps_m}$. By above, $w\in A$ iff $w\vareps_{i}=\vareps_i$ for $i<m-1$ and $w\vareps_{m}\in\{\pm \vareps_{m},\vareps_{m-1}\}$. Since $W^{\#}$ is the Weyl group of $D(m)$, i.e. the group of signed permutations of $\{\vareps_{i}\}_{i=1}^m$ changing an even number of signs, the set $$\{w\in W^{\#}|\ w\vareps_{i}=\vareps_i\text{ for }i<m-1 \ \&\ \ w\vareps_{m}\in\{\pm \vareps_{m},\vareps_{m-1}\}\}$$ is $\{ \id,s_{\vareps_{m-1} -\vareps_m}, s_{\vareps_{m-1}-\vareps_m}s_{\vareps_{m-1} +\vareps_m}\}$. Hence $A=\{ \id,s_{\vareps_{m-1} -\vareps_{m}}, s_{\vareps_{m-1}-\vareps_{m}}s_{\vareps_{m-1} +\vareps_{m}}\}$ as required. This establishes (ii) for the case $(\vareps_m+\delta_n)\in\Pi$. Step (ii’) ---------- Consider the case $\fgl(n|n)$. One has $\rho=0$. Set $\xi:=\sum_{\beta\in S}\beta=\sum\vareps_i-\sum\delta_i$.
Let us verify that $$\label{skis} \mathbb{Q}\xi\cap \supp (Re^{\rho}-X)=\emptyset.$$ Indeed, it is easy to see that $\xi$ has a unique presentation as a positive linear combination of positive roots: $$\label{ski} \xi=\sum_{\alpha\in\Delta_+}m_{\alpha}\alpha, \ m_{\alpha}\geq 0\ \ \Longrightarrow\ \ m_{\beta}=1\text{ for }\beta\in S,\ m_{\alpha}=0 \text{ for }\alpha\not\in S.$$ This implies that for $s\not\in\mathbb{Z}_{\geq 0}$ the coefficients of $e^{-s\xi}$ in $Re^{\rho}=R$ and in $X$ are equal to zero, and that the coefficient of $e^{-s\xi}$ in $R$ is equal to $(-1)^{sn}$ for $s\in\mathbb{Z}_{\geq 0}$. It remains to show that the coefficient of $e^{-s\xi}$ in $X$ is $(-1)^{sn}=(-1)^{\htt (s\xi)}$. It is enough to verify that $|w|\mu-\varphi(w)=s\xi$ for $w\in W^{\#}$ implies $w=\id$. Assume that $|w|\mu-\varphi(w)=s\xi$. By definition, $|w|\mu,-\varphi(w)\in Q^+$. From (\[ski\]) we conclude that $|w|\mu,-\varphi(w)\in\mathbb{Z}_{\geq 0}S$. Recall that $\varphi(w)=\sum_{\beta\in S: w\beta\in\Delta_-} w\beta$. By (\[ski\]), $-\varphi(w)\in\mathbb{Z}_{\geq 0}S$ implies $(-w\beta)\in S$ for each $\beta\in S$ such that $w\beta\in\Delta_-$. However, $(-w\beta)\not\in S$ for any $w\in W^{\#},\ \beta\in S$, because $-w(\vareps_i-\delta_i)=\delta_i-w\vareps_i\not\in S$. Thus $wS\subset\Delta_+$ that is $w\vareps_i=\vareps_{j_i}$ for $j_i\leq i$. Hence $w=\id$. This establishes (\[skis\]). $W$-invariance: step (iii) {#Winv} ========================== In this section we prove that $X$ defined by the formula (\[defX\]) is $W$-skew-invariant for a certain admissible pair $(S,\Pi)$; for the case $D(m,n),\ m>n$ we prove this for two admissible pairs $(S,\Pi)$ and $(S',\Pi')$, which are representatives of the equivalence classes defined in Sect. \[sect(i)\]. Recall that $X$ is $W^{\#}$-skew-invariant and that $\Delta=\Delta^{\#}\coprod\Delta_2$, that is $W=W^{\#}\times W_2$.
Operator $F$ {#F} ------------ Recall the operator $F:\cR\to\cR$ given by the formula $F(Y):=\sum_{w\in W^{\#}}\sgn(w) wY$. Clearly, $w(F(Y))=F(wY)$ for $w\in W_2$ and $F(wY)=w(F(Y))=\sgn(w) F(Y)$ for $w\in W^{\#}$. In particular, $F(Y)=0$ if $wY=Y$ for some $w\in W^{\#}$ with $\sgn(w)=-1$. One has $$X=F(Y),\ \text{ where } Y:=\frac{e^{\rho}}{\prod_{\beta\in S}(1+e^{-\beta})}.$$ Suppose that $B\in\cR$ is such that $w_2w_1 B=B$ for some $w_1\in W^{\#}, w_2\in W_2$, where $\sgn (w_1w_2)=1$. Then $$w_2^{-1}F(B)=F(w_2^{-1}B)=F(w_1B)=\sgn(w_1) F(B),$$ that is $w_2F(B)=\sgn(w_2)F(B)$. As a result, in order to verify $W$-skew-invariance of $F(B)$ for an arbitrary $B\in\cR$, it is enough to show that for each generator $y$ of $W_2$ there exists $z\in W^{\#}$ such that $\sgn (yz)=1$ and $yzB=B$ (we consider $y$ running through a set of generators of $W_2$). Root systems {#rootsystem} ------------ We retain notation of \[Rootsy\] for $\Delta_{0,+}$ and $\Delta_1$, but, except for $A(m-1,n-1)$ we do not choose the same pairs $(S,\Pi)$ as in \[rootsys\]. For $A(m-1,n-1)$ we choose $S,\Pi$ as in \[rootsys\] ($S:=\{\vareps_i-\delta_i\}$). For other cases we choose $S:=\{\delta_{n-i}-\vareps_{m-i}\}_{i=0}^{n-1}$ and $$\begin{array}{ll} \Pi:=P\cup\{\vareps_{m}\} &\text{ for }B(m,n), B(n,m) \\ \Pi:=P\cup\{\vareps_{m}+\delta_n\}&\text{ for }D(m,n)\ m>n,\\ \Pi:=P\cup\{2\vareps_{m}\},&\text{ for }D(n,m)\ m\geq n, \end{array}$$ where $P:=\{\vareps_1-\vareps_2,\ldots,\vareps_{m-n-1}-\vareps_{m-n}, \vareps_{m-n}-\delta_1,\delta_1-\vareps_{m-n+1}, \vareps_{m-n+1}-\delta_2,\ldots, \delta_n-\vareps_{m}\}$ for $m>n$, and $P:=\{\delta_1-\vareps_{1},\vareps_{1}-\delta_2,\ldots, \delta_n-\vareps_{n}\}$ for $m=n$. For $D(m,n), m>n$ case we consider two admissible pairs: $(S,\Pi)$ and $(S',\Pi)$, where $S':=\{\delta_{n-i}-\vareps_{m-i}\}_{i=1}^{n-1}\cup\{\delta_n+\vareps_m\}$. 
Recall that $(\vareps_i,\vareps_j)=-(\delta_i,\delta_j)=\delta_{ij}$ and notice that $(\alpha,\alpha)\geq 0$ for all $\alpha\in\Pi$. $S_n$-invariance {#Sninv} ---------------- Let $S_n\subset W_2$ be the group of permutations of $\delta_1,\ldots,\delta_n$. In all cases $S$ is of the form $S=\pm\{\delta_i-\vareps_{r+i}\}$ for $r=0$ or $r=m-n$. For $i=1,\ldots, n-1$ one has $(\rho,\delta_i-\delta_{i+1})= (\rho,\vareps_{r+i}-\vareps_{r+i+1})=0$. Therefore the reflections $s_{\vareps_{r+i}-\vareps_{r+i+1}}, s_{\delta_i-\delta_{i+1}}$ stabilize $\rho$. Since $s_{\vareps_{r+i}-\vareps_{r+i+1}} s_{\delta_i-\delta_{i+1}}$ stabilizes the elements of $S$, one has $s_{\vareps_{r+i}-\vareps_{r+i+1}}s_{\delta_i-\delta_{i+1}}Y=Y$ for $i=1,\ldots,n-1$. Using \[F\] we conclude that $X$ is $S_n$-skew-invariant. In particular, this establishes $W$-invariance of $X$ for $A(m,n)$-case. For $D(m,n), m>n$ case consider the admissible pair $(S',\Pi)$. Arguing as above, one sees that $s_{\delta_i-\delta_{i+1}}(X)=-X$ for $i=1,\ldots,n-2$. Since $\delta_{n-1}-\vareps_{m-1}, \delta_n+\vareps_m\in S'$ the product $w:=s_{\vareps_{m-1}+\vareps_{m}}s_{\delta_{n-1}-\delta_{n}}$ stabilizes the elements of $S'$. Since $(\rho,\delta_i)= (\rho,\vareps_{m-n+i})=0$ for $i=1,\ldots,n$, $w$ stabilizes $\rho$. Thus $wY=Y$ and so $s_{\delta_{n-1}-\delta_{n}}X=-X$, by \[F\]. Hence $X$ is $S_n$-skew-invariant. $B(m,n), B(n,m), D(m,n), m>n$ cases ----------------------------------- In this case $W_2$ is the group of signed permutations of $\{\delta_i\}_{i=1}^n$ so it is generated by $s_{\delta_n}$ and the elements of $S_n$. In the light of \[F\] and \[Sninv\], it is enough to verify that $s_{\delta_n}s_{\vareps_m}Y=Y$. Set $$\beta:=\delta_n-\vareps_m\in S.$$ Consider the cases $B(m,n), B(n,m)$. In this case $W^{\#}$ is the group of signed permutations of $\{\vareps_i\}_{i=1}^m$. 
Since $\beta,\vareps_m\in\Pi$, one has $(\rho, \vareps_m)=(\rho, \delta_n)=\frac{1}{2}$ so $s_{\delta_n}s_{\vareps_m}\rho=\rho+\beta$. Clearly, $s_{\delta_n}s_{\vareps_m}$ stabilizes the elements of $S\setminus\{\beta\}$, and $s_{\delta_n}s_{\vareps_m}\beta=-\beta$. As a result, $s_{\delta_n}s_{\vareps_m}Y=Y$ as required. Consider the case $D(m,n), m>n$. In this case $W^{\#}$ is the group of signed permutations of $\{\vareps_i\}$, which change an even number of signs. Notice that $s_{\vareps_{m-n}}s_{\vareps_{m}}\in W^{\#}$ and $\sgn(s_{\vareps_{m-n}}s_{\vareps_{m}})=1$. Set $w:=s_{\vareps_{m-n}}s_{\vareps_{m}}s_{\delta_n}$. One has $(\rho,\delta_n)=(\rho,\vareps_m)=(\rho,\vareps_{m-n})=0$ so $w\rho=\rho$. Since $w$ stabilizes the elements of $S\setminus\{\beta\}$, one has $wY=e^{-\beta}Y$. We obtain $$s_{\delta_n}F(Y)=s_{\delta_n}F(s_{\vareps_{m-n}}s_{\vareps_{m}}Y)= F(s_{\delta_n}s_{\vareps_{m-n}}s_{\vareps_{m}}Y)=F(wY)= F(e^{-\beta}Y)$$ and so $$(1+s_{\delta_n})F(Y)=F\bigl((1+e^{-\beta})Y\bigr) =F\bigl(\frac{e^{\rho}}{\prod_{\beta'\in S\setminus\{\beta\}}(1+e^{-\beta'})}\bigr)=0,$$ where the last equality follows from the fact that the reflection $s_{\vareps_{m-n}-\vareps_{m}}$ stabilizes $\rho$ and $S\setminus\{\beta\}$. Hence $(1+s_{\delta_n})F(Y)=0$ as required. For the admissible pair $(S',\Pi)$ we obtain the required formula $(1+s_{\delta_n})F(Y)=0$ along the same lines, substituting $\delta_n+\vareps_{m}$ for $\beta$. Case $D(n,m), m\geq n$ ---------------------- In this case $W^{\#}$ is the group of signed permutations of $\{\vareps_i\}$, and $W_2$ is the group of signed permutations of $\{\delta_i\}_{i=1}^n$, which change an even number of signs. Note that the reflection $s_{\delta_i}$ does not lie in $W_2$, but $s_{\delta_i}\Delta=\Delta$, so $s_{\delta_i}$ acts on $\cR$ and this action commutes with the operator $F$.
Since $W_2$ is generated by $s_{\delta_1}s_{\delta_2}$ and the elements of $S_n$, it is enough to verify that $s_{\delta_1}s_{\delta_2}F(Y)=F(Y)$. Set $\beta_i:=\delta_i-\vareps_{m-n+i}\in S$. One has $(\rho,\vareps_{m-n+i})=(\rho,\delta_i)=1$ so $s_{\delta_i}s_{\vareps_{m-n+i}}\rho=\rho+2\beta_i$ that is $s_{\delta_i}s_{\vareps_{m-n+i}} Y=e^{\beta_i}Y$. Therefore $$(1-s_{\delta_i})F(Y)=F(Y)+F(s_{\delta_i}s_{\vareps_{m-n+i}} Y)= F\bigl((1+e^{\beta_i})Y\bigr)=F\bigl(\frac{e^{\rho+\beta_i}}{ \prod_{\beta\in S\setminus\{\beta_i\}}(1+e^{-\beta})}\bigr)=0,$$ because $s_{\vareps_{m-n+i}}\in W^{\#}$ stabilizes $\rho+\beta_i$ and the elements of $S\setminus\{\beta_i\}$. Thus $s_{\delta_i}F(Y)=F(Y)$ so $s_{\delta_i}s_{\delta_j}F(Y)=F(Y)$ for any $i,j$. Hence $X=F(Y)$ is $W_2$-skew-invariant. [MMM]{} M. Gorelik, [*Weyl denominator identity for affine Lie superalgebras with non-zero dual Coxeter number*]{}. A. Joseph, [*Quantum groups and their primitive ideals*]{}, Ergebnisse der Mathematik und ihrer Grenzgebiete 3, [**29**]{}, Springer Verlag, 1995. V. G. Kac, [*Lie superalgebras*]{}, Adv. in Math., [**26**]{}, (1977), 8–96. V. G. Kac, [*Characters of typical representations of classical Lie superalgebras*]{}, Comm. in Algebra, [**5**]{}, (1977), No. 8, 889–897. V. G. Kac, [*Infinite-dimensional Lie algebras*]{}, Cambridge University Press, 1990. V. G. Kac, M. Wakimoto, [*Integrable highest weight modules over affine superalgebras and number theory*]{}, in Lie Theory and Geometry, 415-456, Progress in Math., 123, Birkhauser Boston, Boston, MA, 1994. V. Serganova, [*Kac-Moody superalgebras and Integrability*]{}. [^1]: Supported in part by ISF Grant No. 1142/07
--- abstract: 'Using a simple off-axis jet model of GRBs, we can reproduce the observed unusual properties of the prompt emission of GRB980425, such as the extremely low isotropic equivalent $\gamma$-ray energy, the low peak energy, the high fluence ratio, and the long spectral lag when the jet with the standard energy of $\sim10^{51}$ ergs and the opening half-angle of $10\degr\lesssim\Delta\theta\lesssim30\degr$ is seen from the off-axis viewing angle $\theta_v\sim \Delta \theta +10\gamma^{-1}$, where $\gamma$ is a Lorentz factor of the jet. For our adopted fiducial parameters, if the jet that caused GRB980425 is viewed from the on-axis direction, the intrinsic peak energy $E_p(1+z)$ is $\sim$2.0–4.0 MeV, which corresponds to those of GRB990123 and GRB021004. Our model might be able to explain the other unusual properties of this event. We also discuss the connection of GRB980425 in our model with the X-ray flash, and the origin of a class of GRBs with small $E_\gamma$ such as GRB030329.' author: - Ryo Yamazaki - Daisuke Yonetoku - Takashi Nakamura title: 'GRB980425 in the Off-Axis Jet Model of the Standard GRBs' --- [ address=[Department of Physics, Kyoto University, Kyoto 606-8502, Japan]{} ]{} [ address=[Department of Physics, Kanazawa University, Kakuma, Kanazawa, Ishikawa 920-1192, Japan]{} ]{} [ address=[Department of Physics, Kyoto University, Kyoto 606-8502, Japan]{} ]{} INTRODUCTION ============ There are some GRBs that were thought to be associated with SNe [@dv03; @stanek]. GRB980425 / SN1998bw, located at $z=0.0085$ (36 Mpc), was the first event of such class [@ga98; @kul98; @pian00; @pian03]. It is important to investigate whether GRB980425 is similar to more or less typical long duration GRBs. However, GRB980425 showed unusual observational properties. 
The isotropic equivalent $\gamma$-ray energy is $E_{iso}\sim 6\times10^{47}$ ergs and the geometrically corrected energy is $E_\gamma=(\Delta\theta)^2E_{iso}/2\sim3 \times10^{46}$ ergs $(\Delta\theta/0.3)^2$, where $\Delta\theta$ is the unknown jet opening half-angle. These energies are much smaller than the typical values of GRBs. The other properties of GRB980425 are also unusual: the large low-energy flux [@fro00a], the low variability [@frr00], the long spectral lag [@norris00], and the slowly decaying X-ray afterglow [@pian00; @pian03]. Previous works suggest that the above peculiar observed properties may be explained if the standard jet is seen from an off-axis viewing angle (e.g. [@in01; @na01]). In this scenario, the relativistic beaming effect reduces $E_{iso}$ and hence $E_\gamma$. In this paper, in order to explain all of the observed properties of GRB980425, we reconsider the prompt emission of this event using our simple jet model [@yin02; @yin03a; @yin03b; @yyn03]. SPECTRAL ANALYSIS OF GRB980425 USING BATSE DATA {#sec:analysis} =============================================== We discuss the time-averaged observed spectral properties of GRB980425. Using the BATSE data of GRB980425, we analyze the spectrum within the FWHM time interval of the peak flux in the light curve of BATSE channel 2. We fit the observed spectrum with the Band function. The best-fit values are $\alpha=-1.0\pm0.3$, $\beta=-2.1\pm0.1$, and $E_p=54.6\pm20.9$ keV, which are consistent with those derived in previous works [@fro00a; @ga98]. This spectral property is similar to that of the recently identified class of X-ray flashes (XRFs) [@He01a; @ki02]. The observed fluence of the entire emission is $S$(20–2000 keV) $=(4.0\pm0.74)\times10^{-6}$ erg cm$^{-2}$, thus we find $E_{iso}=(6.4\pm1.2)\times10^{47}$ ergs. The fluence ratio is $R_s=S$(20–50 keV)/$S$(50–320 keV) $=0.34\pm 0.036$.
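The quoted $E_{iso}$ can be checked directly from the measured fluence and the source distance. The sketch below (treating the 36 Mpc distance as exact and neglecting cosmological corrections, which are tiny at $z=0.0085$) reproduces the quoted value:

```python
import math

MPC_CM = 3.086e24          # centimeters per megaparsec
S = 4.0e-6                 # observed fluence in 20-2000 keV, erg cm^-2
d_L = 36.0 * MPC_CM        # luminosity distance to z = 0.0085

# Isotropic-equivalent gamma-ray energy: the fluence spread over a full sphere.
E_iso = 4.0 * math.pi * d_L**2 * S
print(f"E_iso = {E_iso:.2e} erg")   # ~6e47 erg, consistent with the quoted value
```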
MODEL OF PROMPT EMISSION OF GRB980425 ===================================== We use a simple jet model of the prompt emission of GRBs, where an instantaneous emission from an infinitesimally thin shell is adopted [@in01; @yin02; @yin03a; @yin03b; @yyn03]. See @yyn03 for details. We fix the model parameters as $\alpha_B=-1$, $\beta_B=-2.1$, $\gamma\nu'_0=2600\,{\rm keV}$, and $\gamma=100$. The normalization of the emitted luminosity is determined so that $E_\gamma$ equals the observationally preferred value of $1.15\times10^{51\pm0.35}(h/0.7)^{-2}$ ergs [@bloom03] when we see the jet from the on-axis viewing angle $\theta_v=0$. Our calculations show that the on-axis intrinsic peak energy must be $E_p^{(\theta_v=0)}(1+z)\sim1.54\gamma\nu'_0\sim 4.0$ MeV in order to reproduce the observed quantities of GRB980425. Indeed, there are some GRBs with such high intrinsic $E_p$; for example, $E_p(1+z)\sim2.0$ MeV for GRB990123 and 3.6 MeV for GRB021004 [@amati02; @bar03]. The left panel of Figure 1 shows $E_{iso}$ as a function of the viewing angle $\theta_v$. When $\theta_v\lesssim\Delta\theta$, $E_{iso}$ is constant, while for $\theta_v\gtrsim\Delta\theta$, $E_{iso}$ is considerably smaller than the typical value of $\sim10^{51-53}$ ergs because of the relativistic beaming effect. We next calculate $E_p$ and $R_s$ for the set of $\Delta\theta$ and $\theta_v^\ast$ that reproduces the observed $E_{iso}$ of GRB980425. For our parameters, $\Delta\theta$ should be between $\sim$18$\degr$ and $\sim$31$\degr$, and then $\theta_v^\ast$ ranges between $\sim$24$\degr$ and $\sim$35$\degr$ in order to reproduce the observational results. A thorough discussion of the right panel of Figure 1 can be found in [@yyn03]. ![ (Left panel): the isotropic equivalent $\gamma$-ray energy $E_{iso}$ is shown as a function of the viewing angle $\theta_v$ for a fixed jet opening half-angle $\Delta\theta$. The source is located at $z=0.0085$. The values of $\Delta\theta$ are shown in parentheses. 
Solid lines correspond to the case of $\gamma\nu'_0=2600$ keV, while dotted lines to $\gamma\nu'_0=1300$ keV. The horizontal dashed line represents the observed value of GRB980425. (Right panel): the upper panel shows $\theta_v^\ast$ for which $E_{iso}$ is the observed value of GRB980425, while the middle and the lower panels represent the fluence ratio $R_s^\ast=R_s^{(\theta_v=\theta_v^\ast)}$ and the peak energy $E_p^\ast=E_p^{(\theta_v=\theta_v^\ast)}$, respectively. Solid lines correspond to the fiducial case. The dotted lines represent regions where $E_{iso}$ becomes $(6.4\pm1.2)\times10^{47}$ ergs when $E_\gamma$ is within the 1 $\sigma$ and 5 $\sigma$ levels around the fiducial value, respectively. The dot-dashed line in the upper panel represents $\theta_v^\ast=\Delta\theta$. Horizontal dashed lines in the middle and the lower panels represent the observational bounds. ](nakamura_fig1 "fig:"){width="45.00000%" height=".27\textheight"} ![ (Left panel): the isotropic equivalent $\gamma$-ray energy $E_{iso}$ is shown as a function of the viewing angle $\theta_v$ for a fixed jet opening half-angle $\Delta\theta$. The source is located at $z=0.0085$. The values of $\Delta\theta$ are shown in parentheses. Solid lines correspond to the case of $\gamma\nu'_0=2600$ keV, while dotted lines to $\gamma\nu'_0=1300$ keV. The horizontal dashed line represents the observed value of GRB980425. (Right panel): the upper panel shows $\theta_v^\ast$ for which $E_{iso}$ is the observed value of GRB980425, while the middle and the lower panels represent the fluence ratio $R_s^\ast=R_s^{(\theta_v=\theta_v^\ast)}$ and the peak energy $E_p^\ast=E_p^{(\theta_v=\theta_v^\ast)}$, respectively. Solid lines correspond to the fiducial case. The dotted lines represent regions where $E_{iso}$ becomes $(6.4\pm1.2)\times10^{47}$ ergs when $E_\gamma$ is within the 1 $\sigma$ and 5 $\sigma$ levels around the fiducial value, respectively. The dot-dashed line in the upper panel represents $\theta_v^\ast=\Delta\theta$. 
Horizontal dashed lines in the middle and the lower panels represent the observational bounds. ](nakamura_fig2 "fig:"){width="40.00000%" height=".3\textheight"} DISCUSSION {#sec:dis} ========== We have found that when a jet with opening half-angle $\Delta\theta\sim10$–30$\degr$ is seen from an off-axis viewing angle of $\theta_v\sim\Delta\theta+6\degr$, the observed quantities can be well explained. The observed low variability can be explained since only subjets at the edge of the cone contribute to the observed quantities [@yin02]. If the time unit parameter $r_0/c\beta \gamma^2$ is about 3 sec, which is in the reasonable parameter range, the spectral lag of GRB980425 can also be explained. Our result might also explain the slowly decaying X-ray afterglow of GRB980425. If we assume the density profile of the ambient matter as $n=n_0 (r/r_{ext})^{-2}$ with $n_0r_{ext}^2=4\times10^{17}$ cm$^{-2}$, the break in the afterglow light curve should occur at $t_b = 3.1\times10^2$ days $E_{51}(\Theta/0.4\,{\rm rad})^2$, where $\Theta$ is defined by $\Theta^2=(\Delta\theta)^2+\theta_v^2$, and $E$ is the total energy in the collimated jet [@na99; @na01]. Since our calculation suggests that $\Theta$ should range between 0.4 and 0.67 rad, $t_b$ is consistent with the observation [@na01]. Up to the break time, one can estimate the flux in the X-ray band as $F$(2–10 keV) $\propto t^{-0.2}$, where we assume $\theta_v\gg\Delta\theta$ and a spectral index of the accelerated electrons of $p=2.2$ [@na99; @na01]. This result is also consistent with the observation [@pian00; @pian03]. Furthermore, the adopted value of $n_0r_{ext}^2$ corresponds to a mass loss rate of the progenitor star of $\dot{M}=1.3\times10^{-6}M_\odot\,$yr$^{-1}$ $(v_{\rm W}/10^3\, {\rm km}\,{\rm s}^{-1})$, which might explain the radio data (see [@wa03]). The observed quantities of small $E_p$ and large fluence ratio $R_s$ are typical values for XRFs [@fro00a; @He01a; @ki02]. 
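The softening of the spectrum by off-axis viewing can be illustrated with a crude point-source estimate. Our actual calculations integrate over the jet surface, so the sketch below (using the fiducial $\gamma=100$, $\gamma\nu'_0=2600$ keV, and a viewing angle $\sim6\degr$ outside the jet edge) is only indicative:

```python
import math

gamma = 100.0
beta = math.sqrt(1.0 - 1.0 / gamma**2)
nu0 = 2600.0 / gamma            # comoving break energy, keV (gamma * nu0' = 2600 keV)

def doppler(theta):
    """Relativistic Doppler factor for emission viewed at angle theta (point source)."""
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

# Angle measured from the *near edge* of the jet cone:
# theta_v - Delta_theta ~ 6 deg for the fiducial off-axis geometry.
dtheta = math.radians(6.0)
Ep_off = doppler(dtheta) * nu0   # observed peak energy estimate, keV
print(f"delta = {doppler(dtheta):.2f}, E_p ~ {Ep_off:.0f} keV")
```

The resulting $E_p$ of a few tens of keV is of the same order as the observed $54.6\pm20.9$ keV, illustrating how the relativistic Doppler effect pulls the on-axis $\sim$MeV peak down into the X-ray band.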
The operational definition of a [*BeppoSAX*]{} XRF is a fast X-ray transient with duration less than $\sim10^3$ s which is detected by the WFCs and not detected by the GRBM. If the distance to the source of GRB980425 were larger than $\sim90$ Mpc, the observed flux in the $\gamma$-ray band would have been less than the limiting sensitivity of the GRBM, so that the event would have been detected as an XRF. We might also be able to explain the origin of a class of GRBs with low $E_\gamma$, such as GRB980326 and GRB981226 [@bloom03], and GRB030329, whose $E_\gamma$ is about $\sim 5 \times 10^{49}$ ergs if a jet break time of $\sim0.48$ days is assumed [@tamagawa; @vanderspek]. Let us consider a jet seen from a viewing angle $\theta_v\sim\Delta\theta+\gamma_i^{-1}$, where $\gamma_i$ is the Lorentz factor of a prompt $\gamma$-ray emitting shell. Due to the relativistic beaming effect, the observed $E_\gamma$ of such a jet becomes an order of magnitude smaller than the standard energy. At the same time, the observed peak energy $E_p$ is small because of the relativistic Doppler effect. In fact, the observed $E_p$ of the above three bursts are less than $\sim$70 keV. In our model, the fraction of low-$E_\gamma$ GRBs becomes $2/(\gamma_i\Delta\theta)\sim 0.1$ since the mean value of $\Delta \theta \sim 0.2$, while a few of them are observed in $\sim$30 samples [@bloom03]. In the later phase, the Lorentz factor of the afterglow emitting shock $\gamma_f$ is smaller than $\gamma_i$, so that $\theta_v<\Delta\theta+\gamma_f^{-1}$. Then, the observed properties of the afterglow may be similar to the on-axis case $\theta_v\ll\Delta\theta$; hence the observational estimation of the jet break time and the jet opening angle remains the same. This work was supported in part by Grant-in-Aid for Scientific Research of the Japanese Ministry of Education, Culture, Sports, Science and Technology, No. 05008 (R.Y.), No. 14047212 (T.N.), and No. 14204024 (T.N.).

Amati, L., et al. 2002, A&A, 390, 81\
Barraud, C., et al. 2003, A&A, 400, 1021\
Bloom, J.S., et al. 2003, ApJ, 594, 674\
Della Valle, M., et al. 2003, A&A, 406, L33\
Fenimore, E. E., & Ramirez-Ruiz, E. 2000, astro-ph/0004176\
Frontera, F., et al. 2000a, ApJS, 127, 59\
Galama, T.J., et al. 1998, Nature, 395, 670\
Heise, J., et al. 2001, in Proc. 2nd Rome Workshop: GRBs in the Afterglow Era, astro-ph/0111246\
Ioka, K., & Nakamura, T. 2001, ApJ, 554, L163\
Kippen, R. M., et al. 2002, in Proc. Woods Hole Gamma-Ray Burst Workshop, astro-ph/0203114\
Kulkarni, S.R., et al. 1998, Nature, 395, 663\
Nakamura, T. 1999, ApJ, 522, L101\
Nakamura, T. 2001, Prog. Theor. Phys. Suppl., 143, 50\
Norris, J.P., Marani, G.F., & Bonnell, J.T. 2000, ApJ, 534, 248\
Pian, E., et al. 2000, ApJ, 536, 778\
Pian, E., et al. 2003, astro-ph/0304521\
Stanek, K.Z., et al. 2003, ApJ, 591, L71\
Tamagawa, T., et al. 2003, in these proceedings\
Vanderspek, R., et al. 2003, GCN circ. 1997\
Waxman, E. 2003, astro-ph/0310320\
Yamazaki, R., Ioka, K., & Nakamura, T. 2002, ApJ, 571, L31\
Yamazaki, R., Ioka, K., & Nakamura, T. 2003a, ApJ, 591, 283\
Yamazaki, R., Ioka, K., & Nakamura, T. 2003b, ApJ, 593, 941\
Yamazaki, R., Yonetoku, D., & Nakamura, T. 2003, ApJ, 594, L79
--- author: - 'Ivan Bliznets [^1]' - 'Fedor V. Fomin [^2]' - 'Marcin Pilipczuk [^3]' - 'Michał Pilipczuk [^4]' bibliography: - '../completion.bib' title: | A subexponential parameterized algorithm for\ <span style="font-variant:small-caps;">Interval Completion</span>[^5] --- Introduction {#sec:intro} ============ Preliminaries {#sec:prelims} ============= Overview of the algorithm {#sec:overview} ========================= Modules and neighborhood classes {#sec:neighbors} ================================ Listing potential maximal cliques and sections {#sec:pmc} ============================================== Guessing fill-in edges with fixed endpoint {#sec:fill-in} ========================================== Small-separation lemma {#sec:left-right} ====================== Dynamic programming {#sec:dp} =================== Conclusions {#sec:conc} =========== Appendix {#sec:boring .unnumbered} ======== [^1]: St. Petersburg Academic University of the Russian Academy of Sciences, Russia, `ivanbliznets@tut.by`. [^2]: Department of Informatics, University of Bergen, Norway, `fomin@ii.uib.no`. [^3]: Department of Computer Science, University of Warwick, United Kingdom, `M.Pilipczuk@dcs.warwick.ac.uk`. [^4]: Faculty of Mathematics, Computer Science, and Mechanics, University of Warsaw, Poland, `michal.pilipczuk@mimuw.edu.pl`. [^5]: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 267959
--- abstract: 'The stringent requirements of a 1,000 times increase in data traffic and one millisecond round trip latency have made limiting the potentially tremendous ensuing energy consumption one of the most challenging problems for the design of the upcoming fifth-generation (5G) networks. To enable sustainable 5G networks, new technologies have been proposed to improve the system energy efficiency and alternative energy sources are introduced to reduce our dependence on traditional fossil fuels. In particular, various 5G techniques target the reduction of the energy consumption without sacrificing the quality-of-service. Meanwhile, energy harvesting technologies, which enable communication transceivers to harvest energy from various renewable resources and ambient radio frequency signals for communication, have drawn significant interest from both academia and industry. In this article, we provide an overview of the latest research on both green 5G techniques and energy harvesting for communication. In addition, some technical challenges and potential research topics for realizing sustainable green 5G networks are also identified.' author: - bibliography: - 'IEEEabrv.bib' - 'mybib.bib' title: An Overview of Sustainable Green 5G Networks --- Green radio, 5G techniques, energy harvesting. Introduction ============ The fifth-generation (5G) wireless networks will support up to a 1,000-fold increase in capacity compared to the existing networks. It is anticipated to connect at least 100 billion devices worldwide with approximately 7.6 billion mobile subscribers due to the tremendous popularity of smartphones, electronic tablets, sensors, etc. and provide an up to 10 Gb/s individual user experience [@andrews2014will]. 
Along with the dramatic traffic explosion and device proliferation, 5G wireless networks also have to integrate human-to-machine and machine-to-machine communications in order to facilitate more flexible networked social information sharing aiming for one million connections per square kilometer. Consequently, sensors, accessories, and tools are expected to become wireless communication entities exchanging information, giving rise to the well-known “Internet of Things (IoT)”. With such tremendously expanding demand for wireless communications in the future, researchers are currently looking for viable solutions to meet the stringent throughput requirement. In this regard, three paradigms have emerged: - Reduce the transmitter-receiver (Tx-Rx) distance and improve frequency reuse: ultra-dense networks (UDNs) and device-to-device (D2D) communications; - Exploit unused and unlicensed spectrum: millimeter wave (mmWave) communications and Long Term Evolution (LTE) in unlicensed spectrum (LTE-U); - Enhance spectral efficiency (SE) by deploying a massive amount of antennas: massive multiple-input multiple-output (M-MIMO). The technologies listed above increase the system throughput from three different angles. However, the performance gains introduced by these technologies do not come for free. For example, M-MIMO exploits a large number of antennas for potential multiplexing and diversity gains at the expense of an escalating circuit power consumption in the radio frequency (RF) chains which scales linearly with the number of antennas. In addition, power-hungry transceiver chains and complex signal processing techniques are needed for reliable mmWave communications to operate at extremely high frequencies. In light of this, the network energy consumption may no longer be sustainable and designing energy-efficient 5G networks is critical and necessary. 
In fact, in recent years, energy consumption has become a primary concern in the design and operation of wireless communication systems, motivated by the desire to lower the operating cost of the base stations (BSs), prolong the lifetime of the user terminals (UTs), and also protect the environment. As a result, energy efficiency (EE), measured in bits-per-Joule, has emerged as a new prominent figure of merit and has become the most widely adopted green design metric for wireless communication systems [@chen2011fundamental]. ![image](sustainable_green_5g_33){width="80.00000%"} Meanwhile, it should also be noted that EE cannot be improved indefinitely by only applying spectrally efficient communication technologies, due to the constraint imposed by the Shannon capacity bound as well as the non-negligible circuit power consumption. Consequently, even with the aforementioned 5G paradigms for improving EE, the power consumption will still grow because of the explosive future data rate requirements. Therefore, improving EE can only alleviate the power consumption problem to a certain extent and is insufficient for enabling sustainable 5G communications. Hence, energy harvesting technologies, which allow BSs and devices to harvest energy from renewable resources (solar, wind, etc.) and even RF signals (television signals, interference signals, etc.), have received significant attention recently. They provide green energy supply solutions for powering different components of wireless communication networks. Therefore, integrating energy harvesting technologies into future wireless networks is imperative [@gunduz2014designing]. Gaining a better understanding of the specific challenges of energy harvesting technologies and alternative energy sources will pave the way for incorporating them smoothly into the upcoming 5G wireless network planning, design, and deployment. 
This article focuses on the state-of-the-art of energy harvesting and green 5G technologies, which will create an ecosystem of 5G wireless networks as shown in Fig. \[system\]. The rest of this article is organized as follows. After briefly introducing the tradeoff between EE and spectral efficiency (SE) in Section II, we discuss green 5G technologies from the perspectives of enlarging spectrum availability, shortening Tx-Rx distances, and enhancing the spatial degrees of freedom in Sections III, IV, and V, respectively. Then, energy harvesting technologies for 5G systems are discussed in Section VI, and conclusions are drawn in Section VII.

  Technology & High EE at & Coverage & Transmit Power & Circuit Power & Signalling Overhead\
  mmWave & BS and UT & 200 m & Low & High & High\
  LTE-U & BS and UT & 500 m & Moderate & Moderate & Moderate\
  UDNs & UT & 10-200 m & Low & High & High\
  D2D & BS & 2-100 m & Low & Low & Moderate\
  M-MIMO & UT & 1000 m & Low & High & High\

![Fundamental tradeoff between EE and SE.[]{data-label="tradeoff"}](EE_SE_tradeoff){width="45.00000%"} General EE-SE Tradeoff ====================== In general, the system achievable throughput or SE can be expressed as $$\begin{aligned} \label{SE} {\rm{SE}} =K\times B\times N\times \log_2\bigg(1+{\rm{SINR}}(d)\bigg),\end{aligned}$$ where $K$, $B$, $N$, and $d$ are the frequency reuse factor, the signal bandwidth, the number of spatial beams (spatial multiplexing factor), and the single link distance, respectively. $\rm{SINR}$ is the signal-to-interference-plus-noise ratio at the receiver, which increases with decreasing $d$. 
Correspondingly, the system EE can be expressed as $$\begin{aligned} \label{EE} {\rm{EE}} = \frac{K\times B\times N\times \log_2\bigg(1+{\rm{SINR}}(d)\bigg)}{P_{\rm{t}} + P_{\rm{c}}},\end{aligned}$$ where $P_{\rm{t}}$ and $P_{\rm{c}}$ are the consumed transmit and circuit powers, respectively. From (\[SE\]), the SE of wireless networks can be improved by increasing the frequency reuse factor (reducing the Tx-Rx distance), the signal bandwidth, and/or the number of spatial beams, which will be discussed in the subsequent three sections, respectively. Table \[table1\] compares various performance aspects of several 5G technologies. From the table, most of the 5G technologies enable a reduction of the transmit power at the expense of incurring additional circuit power consumption due to the required hardware expansion, sophisticated signal processing, etc. For point-to-point link level communications, where $K$, $B$, $N$, and $d$ in (\[SE\]) and (\[EE\]) are fixed, the relation between EE and SE can be analyzed using the approach in [@chen2011fundamental], which is illustrated in Fig. \[tradeoff\]. From the figure, - if the circuit power consumption is ignored, i.e., $P_{\rm{c}}=0$, the EE decreases monotonically with SE; - if the circuit power consumption is considered, i.e., $P_{\rm{c}}>0$, the EE increases with SE below a threshold and decreases with SE beyond the threshold; - as the SE increases, regardless of the circuit power, the EE eventually converges to the same values as for $P_{\rm{c}}=0$, due to the domination of the transmit power; - reducing the circuit power enlarges the EE-SE tradeoff region. Although observed for a single communication link, the fundamental tradeoff between EE and SE in Fig. \[tradeoff\] carries over to more complicated systems employing the aforementioned 5G technologies. 
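The single-link tradeoff described above can be illustrated with a minimal numerical sketch (an illustrative assumption, not the system-level model of this article): normalizing $K$, $B$, $N$, the channel gain, and the noise power to one, the transmit power needed to sustain a target SE is $P_{\rm t}=2^{\rm SE}-1$, so ${\rm EE}={\rm SE}/(2^{\rm SE}-1+P_{\rm c})$.

```python
def energy_efficiency(se, p_circuit):
    """EE (bits/Joule) for a normalized single link: B = 1 Hz, unit channel gain."""
    p_transmit = 2.0**se - 1.0          # power required to support the target SE
    return se / (p_transmit + p_circuit)

grid = [0.01 * k for k in range(1, 1001)]  # SE from 0.01 to 10 bits/s/Hz

for pc in (0.0, 1.0):
    best = max(grid, key=lambda se: energy_efficiency(se, pc))
    print(f"P_c = {pc}: EE maximized at SE = {best:.2f}")
```

For $P_{\rm c}=0$ the EE is maximized at the smallest SE on the grid (monotonic decrease), while for $P_{\rm c}>0$ the maximum moves to a strictly positive SE, matching the qualitative behavior of Fig. \[tradeoff\].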
Besides, the EE-SE tradeoff can be adjusted via different system configurations such as spectrum management, frequency reuse, spatial multiplexing, power allocation, etc. In addition, energy harvesting technologies can provide green energy, which allows 5G networks to possibly operate at higher SEs compared to conventional energy-limited networks. Green 5G Technologies: Enlarging Spectrum Availability ====================================================== Given the extreme shortage of available spectrum at traditional cellular frequencies, both mmWave (30 to 300 GHz) and LTE-U (5 GHz) communications aim at boosting the throughput by directly expanding the available radio spectrum for cellular networks, i.e., increasing $B$ in (\[SE\]) and (\[EE\]). Millimeter Wave ---------------- MmWave communications exploit the enormous chunks of spectrum available at mmWave frequencies and are expected to push the mobile data rates to gigabits-per-second for 5G wireless networks. However, the propagation conditions at the extremely high frequencies of the mmWave bands lead to several fundamental issues including high path attenuation, severe blockage effects, and atmospheric and rain absorption, which pose new challenges for providing guaranteed quality-of-service (QoS). Thanks to the small wavelengths of the mmWave bands, a large number of antenna elements can be integrated in a compact form at both the BSs and the UTs and leveraged to synthesize highly directional beams leading to large array gains. Energy-efficient mmWave communications comprise the following aspects. **1) Energy-aware hybrid transceiver architectures:** For mmWave frequencies, the conventional fully digital architecture employed for micro-wave frequencies, where each antenna is connected to a dedicated RF chain, is unsustainable. 
In particular, an excessive power consumption arises from the processing of massive parallel gigasamples-per-second data streams, leading to an energy-inefficient and expensive system. Thus, a direct approach to reduce the power consumption in practice is to adopt analog beamforming. Specifically, one RF chain is connected to multiple antennas and the signal phase of each antenna is controlled by a network of analog phase shifters (PSs). However, pure analog beamforming with a single beamformer only supports single-user and single-data-stream transmission, which does not fully exploit the potential multiplexing gains provided by multiple antennas. As such, hybrid architectures have been proposed as energy-efficient alternatives for mmWave communications [@han2015large]. Specifically, analog beamforming applies complex coefficients to manipulate the RF signals by means of controlling phase shifters and/or variable gain amplifiers and aims to compensate for the large path loss in the mmWave bands, while digital beamforming is done in the form of digital precoding that multiplies a particular coefficient with the modulated baseband signal per RF chain to optimize capacity using various MIMO techniques. Basically, two hybrid structures have been developed: - Fully-connected architecture: each RF chain is connected to all antennas. - Partially-connected architecture: each RF chain is connected only to a subset of antennas. From the energy-aware hardware design perspective, the fully-connected architecture requires thousands of PSs and introduces additional energy consumption, such as the high power required to drive the large phased array and the insertion loss of the PSs. In contrast, for the partially-connected architecture, the number of PSs needed decreases by a factor equal to the number of RF chains and all signal processing is conducted at the subarray level. 
Hence, the system complexity as well as the circuit power consumption are significantly reduced, although at the expense of less flexibility in utilizing the spatial degrees of freedom. Nevertheless, for these structures, the hybrid precoding and combining schemes, the number of RF chains and antennas, and the transmit power can be separately or jointly optimized for customizing the system performance depending on the needs and requirements. **2) Low resolution design:** Besides hybrid structures, employing low resolution analog-to-digital converters (ADCs) at the receivers is an alternative approach towards energy-efficient designs. The theoretical motivation is that the power dissipation of an ADC scales linearly with the sampling rate and exponentially with the number of bits per sample. Furthermore, the data interface circuits connecting the digital components to the ADCs are usually power hungry and highly depend on the adopted resolution. Thus, the power consumption of high speed and high resolution ADCs becomes a critical concern and system performance bottleneck in mmWave systems employing a large number of antennas and very wide bandwidth. This motivates the configuration of low resolution ADCs in practical systems, especially for battery-constrained handheld devices. Yet, the characterization of the channel capacity taking into account low-resolution ADCs is in its infancy for mmWave communications and requires extensive research efforts. In addition, deploying a mix of high-resolution and low-resolution ADCs, i.e., mixed ADCs, is also a promising direction for power savings. LTE-U ----- LTE-U further increases the LTE capacity by enabling the LTE air interface to operate in unlicensed spectrum. The unlicensed spectrum available in the 5 GHz band/WiFi band ($\geq 400$ MHz bandwidth) provides LTE-U with additional spectrum resources and has been launched in 3GPP Rel-13. 
**Harmonious Coexistence:** The fundamental challenge of LTE-U is the coexistence of the LTE system and the incumbent unlicensed systems, such as WiFi systems. LTE employs a scheduling-based channel access mechanism, where multiple users can be served simultaneously by assigning them different resource blocks. In contrast, WiFi adopts contention-based channel access with a random backoff mechanism, where users are allowed to only access channels that are sensed as idle. Multi-user transmission with centralized scheduling enables LTE to make better use of the unlicensed band and also to achieve a higher EE than WiFi, since the channel sensing and backoff time in WiFi lead to a waste of spectrum resources. However, if left unconstrained, LTE-U transmission may generate continuous interference to WiFi systems such that the channel is detected as busy most of the time [@zhang2015lte]. This will lead to unceasing backoff times for the WiFi links and incur high energy consumption and low EE for the WiFi users even though no WiFi data is transmitted. Therefore, intelligent modifications to the resource management in the unlicensed band become critical for harmonious coexistence. So far, two coexistence methods have been proposed in contributions to 3GPP: - Listen before talk (LBT): LTE-U devices are required to verify whether the channel is occupied by WiFi systems before transmitting. - Duty cycling: The LTE signal is periodically turned on and off, occupying and vacating the channel for periods of time without verifying the channel occupancy before transmitting. Clearly, contention-based WiFi systems inherently follow the LBT protocol, which is deemed to facilitate a fair coexistence. However, an increasing number of contending devices will lead to high transmission collision probabilities and thereby limit the system performance. In light of this, the duty cycling protocol seems to enable a more efficient utilization of the unlicensed band. 
However, duty cycling assumes that the carriers are in charge of the on-and-off scheduling, which conflicts with the common-property nature of the unlicensed spectrum. This is a drawback, as the carriers are under no real obligation to provide a decent time window for Wi-Fi. Although there is no globally accepted coexistence protocol yet, one popular consensus is that “LTE-U is often a better neighbor to Wi-Fi than Wi-Fi itself". Green 5G Technologies: Shortening the Tx-Rx Distance ================================================ Both UDNs and D2D communications boost the SE via shortening the distances between Txs and Rxs. Short range communication has the dual benefits of providing high-quality links and a high spatial reuse factor compared to current wireless networks, i.e., increasing $K$ and decreasing $d$ in (\[SE\]) and (\[EE\]). Ultra-Dense Networks -------------------- UDNs are obtained by extensively deploying diverse types of BSs in hot-spot areas and can be regarded as a massive version of heterogeneous networks [@samarakoon2016ultra]. A general guideline for realizing green UDNs is that reducing the cell size is undoubtedly the way towards high EE, but the positive effect of increasing the BS density saturates when the circuit power caused by the larger amount of hardware infrastructure dominates over the transmit power. Energy-efficient UDNs involve the following aspects. **1) User-centric design:** One of the green networking design principles of UDNs in 5G is the *user-centric* concept, where signaling and data as well as uplink and downlink transmissions are decoupled [@chih2014toward]. A well-known architecture for the decoupling of signaling and data is the *hyper-cellular structure*, where macrocells provide an umbrella coverage and microcells, utilizing e.g. mmWave BSs, aim at fulfilling the high capacity demands. 
Thereby, the significant signaling overheads caused by mobile devices roaming among small-size cells are reduced, which decreases the associated energy consumption at both transceivers substantially. Furthermore, the macrocell BSs, which only need to provide reliable coverage, can be replaced by more energy-efficient types of BSs rather than the conventional energy-consuming ones. In addition, the separation of signaling and data also eases the integration of other radio access technologies (RATs), such as WiFi and future mmWave RATs, which may help to realize further potential EE gains. Meanwhile, decoupling downlink and uplink enables more flexible user association schemes, which also leads to substantial energy savings for both BSs and UTs. For example, consider two neighboring BSs, where BS 1 is heavily loaded with limited available spectrum left in the downlink and BS 2 in the uplink; a UT can then dynamically connect to BS 1 in the uplink and to BS 2 in the downlink. As such, both uplink and downlink transmit power consumption can be reduced, which is significant for 5G applications with ultra-high data rate requirements. **2) BS on/off switching:** Separating signaling and data also enables efficient discontinuous transmission/reception, i.e., BS sleeping, to save energy via exploiting the dynamics of the wireless traffic load across time and geographical locations [@zhang2015many]. It has been shown that today 80% of BSs are quite lightly loaded for 80% of the time but still consume almost their peak energy due to elements such as cooling and power amplifying circuits. Therefore, BS sleeping is deemed to be an effective mechanism for substantial energy savings, especially for UDNs with highly densified BSs [@cai2016green]. 
Specifically, the data BSs (DBSs) can be densely deployed in order to satisfy the capacity requirement during peak traffic times, while a portion of the DBSs can be switched off or put into sleep mode when the traffic load is low in order to save energy.

**3) Interference management:** In general, cellular networks are exposed to two major sources of interference, namely, intra-cell interference and inter-cell interference. The former is not a significant issue in today’s cellular networks due to the use of orthogonal frequency-division multiple access (OFDMA) technology and BS-controlled scheduling. However, the latter is a critical concern for UDNs due to the high frequency reuse factor in multi-tier and heterogeneous networks. For instance, because of interference, increases of the transmit powers of two neighboring BSs will cancel each other out without improving the system throughput, which leads to a low system EE. In addition, femtocell BSs may create “dead zones” around them in the downlink, especially for cell-edge users associated with other BSs. Therefore, efficient interference management schemes such as power control, resource partitioning and scheduling, cooperative transmission, and interference alignment are needed for the successful deployment of energy-efficient UDNs. Although completely eliminating interference is overly optimistic for practical systems, it is expected that removing the two or three strongest interferers still brings an order-wise network EE improvement.

D2D Communications
------------------

D2D communications [@feng2013device] enable densified local reuse of spectrum and can be regarded as a special case of ultra-dense networks, with the smallest cell consisting of two devices as the transmitter and the receiver. In light of this, techniques that are used in UDNs may be applied to D2D scenarios. Energy-efficient D2D communications involve the following aspects.
**1) Mode selection and power control:** In D2D communication, there are two modes of communication: the cellular mode and the D2D mode. In the cellular mode, UTs are treated as regular cellular devices and communicate via BSs. In contrast, in the D2D mode, UTs communicate directly with each other without going through BSs, by reusing spectrum that has already been assigned to a cellular user (underlay communication) or has not been assigned to a cellular user (overlay communication). Underlay D2D communication generates co-channel interference, and may switch to overlay D2D communication when the generated co-channel interference is strong. It has been shown that underlay D2D communication is preferable for EE-oriented designs, while overlay D2D communication tends to be more efficient for SE-oriented designs. This is mainly due to the interference mitigation characteristics of EE-oriented designs via limiting the transmit power.

**2) Active user cooperation:** With the unprecedented growth of the number of mobile devices and the data traffic, another benefit of D2D communication is the possibility of active user cooperation, which facilitates energy savings in 5G networks, especially with regard to extending the battery lifetime of handheld devices. In particular, D2D devices can

- act as mobile relays for assisting cellular transmissions or other pairs of D2D transmissions. For example, for the uplink transmission of cell-edge users, the channel conditions are generally severely degraded, and direct uplink transmissions to the BS incur exceedingly high energy consumption, which would heavily affect the user experience and satisfaction. With proper devices as relays, significant energy can be saved, and the concept can be further extended to multi-hop, two-way, and multiple-relay scenarios.

- act as cluster heads for multicast transmission, e.g. for synchronous video streaming.
It is known that the data rate of multicast transmission is limited by the worst channel among the targeted members. With D2D functionality, however, any member, e.g. the member with the best channel, can be selected as the cluster head and further multicast its received data to the other members, which achieves multiuser spatial diversity. Alternatively, the content may be divided into multiple chunks, and each member that receives a subset of them can share its data with the others.

- act as local caching devices for content exchange, e.g. for asynchronous video streaming. Wireless caching is appealing since the data storage at the devices is often under-utilized and can be leveraged for wireless caching. It provides a way to exploit the inherent content reuse characteristic of 5G networks while coping with the asynchronism of the demands. Device storage is probably the cheapest and most rapidly growing network resource that, so far, has been left almost untapped in incumbent wireless networks. Taking into account content popularity, deciding what content to cache plays an important role in alleviating the backhaul burden as well as reducing power consumption.

Although active user cooperation facilitates the efficient use of spectrum, it is still of practical interest to study how self-interested devices can be motivated to relay, share, and cache data for other devices at the cost of sacrificing their own limited energy. Rewarding and pricing schemes from economic theory may be leveraged to design efficient D2D cooperation protocols and mechanisms [@wu2016energy].
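The energy argument for D2D relaying above can be illustrated with a toy calculation. Under a simple power-law path-loss assumption (the exponent, distances, and reference power below are illustrative assumptions, not values from this article), splitting a cell-edge link into two half-length hops via a midpoint relay cuts the total radiated power:

```python
# Toy model: transmit power needed to sustain a fixed rate scales roughly
# as d**alpha, with path-loss exponent alpha. Relaying over two hops of
# length d/2 then reduces the radiated power versus one direct hop.
# All numbers are illustrative assumptions, not values from the article.

def tx_power(d, alpha=3.5, p0=1.0):
    """Transmit power required to sustain a target rate over distance d."""
    return p0 * d ** alpha

d = 200.0                       # cell-edge user to BS, metres (assumed)
direct = tx_power(d)            # single direct uplink hop
relayed = 2 * tx_power(d / 2)   # two hops via a midpoint D2D relay

saving = 1 - relayed / direct
print(f"radiated-power saving via midpoint relay: {saving:.1%}")
```

With an exponent of 3.5 the saving exceeds 80%, which is why relaying is attractive precisely for cell-edge users with degraded channels; a full evaluation would also have to count the relay's circuit power and the incentives discussed above.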
Green 5G Technologies: Enhancing spatial degrees of freedom via M-MIMO
======================================================================

M-MIMO is expected to provide at least an order of magnitude improvement in multiplexing gain and array gain by deploying a large number of antennas at the BSs while serving a large number of users on the same time-frequency resource, i.e., by increasing $N$ in (\[SE\]) and (\[EE\]). Energy-efficient M-MIMO involves the following aspects.

**1) How many antennas are needed for green M-MIMO systems?** For an M-MIMO system with $M$ antennas equipped at the BS and $K$ single-antenna users, it has been shown in [@Hien2013] that each user can scale down its uplink transmit power proportionally to $1/M$ and $1/\sqrt{M}$ under perfect and imperfect channel state information (CSI), respectively, while achieving the same performance as a corresponding single-input single-output (SISO) system. However, only reducing the transmit power consumption at the users is not sufficient for improving the system EE, since the overall power consumption also includes the circuit power consumption, which increases linearly with the number of hardware components. Basically, a general guideline towards determining the number of antennas needed for achieving green M-MIMO is: *when the transmit power largely dominates the overall power consumption, deploying more antennas yields a higher EE; when the circuit power largely dominates the overall power consumption, deploying even more antennas is no longer energy efficient.* This is due to the fundamental interplay between the achievable throughput gain and the additional circuit power consumed as a result of deploying more antennas. In light of this, a realistic power consumption model is established in [@bjornson2015optimal], where it was shown that deploying $100$-$200$ antennas to serve a relatively large number of terminals is the EE-optimal solution for today’s circuit technology.
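A minimal numerical sketch of this interplay, assuming an illustrative logarithmic throughput model and a linear circuit-power model (every parameter value below is an assumption made for illustration; the realistic model of [@bjornson2015optimal] is far more detailed and yields the 100-200 antenna figure quoted above):

```python
import math

def energy_efficiency(M, K=10, snr=1.0, p_circ=0.1, p_tx=1.0, p_fixed=5.0):
    """EE = sum throughput / total consumed power for an M-antenna BS.
    Throughput grows only logarithmically with the array gain M, while
    circuit power grows linearly with M, so EE peaks at a finite M."""
    rate = K * math.log2(1 + M * snr)       # bit/s/Hz served to K users
    power = p_tx + p_circ * M + p_fixed     # transmit + circuit + fixed
    return rate / power

# Sweep the antenna count and pick the EE-optimal value.
best_M = max(range(1, 1001), key=energy_efficiency)
print("EE-optimal antenna count (toy model):", best_M)
```

The sweep finds an interior optimum well below the upper end of the range, reproducing the qualitative guideline: beyond the optimum, each extra antenna costs more circuit power than its marginal throughput gain is worth.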
**2) Signal processing and green hardware implementation:** Centralized digital signal processing enables large savings in both hardware and power consumption. The favourable propagation conditions arising from deploying massive antennas significantly simplify the precoding and detection algorithms at the BS. Linear signal processing techniques, such as maximum-ratio combining (MRC) and maximum-ratio transmission (MRT), can achieve near-optimal system throughput performance. Compared with existing sophisticated signal processing methods such as dirty-paper coding (DPC) and successive interference cancellation, the simplifications facilitated by M-MIMO reduce the energy dissipated in computations. Besides, a large number of antennas also permits green hardware implementation. It has been shown that M-MIMO systems require a significantly lower RF transmit power, e.g. in the milliwatt range, which results in substantial power savings in power amplifier operation. However, the mobile hardware at the UTs is still the system performance bottleneck due to more stringent requirements on power consumption and limited physical size, which needs further investigation.

**3) Pilot design:** For M-MIMO systems, the timely and accurate acquisition of massive CSI also leads to significant pilot power consumption, since the accuracy of the channel estimation directly affects the achievable throughput. It is known that the pilot resources required for CSI acquisition are proportional to the number of transmit antennas, which makes frequency-division duplexing (FDD) unaffordable for practical implementation of M-MIMO systems, so that novel schemes, such as pilot beamforming and semi-orthogonal pilot design, have to be exploited. Current research efforts mainly focus on time-division duplexing (TDD) systems, where users first send uplink pilots and then the BSs estimate the required CSI from these pilots for downlink data transmission.
Furthermore, for multi-cell scenarios, pilot contamination caused by reusing the same pilot resources in adjacent cells also affects the EE and SE significantly. Therefore, how to balance the use of the time-frequency resource between pilot training and data transmission in both uplink and downlink, as well as how to design low-complexity pilot contamination mitigation schemes, are important challenges for achieving high EE in M-MIMO systems.

Enabler of Sustainable 5G Communications: Energy Harvesting
===========================================================

Besides the above techniques for improving the EE, energy harvesting technologies drawing energy from renewable resources or RF signals are important enablers of sustainable green 5G wireless networks. In Table \[table2\], energy harvesting from renewable resources and RF energy harvesting are compared from different perspectives.

Energy Harvesting from Renewable Resources
------------------------------------------

Renewable resource energy harvesting transforms natural energy sources such as solar, wind, etc. into electrical energy, and has been recognized as a promising green technology to alleviate the energy crisis and to reduce energy costs. The following research directions for communication system design based on renewable energy have emerged.

| Technology | Energy Source Controllability | Efficiency | Ultimate Goal | 5G Applications |
|---|---|---|---|---|
| Natural Resource Energy Harvesting | Non-controllable | – | Reduce oil dependency | Hybrid BSs |
| RF Energy Harvesting | Controllable | Low | Sustainable devices | IoT |

**1) Distinctive energy-data causality and battery capacity constraints.** A fundamental feature that distinguishes energy harvesting communication systems from non-energy harvesting communication systems is the existence of an energy causality constraint [@gunduz2014designing].
This constraint imposes a restriction on energy utilization, i.e., energy cannot be utilized before it is harvested, which is in essence attributed to the intermittent availability of renewable resources. In order to fully embrace the potential benefits of energy harvesting, carefully incorporating this constraint into the design of 5G networks and gaining a deep understanding of its impact on 5G networks are vital. Two other relevant constraints are the data causality constraint and the battery capacity constraint, which depend on the data traffic variations and the energy storage limitations, respectively. Specifically, the data causality constraint accounts for the fact that a data packet cannot be delivered before it has arrived at the transmitter. The battery capacity constraint models the limited capacity of the battery for storing harvested energy. Thus, conventional research studies assuming infinitely backlogged data buffers and unlimited energy storage can be regarded as special cases of the above constraints. However, in upcoming 5G networks, these assumptions are no longer valid due to the extremely diverse traffic and device types. Thus, new QoS concerns for energy harvesting communication naturally arise in 5G systems.

**2) Offline versus online approaches:** Depending on whether the energy arrivals, CSI, data traffic, etc., are predictable or not, resource allocation algorithms for communication systems with energy harvesting can be categorized into *offline* and *online* policies. Specifically, based on non-causal knowledge, optimal offline solutions can be developed by exploiting tools from optimization theory. Although this approach is impractical due to the highly dynamic network variations in 5G communication networks, offline solutions provide performance upper bounds for online algorithms. In contrast, based on causal information, online schemes can be developed and further categorized into two branches.
The first branch assumes that statistical knowledge is available at the transmitter, in which case the optimal solutions can be numerically obtained by exploiting dynamic programming theory. Yet, such an approach lacks analytical insight and cannot handle large-scale systems due to the “curse of dimensionality". Alternatively, suboptimal online algorithms are desirable in practice, and these can be designed based on the insights gained from the optimal offline results. However, in some 5G application scenarios, statistical characteristics may change over time, or such information may not be available before deployment. This leads to a more aggressive branch that does not require any a priori information at the transmitter. The optimal solutions in this branch rely on learning theory, where the transmitter learns the optimal transmission policy over time by iteratively performing actions and observing their immediate rewards.

**3) Hybrid energy supply solutions and energy cooperation.** Due to uncertain and dynamically changing environmental conditions, the harvested energy is generally intermittent by nature. This poses new challenges for powering 5G systems that require stable and satisfactory QoS. A pragmatic approach is to design a hybrid energy harvesting system, which uses different energy sources, e.g. solar, wind, diesel generators, or even the power grid, in a complementary manner to provide uninterrupted service. As an example, for 5G communication scenarios such as UDNs with a massive number of BSs, the integration of energy harvesting not only provides location diversity to compensate for the additional circuit power consumption caused by deploying more BSs, but also reduces the power drawn from the grid significantly. In addition, densified BSs employing energy harvesting also facilitate possible energy cooperation between BSs, which has huge potential for realizing green 5G networks.
For example, the BSs located in “good” spots will harvest more energy and can correspondingly serve more users, or transfer their surplus energy through power lines to BSs that harvest less energy, thereby realizing energy cooperation.

RF Energy Harvesting
--------------------

RF energy harvesting, also known as wireless power transfer (WPT), allows receivers to harvest energy from received RF signals [@lu2015wireless; @bi2015wireless]. Compared to opportunistic energy harvesting from renewable resources, WPT can be fully controlled via wireless communication technologies, and has thereby been regarded as a promising solution for powering the massive number of small UTs expected in 5G applications such as the IoT. The following research directions for RF energy harvesting have been identified.

**1) Typical system architectures and extensions:** In the literature, two canonical lines of research can be identified for implementing WPT. The first line focuses on characterizing the fundamental tradeoff between the achievable rate and the harvested energy by exploring the dual use of the same signal. This concept is known as simultaneous wireless information and power transfer (SWIPT). Specifically, wireless devices split the received signal sent by a BS into two parts, one for information decoding and the other for energy harvesting. Thus, a fundamental issue in this line of work is to optimize the power splitting ratio at the receiver side so as to customize the achievable throughput and the harvested energy. The other line of research separates energy harvesting and information transmission in time, i.e., harvest and then transmit, leading to wireless powered communication networks (WPCNs). Specifically, wireless devices first harvest energy from signals sent by a power station and then perform wireless information transmission (WIT) to the target BS/receiver using the harvested energy.
Thus, the WPT and WIT durations have to be carefully designed, since more time for WPT leads to more harvested energy for use in WIT but leaves less time for WIT. *These two canonical architectures as well as their extensions serve as a foundation for wireless-powered UDNs, M-MIMO, mmWave networks, etc., which widely utilize RF energy flows among the different entities of 5G networks.*

**2) Improving the efficiency of WPT:** Despite the advantages of WPT, the system performance is fundamentally constrained by the low energy conversion efficiency and the severe path loss during WPT [@krikidis2014simultaneous]. Generally, the 5G techniques that are able to improve the performance of a wireless communication link can also be exploited to improve the efficiency of WPT. For example, narrow beams can be realized by employing multiple antennas at the transmitter side with optimized beamforming vectors, which is known as *energy beamforming* [@bi2015wireless] and fits well with M-MIMO and mmWave applications. In addition, short-range-communication-based 5G techniques such as D2D communications and UDNs are capable of improving the efficiency of WPT by reducing the energy transfer distance. Besides improving the amount of harvested energy, the circuit power consumed by information decoding is also an important design issue, since it reduces the net harvested energy that can be stored in the battery for future use. In particular, the active mixers used in conventional information receivers for RF-to-baseband conversion are substantially power-consuming. This motivates additional efforts on designing novel receiver architectures that consume less power by avoiding the use of active devices.

**3) Rethinking interference in wireless networks with WPT:** Another advantage of WPT is that not only dedicated signals but also co-channel interference signals can be exploited for energy harvesting.
This fundamentally shifts our traditional viewpoint of “harmful” co-channel interference, which now becomes a potential source of energy. In this regard, UDNs and D2D underlay communications, where the spectrum is heavily reused across the whole network, provide plentiful opportunities for WPT to extract the potential gain from co-channel interference. In practice, WPT-enabled devices can harvest energy when the co-channel interference is strong and decode information when the co-channel interference is relatively weak. Besides, one may deliberately inject artificial interference into communication systems, which may be beneficial for the overall system performance, especially when receivers are hungry for energy to support their normal operation. As such, this paradigm shift provides new degrees of freedom to optimize the network interference levels for achieving the desired tradeoff between information transmission and energy transfer.

Conclusions
===========

In this article, we have surveyed the advanced technologies which are expected to enable sustainable green 5G networks. A holistic design overview can be found in Fig. \[summary\]. Energy harvesting underpins the green expectations towards 5G networks, while promising spectrum-efficient 5G technologies can be tailored to realize energy-efficient wireless networks. Facing the highly diversified communication scenarios of the future, user traffic, channel, power consumption, and even content popularity models need to be jointly taken into account for improving the system EE. Thereby, it is evident that the diverse applications and heterogeneous user requirements of sustainable green 5G networks cannot be satisfied with any particular radio access technology. Instead, an ecosystem of interoperable technologies is needed, such that the respective technological advantages of the different components can be exploited together, pushing towards the ultimate performance limits.
This, however, also poses new challenges for the system designers. ![image](design_guidance){width="100.00000%"}
---
abstract: |
    In this paper the kinematical correlations from [*phase conjugated optics*]{} (equivalently, the [*crossing symmetric spontaneous parametric down conversion (SPDC) phenomena*]{}) in nonlinear crystals are used for the description of a new kind of optical device called [*SPDC-quantum mirrors*]{}. Then, some important laws of the [*plane SPDC-quantum mirrors*]{} combined with usual mirrors or lenses are proved using only geometric optics concepts. In particular, these results allow us to obtain a new interpretation of the recent experiments on [*two-photon geometric optics*]{}.

    PACS:  42. 50. Tv ; 42. 50. Ar ; 42. 50. Kb ; 03. 65. Bz.
author:
- 'M. L. D. Ion and D. B. Ion'
title: '[**PLANE SPDC-QUANTUM MIRROR**]{}'
---

[**Introduction**]{}
====================

The [*spontaneous parametric down conversion*]{} (SPDC) is a nonlinear optical process \[1\] in which a laser pump (p) beam incident on a nonlinear crystal leads to the emission of a correlated pair of photons called signal (s) and idler (i). If the [*S-matrix crossing symmetry*]{} \[2\] of the electromagnetic interaction in the [*spontaneous parametric down conversion*]{} (SPDC) crystals is taken into account, then the existence of the [*direct SPDC process*]{} $$\label{1} p\rightarrow s+i$$ will imply the existence of the following [*crossing symmetric processes*]{} \[3\] $$\label{2} p+\stackrel{\_}{s}\rightarrow i$$ $$\label{3} p+\stackrel{\_}{i}\rightarrow s$$ as real processes which can be described by the same [*transition amplitude*]{}. Here, by $\stackrel{\_}{s}$ and $\stackrel{\_}{i}$ we denoted the [*time reversed photons*]{} (or antiphotons in the sense introduced in Ref. \[4\]) relative to the original photons $s$ and $i$, respectively.
In fact the SPDC effects (1)-(3) can be identified as being directly connected with the $\chi ^{(2)}$ [*second-order nonlinear effects*]{} called in general [*three wave mixing*]{} (see Ref. \[5\]). So, the process (1) is just the [*inverse of second-harmonic generation*]{}, while the effects (2)-(3) can be interpreted just as emission of [*optical phase conjugated replicas*]{} in the presence of the pump laser via [*three wave mixing*]{}.

In this paper a new kind of geometric optics called [*quantum SPDC-geometric optics*]{} is systematically developed by using the [*kinematical correlations of the pump, signal and idler photons*]{} from the SPDC processes. Here we discuss only the plane quantum mirror. Other kinds of SPDC-quantum mirrors, such as spherical SPDC quantum mirrors, parabolic quantum mirrors, etc., will be discussed in a future paper.

[**Quantum kinematical correlations**]{}
========================================

In the SPDC processes (1) the energy and momentum of the photons are conserved: $$\label{4} \omega _p=\omega _s+\omega _i,\smallskip\ {\bf k_p}={\bf k_s}+{\bf k_i}$$ Moreover, if the crossing SPDC-processes (2)-(3) are interpreted just as emission of [*optical phase conjugated replicas*]{} in the presence of the input pump laser, then Eqs. (4) can be identified as being the [*phase matching conditions*]{} in the three wave mixing (see again Ref. \[5\]). Indeed, this scheme exploits the second order optical nonlinearity in a crystal lacking inversion symmetry. In such crystals, the presence of the input pump (${\bf E_p}$) and of the signal (${\bf E_s}$) fields induces in the medium a [*nonlinear optical polarization*]{} (see Eqs.
(26)-(27) in Pepper and Yariv Ref. \[5\]) which is: $P_i^{NL}=\chi _{ijk}^{(2)}E_{pj}(\omega _p)E_{sk}^{*}(\omega _s)\exp \{i[(\omega _p-\omega _s)t-({\bf k_p-k_s)\cdot r}]\}+c.c.,$ where $\chi _{ijk}^{(2)}$ are the second-order susceptibility tensor components of the crystal. Consequently, such a polarization, acting as a [*source*]{} in the [*wave equation*]{}, will radiate a [*new wave*]{} ${\bf E_i}$ at frequency $\omega _i=\omega _p-\omega _s,$ with an amplitude proportional to ${\bf E_s^{*}}(\omega _s),$ i.e., to the [*complex conjugate*]{} of the spatial amplitude of the low-frequency probe wave at $\omega _s.$ Then, it is easy to show that a necessary condition for a [*phase-coherent*]{} cumulative buildup of [*conjugate-field radiation*]{} at $\omega _i=\omega _p-\omega _s$ is that the wave vector ${\bf k}_i$ at this new frequency must be equal to ${\bf k}_i={\bf k}_p-{\bf k}_s,$ i.e., we have the phase matching conditions (4). Hence, the [*optical phase conjugation by three-wave mixing*]{} helps us to obtain a complete proof of the existence of the crossing reactions (2)-(3) as real processes which take place in the nonlinear crystals when the [*energy-momentum*]{} (or [*phase matching*]{}) [*conditions*]{} (4) are fulfilled.

Now, it is important to introduce the [*momentum projections*]{}, parallel and orthogonal to the pump momentum, and to write the momentum conservation law from (4) as follows $$\label{5} k_p=k_s\cos \theta _{ps}+k_i\cos \theta _{pi}$$ $$\label{6} k_s\sin \theta _{ps}=k_i\sin \theta _{pi}$$ where the angles $\theta _{pj}$, $j=s,i$, are the angles (in crystal) between the momenta of the [*pump*]{} (p) $\equiv$ ($\omega _p, {\bf k}_p, {\bf e}_p, \mu _p$), [*signal*]{} (s) $\equiv$ ($\omega _s, {\bf k}_s, {\bf e}_s, \mu _s$) and [*idler*]{} (i) $\equiv$ ($\omega _i, {\bf k}_i, {\bf e}_i, \mu _i$) [*photons*]{}. By ${\bf e}_j$ and $\mu _j$, $j\equiv
p,s,i,$ we denoted the photon polarizations and photon helicities, respectively. Now, let $\beta _{ps}$ and $\beta _{pi}$ be the corresponding exit angles of the signal and idler photons from the crystal. Then from (6), in conjunction with the Snellius law, we have $$\label{7} \sin \beta _{ps}=n_s\sin \theta _{ps},\smallskip\ \sin \beta _{pi}=n_i\sin \theta _{pi}$$ $$\label{8} \omega _s\sin \beta _{ps}=\omega _i\sin \beta _{pi}$$

**Quantum mirrors via SPDC phenomena**
======================================

(D.1) [**Quantum Mirror**]{} (QM). By definition a [*quantum mirror (QM) is a combination of standard devices*]{} (e.g., usual lenses, usual mirrors, lasers, etc.) with a nonlinear crystal, [*by which one involves the use of a variety of quantum phenomena to exactly transform not only the direction of propagation of a light beam but also its polarization characteristics.*]{}

(D.2) [**SPDC**]{}-[**Quantum Mirror**]{} (SPDC-QM). A [*quantum mirror*]{} is called an SPDC-QM if it is based on the quantum SPDC phenomena (1)-(3) in order to transform [*signal photons*]{} characterized by ($\omega _s, {\bf k_s}, {\bf e}_s, \mu _s$) into [*idler photons*]{} with ($\omega _p-\omega _s, {\bf k_p-k_s}, {\bf e}_s^{*}, -\mu _s$) $\equiv$ ($\omega _i, {\bf k_i}, {\bf e}_i, \mu _i$). Now, since the crossing symmetric SPDC effects (2)-(3) can be interpreted just as emission of [*optical phase conjugated replicas*]{} in the presence of the pump laser via [*three wave mixing*]{}, the high quality of the SPDC-QM will be given by the following peculiar characteristics: (i) [*Coherence*]{}: the SPDC-QM [*preserves high coherence*]{} between s-photons and i-photons; (ii) [*Distortion undoing*]{}: the SPDC-QM [*corrects all the aberrations*]{} which occur in the signal or idler beam path; (iii) [*Amplification*]{}: an SPDC-QM [*amplifies the conjugated wave*]{} if some conditions are fulfilled.

[*3.1.
Plane SPDC-quantum mirrors.*]{} The quantum mirrors can be [*plane quantum mirrors*]{} (P-QM) (see Fig. 1), [*spherical quantum mirrors (S-QM), hyperbolic quantum mirrors (H-QM), parabolic quantum mirrors (PB-QM), etc.*]{}, according to the character of the incoming laser wave fronts ([*plane waves, spherical waves, etc.*]{}). Here we discuss only the [*plane SPDC-quantum mirror*]{}. Other kinds of [*SPDC-quantum mirrors*]{}, such as [*spherical SPDC quantum mirrors, parabolic quantum mirrors*]{}, etc., will be discussed in a future paper.

In order to avoid many complications, in the following we will work only in the [*thin crystal approximation*]{}. Moreover, we do not consider here the so-called optical aberrations.

(L.1) Law of the [*thin plane SPDC-quantum mirror*]{}: Let BBO be a SPDC crystal illuminated uniformly by a high quality laser pump. Let Z$_s$ and Z$_i$ be the distances shown in Fig. 1 (from the [*object point*]{} P to the crystal (point A) and from the crystal (point A) to the [*image point*]{} I). Then, the system behaves as a [*plane mirror*]{}, but satisfying the following important laws: $$\label{9} \frac{Z_i}{Z_s}=\frac{\omega _i}{\omega _s}=\frac{\sin \beta _{ps}}{\sin \beta _{pi}}=\frac{n_s\sin \theta _{ps}}{n_i\sin \theta _{pi}},\smallskip\ M=\frac{\omega _sZ_i}{\omega _iZ_s}=1$$ where M is the [*linear magnification*]{} of the plane SPDC-quantum mirror.

[*3.2. Plane SPDC-QM combined with thin lens.*]{} The basic optical geometric configuration of a plane SPDC-QM combined with a thin lens is presented in Figs. 2a and 2b. The system in this case behaves as in usual geometric optics, but with some modifications in the non-degenerate case introduced by the presence of the [*plane SPDC-quantum mirror*]{}. The remarkable law in this case is as follows.
(L.2) [*Law of the thin lens combined with a plane SPDC-QM*]{}: The distances S (lens-object), S’ (lens-crystal-image plane), D$_{CI}$ (crystal-image plane) and f (focal distance of the lens) satisfy the following thin lens equation $$\label{10} \frac 1S+\frac 1{S^{\prime }+(\frac{\omega _s}{\omega _i}-1)\:D_{CI}}=\frac 1f$$ The SPDC-QM system in this case has the magnification M given by $$\label{11} M=\frac{S^{\prime }+(\frac{\omega _s}{\omega _i}-1)\:D_{CI}}S=M_0+(\frac{\omega _s}{\omega _i}-1)\frac{D_{CI}}S$$ In the degenerate case $(\omega _s=\omega _i=\omega _p/2)$ we obtain the usual [*Gauss law for the thin lens*]{} with the magnification $M_0=S^{\prime }/S$.

[*Proof:*]{} The proof of the predictions (10)-(11) can be obtained by using the basic geometric optical configuration presented in Fig. 2a. Hence, the image of the object P in the thin lens placed between the crystal and the object is located according to the Gauss law $$\label{12} \frac 1S+\frac 1{S_1}=\frac 1f$$ where $S_1$ is the distance from the lens to the image I$_1$. Now the final image I of the image I$_1$ in the plane SPDC-QM is located according to the law (9). Consequently, if d is the lens-crystal distance, then we have $$\label{13} S_1=S^{^{\prime }}+(Z_s-Z_i)=S^{^{\prime }}+(\frac{\omega _s}{\omega _i}-1)D_{CI}$$ since S$_1=d+Z_s$, S’$=d+Z_i$, and D$_{CI}$ is the crystal-image distance. A proof of the magnification factor can be obtained on the basis of the geometric optical configuration from Fig. 2b. Hence, the magnification factor is $$\label{14} M=\frac{y_I}{y_O}=\frac{y_I}{y_I^{\prime }}\cdot \frac{y_I^{\prime }}{y_O}=\frac{y_I^{\prime }}{y_O}$$ since the plane SPDC-QM has the magnification $\frac{y_I}{y_I^{\prime }}=1$. Obviously, from $\Delta PP^{\prime }V\sim \Delta I_1I_1^{\prime }V$, we get $y_I^{\prime }/y_O=S_1/S$ and then with (13) we obtain the magnification (11).
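The proof above can be checked numerically. The sketch below (with illustrative distances; Python is used only as a calculator) composes the ordinary Gauss law (12) with the quantum-mirror law (9), exactly as in the proof, and verifies that the resulting object-image pair satisfies the combined law (10):

```python
# Numerical check of the thin-lens + plane SPDC-QM law (10), obtained by
# composing the Gauss law (12) with the quantum-mirror law (9).
# All distances below are illustrative numbers, in arbitrary length units.

def image_distance_S1(S, f):
    """Gauss law (12): 1/S + 1/S1 = 1/f, solved for S1."""
    return 1.0 / (1.0 / f - 1.0 / S)

def check_law(S, f, d, ws_over_wi):
    """Return the residual |1/S + 1/(S' + (ws/wi - 1) D_CI) - 1/f|."""
    S1 = image_distance_S1(S, f)   # lens -> intermediate image I1
    Zs = S1 - d                    # crystal -> I1, since S1 = d + Zs
    Zi = Zs / ws_over_wi           # QM law (9): Zi/Zs = wi/ws
    Sp = d + Zi                    # S' = d + Zi
    D_ci = Zi                      # crystal-image distance
    lhs = 1.0 / S + 1.0 / (Sp + (ws_over_wi - 1.0) * D_ci)
    return abs(lhs - 1.0 / f)

print(check_law(S=30.0, f=10.0, d=5.0, ws_over_wi=1.5))  # ~0 up to rounding
```

The residual vanishes for any choice of S > f, d and frequency ratio, since the effective image distance $S' + (\omega_s/\omega_i - 1)D_{CI}$ reconstructs $S_1$ of Eq. (13) exactly.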
[*(L.3)*]{} [*Law of the thin lens + plane SPDC-QM with null crystal-lens distance*]{}: $$\label{15} \frac 1S+\frac 1{\frac{\omega _s}{\omega _i}S^{\prime }}=\frac 1f \:,\smallskip\ M=\frac{\omega _s}{\omega _i}\frac{S^{\prime }}S$$ [*Proof:*]{} Here we note that (L.3) is the particular case of (L.2) with d=0, for which we get S$_1$=Z$_s$ and S$^{\prime}$=Z$_i$. Then from (9) and (12) we obtain (15). [*3.3. Thin lens combined with a plane SPDC-QM and a classical mirror.*]{} [*(L.4)*]{} [*Law of the thin lens + plane SPDC-QM + classical mirror*]{} (see the basic geometric optical configuration presented in Fig. 3): The distances S (lens-object), S$_1^{\prime}$ (lens-crystal-first image plane I$_1$), S$_2^{\prime}$ (lens-crystal-second image plane I$_2$), D$_{CI_1}$ (crystal-first image plane), D$_{CI_2}$ (crystal-second image plane) and f (focal distance of the lens) must satisfy the following law $$\label{16} \frac 1S+\frac 1{S_1^{\prime }+(\frac{\omega _s}{\omega _i}-1) \:D_{CI_1}}=\frac 1f$$ with the magnification M$_1$ given by $$\label{17} M_1=\frac{S_1^{\prime }+(\frac{\omega _s}{\omega _i}-1)\:D_{CI_1}}S$$ and $$\label{18} \frac 1{S+2D_{OM}}+\frac 1{S_2^{\prime }+(\frac{\omega _s}{\omega _i}-1)\:D_{CI_2}}=\frac 1f$$ with the magnification M$_2$ given by $$\label{19} M_2=\frac{S_2^{\prime }+(\frac{\omega _s}{\omega _i}-1)\:D_{CI_2}}S$$ where D$_{OM}$ is the distance from the object to the classical mirror M (see Fig. 3). The [*proof of*]{} [*(L.4)*]{} is similar to that of [*(L.2)*]{} and is omitted here. **Experimental tests for the geometric SPDC-quantum optics** ============================================================ For an experimental test of [*the Gauss-like law of the thin lens combined with a plane SPDC-QM*]{} we propose an experiment based on the detailed setup presented in Fig. 4 and on the optical geometric configuration shown in Fig. 2b.
Then, we predict that the image I of the object P (illuminated by a high-quality signal laser SL with s($\omega _s$, ${\bf k_s}$, ${\bf e_s}$, $\mu _s$)) will be observed in the idler beam, i($\omega _i$, ${\bf k_i}$, ${\bf e_i}$, $\mu _i$) $\equiv$ i($\omega _p-\omega _s$, ${\bf k_p}-{\bf k_s}$, ${\bf e_s^{*}}$, $-\mu _s$), when the distances lens-object (S), lens-crystal-image plane (S$^{\prime}$), crystal-image plane (D$_{CI}$) and the focal distance f of the lens satisfy the [*thin lens+QM law (10)*]{}. Moreover, if the [*thin lens+QM law (10)*]{} is satisfied, the image I of that object P can be observed even when, instead of the signal source SL, we put a detector D$_s$. This last statement was recently clearly confirmed, in the degenerate case $\omega _s=\omega _i=\omega _p/2$, by a remarkable [*two-photon imaging experiment*]{} \[8\]. Indeed, these recent experiments, inspired by the papers of Klyshko et al. (see refs. quoted in \[9\]), demonstrated some unusual [*two-photon effects*]{} which look very strange from the classical point of view. In these experiments, an argon-ion laser is used to pump a nonlinear BBO crystal ($\beta -BaB_2O_4$) to produce pairs of [*orthogonally polarized photons*]{} (see Fig. 1 in ref. \[8\] for the detailed experimental setup). After the separation of the [*signal*]{} and [*idler*]{} beams, an aperture (mask) placed in front of one of the detectors (D$_s$) is illuminated by the [*signal beam*]{} through a convex lens. The surprising result is that an image of this aperture is observed in the coincidence counting rate by scanning the other detector (D$_i$) in the transverse plane of the idler beam, even though both detectors’ single counting rates remain constant. To explain the physics involved in their experiment, they presented an “equivalent” scheme (Fig. 3 in ref. \[8\]) of the experimental setup. By comparison of their “scheme” with our optical configuration from Fig.
2b we can identify that the observed validity of the [*two-photon*]{} [*Gaussian thin-lens equation*]{} $$\label{20} \frac 1f=\frac 1S+\frac 1{S^{\prime }}$$ as well as of the [*linear magnification*]{} $$\label{21} M_0=\frac{S^{\prime }}S=2$$ can be explained directly by our results on the two-photon geometric laws (10)-(11) [*of the thin lens combined with a plane SPDC-QM*]{} in the degenerate case $\omega _s=\omega _i=\omega _p/2$. Therefore, the general tests of the predictions (10)-(11) using the setup described in Fig. 4 are of great importance, not only in measurements in the presence of the signal laser SL (with and without coincidences between SL and the idler detector D$_i$), but also in measurements in which, instead of the laser SL, we put a signal detector D$_s$ in coincidence with D$_i$. **Conclusions** =============== In this paper the class of [*SPDC-phenomena*]{} (1) is enriched by introducing the [*crossing-symmetric*]{} [*SPDC-processes*]{} (2)-(3) satisfying the same energy-momentum conservation law (4). Consequently, the kinematical correlations (4)-(8), in conjunction with the Snellius relations (7), allow us to introduce a new kind of optical device called [*quantum mirrors*]{}. Then, some laws of the [*quantum mirrors*]{}, such as the law (9) of the [*thin plane SPDC-quantum mirror*]{}, the laws (10)-(11) [*of the thin lens combined with a plane SPDC-QM*]{}, as well as the laws (16)-(19), are proved. These results are natural steps towards a [*new geometric optics*]{} which can be constructed for the kinematically correlated SPDC-photons. In particular, the results obtained here are found to be in very good agreement with the recent results \[8\] on the [*two-photon imaging experiment*]{}. Moreover, we recall that all the results obtained in the [*two-photon ghost interference-diffraction*]{} experiment \[6\] were recently explained by using the concept of [*quantum mirrors*]{} (see Ref. \[3\]).
Finally, we note that all these results can be extended to the case of the [*spherical quantum mirrors*]{}. Such results, which are found to be in excellent agreement with the recent experimental results \[7\] on [*two-photon geometric optics*]{}, will be presented in a future publication. (This paper was published in Romanian Journal of Physics, Vol. 45, p. 15, Bucharest, 2000) [99]{} A. Yariv, [*Quantum Electronics*]{}, Wiley, New York, 1989. See e.g. A. D. Martin and T. D. Spearman, [*Elementary Particle Theory*]{}, North-Holland Publishing Co., Amsterdam, 1970. D. B. Ion and P. Constantin, [*A New Interpretation of two-photon entangled Experiments*]{}, [*NIPNE-1996 Scientific Report*]{}, National Institute for Physics and Nuclear Engineering Horia Hulubei, Bucharest, Romania, p. 139; D. B. Ion, P. Constantin and M. L. D. Ion, Rom. J. Phys. [**43**]{} (1998) 3. M. W. Evans, in [*Modern Nonlinear Optics, Vol. 2*]{}, M. W. Evans and S. Kielich (Eds.), John Wiley & Sons, Inc., 1993, p. 249. For a review see, for example: D. M. Pepper and A. Yariv, in R. A. Fischer (Ed.), [*Optical Phase Conjugation*]{}, Academic Press, Inc., 1983, p. 23; see also: H. Jagannath et al., [*Modern Nonlinear Optics, Vol. 1*]{}, M. Evans and S. Kielich (Eds.), John Wiley & Sons, Inc., 1993, p. 1. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, Phys. Rev. Lett. [**74**]{} (1995) 3600. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, Phys. Rev. [**A 52**]{} (1995) R3429. T. B. Pittman, D. V. Strekalov, D. N. Klyshko, M. H. Rubin, A. V. Sergienko, and Y. H. Shih, Phys. Rev. [**A 53**]{} (1996) 2804. A. V. Belinski and D. N. Klyshko, Sov. JETP [**78**]{} (1994) 259. ![image](Pl-QM-Fig-1.jpg) Fig. 1: The basic optical configuration of a [*plane SPDC-quantum mirror*]{}. ![image](Pl-QM-Fig-2a.jpg) Fig. 2a: The basic optical configuration for a usual lens combined with a [*plane SPDC-quantum mirror*]{}. ![image](Pl-QM-Fig-2b.jpg) Fig.
2b: The basic optical configuration for the proof of the magnification factor for a usual lens combined with a [*plane SPDC-quantum mirror*]{}. ![image](Pl-QM-Fig-3.jpg) Fig. 3: The basic optical configuration for a usual lens combined with a [*plane SPDC-quantum mirror*]{} and with a [*classical mirror*]{}. ![image](Pl-QM-Fig-4.jpg) Fig. 4: The scheme of the experimental setup for a test of the geometric optics of correlated photons. QM indicates the SPDC [*quantum mirror*]{}, PBS is a polarization beam splitter, SL is a signal laser, P is an object, L is a convergent lens, D$_{i}$ is an idler detector and CC is the coincidence circuit.
--- abstract: 'This paper argues that a combined treatment of probabilities, time and actions is essential for an appropriate logical account of the notion of probability; and, based on this intuition, describes an expressive probabilistic temporal logic for reasoning about actions with uncertain outcomes. The logic is *modal* and *higher-order*: modalities annotated by actions are used to express possibility and necessity of propositions in the next states resulting from the actions, and a higher-order function is needed to express the probability operator. The proposed logic is shown to be an adequate extension of classical mathematical probability theory, and its expressiveness is illustrated through the formalization of the Monty Hall problem.' author: - Bruno Woltzenlogel Paleo bibliography: - 'bibliography.bib' title: An Expressive Probabilistic Temporal Logic --- Higher-Order Modal Logics, Probability Theory Introduction ============ In order to reason about probabilistic knowledge, we must reason about time and actions as well. When we say, for example, that “the probability of ‘heads’ after a coin toss is 50% and that of ‘tails’ is 50%”, we implicitly assume that there is an action (in this example, tossing a coin) which can bring the world to different states in the next moment in time. The uncertainty lies in the state transition: the world may end up in a state where the coin shows heads or in a state where it shows tails. Despite the evident dependence of our informal notion of probability on the notions of action and time, the formal mathematical languages that we use to talk about probabilities rarely support mentioning action and time explicitly. Kolmogorov’s probability theory, for example, merely defines probability as the measure function in a measure space with total measure 1 [@Kolmogorov]. The task of modeling time-dependent actions and their possible outcomes in terms of events in a probabilistic space remains informal. 
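To make this concrete, the measure-space view of a single coin toss can be sketched in a few lines of Python. This is an illustrative sketch of Kolmogorov's definition only (all names are ours, not part of any formalism discussed here):

```python
# A minimal finite probability space (Omega, Sigma, Q) for one coin toss.
# Events are frozensets of outcomes; Q is additive over outcomes, so
# Kolmogorov's axioms hold by construction.

from itertools import combinations

omega = {"heads", "tails"}
weight = {"heads": 0.5, "tails": 0.5}      # probabilities of the outcomes

def powerset(s):
    """All subsets of s: for finite Omega, the largest sigma-algebra."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

sigma = powerset(omega)

def Q(event):
    """Probability measure: sum of the outcome weights in the event."""
    return sum(weight[w] for w in event)

assert Q(frozenset(omega)) == 1.0          # axiom: Q(Omega) = 1
assert all(Q(e) >= 0 for e in sigma)       # axiom: non-negativity

# Note what is *missing*: nothing in (Omega, Sigma, Q) mentions the toss
# itself, i.e. the action, or the moment at which the uncertainty is resolved.
```

The final comment is the point being made above: the action and its timing must be supplied informally, outside the measure space.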
While this informality is not problematic in the simplest situations (e.g. when we are interested in the possible outcomes of a single action, or when multiple actions are independent of each other), slightly more complex situations may already lead to confusion and difficulty. A famous example is the Monty Hall problem [@MontyHall]. Another inconvenience of dealing with probabilities just in terms of a measure space is that its set-theoretic language (where events are represented as subsets of the sample space) is rather limited. There are obvious parallels between, for instance, set intersection and conjunction or set union and disjunction, which allow us to represent *propositional* probabilistic knowledge easily (e.g. the event of a randomly picked coin showing heads *and* the same coin being made of silver can be represented as the *intersection* of the event of showing heads with the event of being made of silver). However, it is not clear how this analogy could be extended to more expressive logics with quantifiers. The main contribution of this paper, addressing the above-mentioned issues, is the development of the syntax (in Section \[sec:Syntax\]) and the semantics (in Section \[sec:Semantics\]) of an expressive probabilistic temporal logic ([**PTL**]{}) for reasoning about actions with uncertain outcomes. [**PTL**]{} is an adequate extension of classical probability theory (as demonstrated in Section \[sec:Adequacy\]), and its greater expressiveness allows us to reason explicitly about event independence (as discussed in Section \[sec:Independence\]) and to avoid typical ambiguities of natural language discourse about probabilities (as shown in Section \[sec:Disambiguation\]). This capacity of [**PTL**]{} to avoid ambiguities related to outcomes and events is one of its main conceptual novelties in comparison to related work (cf. Section \[sec:RelatedWork\]).
[**PTL**]{}’s convenience and expressive power are illustrated through the formalization of the Monty Hall problem (in Section \[sec:MontyHall\]). Syntax {#sec:Syntax} ====== The aim of [**PTL**]{}’s language is to be sufficiently expressive to capture typical probabilistic statements, conveniently similar to natural language, and yet more precise than natural language in cases when the latter is ambiguous. Intuitively, probability is an inherently higher-order function, since it takes a proposition (representing an event) as an argument. Therefore, if a probabilistic logical language is to include a probability operator in the syntactic level, it is only natural that it should be a higher-order language. Furthermore, because thinking probabilistically involves numerical computation and reasoning about states and actions, it is convenient to have a typed language, with distinct basic types for numbers, states and actions. The types used here are mostly the well-known simple types, but a list type constructor is included as well, in order to allow the representation of temporal sequences of actions and propositions. *Types* are freely generated from the set of basic types $\{ \beta, \iota, \eta, \mu \}$, the right-associative function type constructor ${\rightarrow}$ and the list type constructor ${\texttt{list}}$. $\mu$ is the type for *states*, $\beta$ is the type for *booleans*, $\iota$ is the type for *objects* and $\eta$ is the type for *real numbers*. The set of all types is denoted $T$. The type of (local) *propositions* $o$ is defined to be an abbreviation for $\mu {\rightarrow}\beta$ and the type of actions $\alpha$ is defined to be an abbreviation for $\mu {\rightarrow}{\texttt{list}}[\mu]$. The definition of $o$ ensures that the truth of a proposition depends on states. The definition of $\alpha$ follows the intuition that an action can be seen as a function that maps a state to a list of possible next states. 
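The intuition behind the action type $\alpha = \mu {\rightarrow}{\texttt{list}}[\mu]$ can be sketched directly in Python. The state representation and names below are illustrative assumptions of ours, not part of the logic:

```python
# Sketch of the action type alpha = mu -> list[mu]: an action maps a state
# to the list of its possible next states. States are modelled here as tuples
# of facts; a proposition (type o = mu -> bool) maps a state to a truth value.

def toss(state):
    """Tossing a coin can lead to a 'heads' state or a 'tails' state."""
    return [state + ("heads",), state + ("tails",)]

def heads(state):
    """Proposition: true in states whose last recorded fact is 'heads'."""
    return state[-1] == "heads"

initial = ()                            # a state where nothing has happened yet
successors = toss(initial)
print([heads(s) for s in successors])   # [True, False]
```

Chaining such functions over a list of actions is exactly what the temporal sequences of type ${\texttt{list}}[\alpha]$ are meant to capture.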
As shown in Definition \[def:Symbols\], [**PTL**]{} contains, besides the usual logical symbols, also symbols for arithmetical functions and relations, the hybrid logic symbols for explicitly referring to states, list constructors and functions, and a probability operator. As modal operators (${\Box}$ and ${\Diamond}$) implicitly bind states, they have a more fundamental role, reminiscent of that of the $\lambda$ binder. Therefore, they are treated separately in Definition \[def:Expressions\]. \[def:Symbols\] For every type $\tau$, $S_{\tau}$ is a countably infinite set of uninterpreted symbols of type $\tau$. The set of arithmetic function symbols $S_{AF}$ is the set $\{ 0_{\eta}, 1_{\eta}, +_{\eta {\rightarrow}\eta {\rightarrow}\eta}, *_{\eta {\rightarrow}\eta {\rightarrow}\eta} \}$. The set of arithmetic relation symbols $S_{AR}$ is $\{ =_{\eta {\rightarrow}\eta {\rightarrow}o}, <_{\eta {\rightarrow}\eta {\rightarrow}o} \}$. The set of propositional logical symbols $S_L$ is $\{ \top_o, \bot_o, \vee_{o {\rightarrow}o {\rightarrow}o}, \wedge_{o {\rightarrow}o {\rightarrow}o}, {\rightarrow}_{o {\rightarrow}o {\rightarrow}o}, {\leftrightarrow}_{o {\rightarrow}o {\rightarrow}o}, \neg_{o {\rightarrow}o} \}$. The set of hybrid logical symbols $S_H$ is $\{ @_{\mu {\rightarrow}o {\rightarrow}o}, {\texttt{in}}_{\mu {\rightarrow}o} \}$. The set of quantifiers $S_Q$ is $\bigcup_{\tau \in T} \{{\forall}_{(\tau {\rightarrow}o) {\rightarrow}o}, {\exists}_{(\tau {\rightarrow}o) {\rightarrow}o}, =_{\tau {\rightarrow}\tau {\rightarrow}o} \}$. The symbol ${\texttt{nil}}^\tau$ has type ${\texttt{list}}[\tau]$, ${::}^\tau$ has type $\tau {\rightarrow}{\texttt{list}}[\tau] {\rightarrow}{\texttt{list}}[\tau]$, $\in^{\tau}$ has type $\tau {\rightarrow}{\texttt{list}}[\tau] {\rightarrow}o$ and the length operator $|.|^{\tau}$ has type ${\texttt{list}}[\tau] {\rightarrow}\eta$. The probability operator ${\mathcal{P}}$ has type ${\texttt{list}}[\alpha] {\rightarrow}o {\rightarrow}\eta$.
The set of all symbols $S$ is defined as $\bigcup_{\tau \in T} S_{\tau} \cup S_{AF} \cup S_{AR} \cup S_L \cup S_H \cup S_Q \cup \bigcup_{\tau \in T} \{{\texttt{nil}}^{\tau}, {::}^{\tau}, |.|^{\tau}, \in^{\tau}\} \cup \{ {\mathcal{P}}\}$. Expressions are constructed as in the lambda calculus, using the symbols from $S$, application, abstraction and modalities. \[def:Expressions\] *Expressions* are constructed according to the following rules: - if $s_{\tau} \in S$, then $s_{\tau}$ is an expression of type $\tau$. - if $t_1$ is an expression of type ${\tau {\rightarrow}\tau'}$ and $t_2$ is an expression of type $\tau$, then $(t_1~t_2)$ is an expression of type $\tau'$. - if $x_{\tau} \in S_{\tau}$ and $t$ is an expression of type $\tau'$, then $\lambda x_{\tau}. t$ is an expression of type $\tau {\rightarrow}\tau'$. - if $\varphi$ is an expression of type $o$, $p$ is an expression of type $\eta$ and $a$ is an expression of type $\alpha$, then ${\Diamond}^p_a \varphi$ and ${\Box}_a \varphi$ are expressions of type $o$. *Formulas* are expressions of type $o$. *Actions* are expressions of type $\alpha$. The set of expressions of type $\tau$ is denoted $E_{\tau}$. ${\mathcal{L}}= \bigcup_{\tau \in T} E_{\tau}$. Types are omitted when they can be inferred from the context. The usual parenthesis conventions are followed. Numerals are occasionally written in decimal notation. Infix notation is employed as usual for logical connectives, arithmetical functions and relations and the list constructor ${::}$. Binding notation is used for quantifiers. Additionally, the following notation conventions and abbreviations are used: - ${\Diamond}_a \varphi \equiv {\exists}x_{\eta}.
{\Diamond}^{x}_a \varphi$ - ${\mathcal{P}}_l(\varphi) \equiv (({\mathcal{P}}~l)~\varphi)$ - ${\forall}x:G.~H(x) \equiv {\forall}x.~G(x) {\rightarrow}H(x)$ - ${\exists}x:G.~H(x) \equiv {\exists}x.~G(x) \wedge H(x)$ - ${\forall}x_{\tau} \in \ell_{{\texttt{list}}[\tau]}.~H(x) \equiv {\forall}x.~ (x \in \ell) {\rightarrow}H(x)$ - ${\exists}x_{\tau} \in \ell_{{\texttt{list}}[\tau]}.~H(x) \equiv {\exists}x.~ (x \in \ell) \wedge H(x)$ - ${\mathcal{P}}_{a{::}l}(\varphi {::}L) \equiv {\mathcal{P}}_{a{::}l}(\varphi) \wedge {\mathcal{P}}_{l}(L)$ (with ${\mathcal{P}}_{{\texttt{nil}}}({\texttt{nil}}) \equiv \top$) Probabilities appear in the logical language in two ways: firstly, as annotations on the diamond modal operator, in order to indicate how probable the corresponding state transition is; and secondly, through the higher-order probability function ${\mathcal{P}}$, which takes a list of actions and a proposition as arguments and returns the probability that the proposition will hold after the execution of the listed actions. The following are some simple examples of probabilistic statements and their corresponding formalizations in [**PTL**]{}: 1. Tossing a coin has a transition with probability $0.5$ to a state where the coin shows heads: $ {\forall}x:\mathit{Coin}. {\Diamond}^{0.5}_{\mathit{toss}(x)} \mathit{heads}(x) $ 2. The probability of a coin showing heads after it is tossed is $0.5$: $${\forall}x:\mathit{Coin}. {\mathcal{P}}_{\mathit{toss}(x){::}{\texttt{nil}}}(\mathit{heads}(x)) = 0.5$$ 3. The probability of a coin showing heads twice after it is tossed twice is less than $0.5$: $ {\forall}x:\mathit{Coin}. {\mathcal{P}}_{\mathit{t}(x){::}\mathit{t}(x){::}{\texttt{nil}}}(\mathit{h}(x){::}\mathit{h}(x){::}{\texttt{nil}}) < 0.5 $ , where $t = \mathit{toss}$ and $h = \mathit{heads}$. 4. After a coin is tossed it is necessarily either heads or tails: $${\forall}x:\mathit{Coin}. {\Box}_{\mathit{toss}(x)} (\mathit{heads}(x) \vee \mathit{tails}(x))$$ 5. 
After a coin is tossed it is possibly tails: $ {\forall}x:\mathit{Coin}. {\Diamond}_{\mathit{toss}(x)} (\mathit{tails}(x)) $ Semantics {#sec:Semantics} ========= For each type $\tau$, we need a domain $D_{\tau}$ of elements on which expressions of type $\tau$ are interpreted. For numerical expressions, we assume the domain to be a real closed field. For booleans, we assume the set with the usual two truth values. For function types, we require *all* functions to be present in the type’s domain. This effectively results in a *standard* higher-order semantics. For *Henkin* semantics, it would suffice to drop this last condition. A *domain* $D_{\tau}$ for a type $\tau$ is a non-empty set such that $D_{\tau' {\rightarrow}\tau}$ is the set of all functions from $D_{\tau'}$ to $D_{\tau}$ (for every $\tau'$ and $\tau$), $D_o = \{ {\mathbf{T}}, {\mathbf{F}}\}$, $D_{\eta} = {\mathbb{R}}$ and $D_{{\texttt{list}}[\tau]}$ is the set of all lists of elements from $D_{\tau}$. As in the most common modal logics [@Blackburn], we use *frames* as the foundation for the modal aspects of the semantics. A frame is essentially a set of states and a relation for the transitions between states. What is different here is that transitions are labeled by actions and by probabilities, and the transition relation and actions must be mutually consistent. \[def:Frame\] A *probabilistic labeled frame* is a triple $(W,R,{P})$ such that $W$ is a non-empty set of *states*, $R \subseteq W \times W \times D_{\alpha}$ satisfying the condition that if $(w, w', \ell) \in R$ then $(w, w'', \ell) \in R$ for every $w'' \in \ell(w)$, and ${P}: R {\rightarrow}[0,1]$ is a probability function satisfying the condition that for all $w \in W$ and for all $\ell \in D_{\alpha}$ such that there exists $w' \in W$ with $(w, w', \ell) \in R$, $$\sum_{w' ~ | ~ (w, w', \ell) \in R } {P}((w,w',\ell)) = 1$$ The relation $R$ in definition \[def:Frame\] may be cyclic. 
This is convenient, for instance, when specifying Markov chains. A model extends a frame with an interpretation function that assigns denotations to expressions. The denotation of an expression may generally vary with the state. In such cases, we say that the interpretation is *flexible*; otherwise, it is *rigid* [@Fitting]. In the examples considered in this paper, boolean expressions and probabilistic expressions are always flexibly interpreted, whereas other expressions are always rigidly interpreted. \[def:Model\] A *model* is a tuple $(W, R, {P}, \{ D_{\tau} \}_{\tau \in T}, I)$ where $(W, R, {P})$ is a probabilistic labeled frame, $\{ D_{\tau} \}_{\tau \in T}$ is a domain, $W = D_{\mu}$ and $I$ is an interpretation function that maps states and expressions of any type $\tau$ to elements in $D_{\tau}$. It is assumed that any interpretation $I$ maps arithmetic symbols, list constructors and functions, and logical constants to their usual fixed denotations. Therefore (as usual, non-exhaustively): - $I_w(A \wedge B) = {\mathbf{T}}$ iff $I_w(A) = {\mathbf{T}}$ and $I_w(B) = {\mathbf{T}}$ - $I_w(A \vee B) = {\mathbf{T}}$ iff $I_w(A) = {\mathbf{T}}$ or $I_w(B) = {\mathbf{T}}$ - $I_w(A {\rightarrow}B) = {\mathbf{T}}$ iff $I_w(A) = {\mathbf{F}}$ or $I_w(B) = {\mathbf{T}}$ - $I_w(\neg A) = {\mathbf{T}}$ iff $I_w(A) = {\mathbf{F}}$ - $I_w({\forall}x_{\tau}. \varphi) = {\mathbf{T}}$ iff $I_w[x \mapsto e](\varphi) = {\mathbf{T}}$ for every $e \in D_{\tau}$ - $I_w({\exists}x_{\tau}. \varphi) = {\mathbf{T}}$ iff $I_w[x \mapsto e](\varphi) = {\mathbf{T}}$ for some $e \in D_{\tau}$ - $I_w( (t_1~t_2) ) = (I_w(t_1)~I_w(t_2))$ - $I_w(\lambda x_{\tau}. t)$ is the function taking an element $e \in D_{\tau}$ and returning $I_w[x \mapsto e](t)$. - $I_w({\texttt{in}}(s)) = {\mathbf{T}}$ iff $w = I_w(s)$ - $I_w(@_s \varphi) = {\mathbf{T}}$ iff $I_{I_w(s)}(\varphi) = {\mathbf{T}}$ where $I_w[x \mapsto e](x) = e$ and $I_w[x \mapsto e](t) = I_w(t)$ for any $t$ distinct from $x$.
Furthermore, and most importantly, the interpretations of expressions formed with modal and probabilistic operators are defined as follows: - $I_w({\Box}_a \varphi) = {\mathbf{T}}$ iff $I_{w'}(\varphi) = {\mathbf{T}}$\ for every $w'$ such that $(w, w', I_w(a)) \in R$ - $I_w({\Diamond}_a^p \varphi) = {\mathbf{T}}$ iff ${P}((w,w',I_w(a))) = I_w(p)$ and $I_{w'}(\varphi) = {\mathbf{T}}$\ for some $w'$ such that $(w, w', I_w(a)) \in R$ - $I_w({\mathcal{P}}_{\texttt{nil}}(\varphi)) = \begin{cases} 1, & \text{if } I_w(\varphi) = {\mathbf{T}}\\ 0, & \text{if } I_w(\varphi) = {\mathbf{F}}\end{cases} $ - $I_w({\mathcal{P}}_{a{::}l}(\varphi)) = \sum\limits_{w' | (w, w', I_w(a)) \in R } {P}((w,w',I_w(a))) . I_{w'}({\mathcal{P}}_l(\varphi)) $ In the probabilistic logic [**PTL**]{}, validity and satisfaction of a formula by a model are standard non-probabilistic notions, as defined below. The logic handles probabilities explicitly in its language; not at the semantic level. \[def:Satisfaction\] A formula $\varphi$ is *satisfied* in a model $M \equiv (W, R, {P}, \{ D_{\tau} \}_{\tau \in T}, I)$ in a state $w$, denoted $M, w \vDash \varphi$ iff $I_w(\varphi) = {\mathbf{T}}$. A formula $\varphi$ is *globally satisfied* in a model $M$, denoted $M \vDash \varphi$ iff $M, w \vDash \varphi$ for all $w \in W$. A formula $\varphi$ is *valid*, denoted $\vDash \varphi$ iff $M \vDash \varphi$ for every model $M$. A set of formulas $T$ entails a formula $\varphi$, denoted $T \vDash \varphi$, iff $M \vDash \varphi$ for every model $M$ such that $M \vDash \bigwedge_{G \in T} G$. Adequacy {#sec:Adequacy} ======== This section shows how the usual mathematical presentation of probability theory, as recalled in Definition \[def:ProbabilitySpace\], can be considered a special case of the probabilistic logic presented here. 
This is done by showing (in Theorem \[theorem:Adequacy\]) how to translate probability spaces into models and the usual set-theoretic language for probabilistic events into [**PTL**]{}’s language. *Set expressions* over a set $\Omega$ are expressions freely generated from singleton subsets of $\Omega$ and operators for complementation ($\overline{\phantom{\{ \}}}$), union ($\cup$) and intersection ($\cap$). As usual, by abuse of notation, set expressions and the sets they denote are not explicitly distinguished. If $\Omega = \{ w_1, w_2 \}$ then the following are examples of set expressions: $\{w_1\}$, $\{w_2\}$, $\overline{\{w_2\}}$ (denoting the set $\{ w_1 \}$), $\{w_1\} \cup \{w_2\}$ (denoting the set $\{w_1, w_2\}$), $\{w_1\} \cap \{w_2\}$ (denoting the empty set), … \[def:ProbabilitySpace\] A *probability space* is a triple $(\Omega, \Sigma, Q)$ where $\Omega$ is the *sample space* (whose elements are *outcomes*), $\Sigma$ is a *$\sigma$-algebra* on $\Omega$ (i.e. a collection of subsets of $\Omega$ (*events*) closed under complementation, countable union and countable intersection) and $Q: \Sigma {\rightarrow}[0,1]$ is a probability function satisfying Kolmogorov’s axioms: 1. $Q(E) \geq 0$, for all $E \in \Sigma$ 2. $Q(\Omega) = 1$ 3. For any countable collection $C$ of mutually disjoint events $$Q(\bigcup_{E \in C} E) = \sum_{E \in C} Q(E)$$ \[theorem:Adequacy\] For every probability space $(\Omega, \Sigma, Q)$, there is a model $M$ and a language translation function $g$ from set expressions over $\Omega$ to formulas such that $Q(E) = p$ iff $M, w \vDash {\mathcal{P}}_a(g(E)) = p$, for some $w$ and some $a$. Let $W$ be $\{ w \} \cup \Omega$. For each $w_k \in \Omega$, let $F_k$ be a distinct atomic proposition. Let $I$ be any interpretation such that $I_{w_i}(F_j) = {\mathbf{T}}$ iff $i = j$. Let $R$ be $\{ (w, w_k, I_w(a)) | w_k \in \Omega \}$. Let the probabilistic transition function be defined such that ${P}((w, w_k, I_w(a))) = Q(\{w_k\})$. 
Since $\Omega = \bigcup_k \{w_k\}$, all $\{w_k\}$ are mutually disjoint and $Q(\Omega) = 1$, the condition (from Definition \[def:Frame\]) that $$\sum_{w_k ~ | ~ (w, w_k, I_w(a)) \in R } {P}((w,w_k,I_w(a))) = 1$$ holds. Finally, let $M$ be the model $(W,R,{P},\{D_{\tau}\}_{\tau \in T},I)$. The translation function $g$ is defined recursively: $$g(E) = \begin{cases} F_k, & \text{if } E = \{ w_k \} \\ g(E') \vee g(E''), & \text{if } E = E' \cup E'' \\ g(E') \wedge g(E''), & \text{if } E = E' \cap E'' \\ \neg g(E'), & \text{if } E = \overline{E'} \end{cases}$$ Now the fact that $Q(E) = p$ iff $M, w \vDash {\mathcal{P}}_a(g(E)) = p$ must be proven. First notice that, by Definition \[def:Satisfaction\], $M, w \vDash {\mathcal{P}}_a(g(E)) = p$ iff $I_w({\mathcal{P}}_a(g(E)) = p) = {\mathbf{T}}$, and by Definition \[def:Model\], $I_w({\mathcal{P}}_a(g(E)) = p) = {\mathbf{T}}$ iff $$\sum_{w' ~ | ~ (w, w', I_w(a)) \in R } {P}((w,w',I_w(a))). I_{w'}({\mathcal{P}}_{{\texttt{nil}}}(g(E))) = p$$ By Definition \[def:Model\] again and the definition of $R$, the summation above can be simplified, resulting in the following equation: $$\sum_{w' ~ | ~ w' \in \Omega \text{ and } I_{w'}(g(E)) = {\mathbf{T}}} {P}((w,w',I_w(a))) = p$$ Furthermore, unfolding the definition of ${P}$, the equation above reduces to: $$\sum_{w' ~ | ~ w' \in \Omega \text{ and } I_{w'}(g(E)) = {\mathbf{T}}} Q(\{w'\}) = p$$ Therefore, it suffices to prove that the equation above holds iff $Q(E) = p$, or equivalently, that: $$\sum_{w' ~ | ~ w' \in \Omega \text{ and } I_{w'}(g(E)) = {\mathbf{T}}} Q(\{w'\}) = Q(E)$$ By Kolmogorov’s third axiom, $Q(E) = \sum_{w' \in E} Q(\{w'\})$. Hence, letting $X$ be the following set: $$X = \{ x ~ | ~ x \in \Omega \text{ and } I_x(g(E)) = {\mathbf{T}}\}$$ a sufficient condition for the equation above to hold is that $X = E$. This is proven below by induction on the structure of $E$: - **Base case** ($E = \{w_k\}$): then $g(E) = F_k$ and, by definition of $I$, $I_x(F_k) = {\mathbf{T}}$ iff $x = w_k$.
- **Induction cases**: - ($E = \overline{E'}$): then $g(E) = \neg g(E')$ and hence: $$X = \{ x | x \in \Omega \text{ and not } I_x(g(E')) = {\mathbf{T}}\}$$ Let $$Y = \{ x | x \in \Omega \text{ and } I_x(g(E')) = {\mathbf{T}}\}$$ By induction hypothesis, $Y = E'$. Therefore, $X = \Omega \setminus Y = \Omega \setminus E' = E$. - ($E = E' \cap E''$): then $g(E) = g(E') \wedge g(E'')$ and hence: $$X = \{ x | x \in \Omega \text{ and } I_x(g(E') \wedge g(E'')) = {\mathbf{T}}\}$$ and so: $$X = \{ x | x \in \Omega \text{ and } I_x(g(E')) = {\mathbf{T}}\text{ and } I_x(g(E'')) = {\mathbf{T}}\}$$ Let: $$Y = \{ x | x \in \Omega \text{ and } I_x(g(E')) = {\mathbf{T}}\}$$ $$Z = \{ x | x \in \Omega \text{ and } I_x(g(E'')) = {\mathbf{T}}\}$$ By induction hypothesis, $Y = E'$ and $Z = E''$. Therefore, $X = Y \cap Z = E' \cap E'' = E$. - ($E = E' \cup E''$): this case is analogous to the case above. Informally, the idea of the proof of Theorem \[theorem:Adequacy\] is to translate a probability space into a model with a distinguished initial state and a future state for each possible outcome in the space. Any set expression (specifying an event) has a corresponding logical formula. The correspondence is as expected: union corresponds to disjunction, intersection to conjunction and complementation to negation. The translation is *adequate* in the sense that the probability of an event in the space is equal to the probability of the corresponding formula in the model. Expressiveness {#sec:Expressiveness} ============== A corollary of Theorem \[theorem:Adequacy\] is that the probabilistic logic [**PTL**]{}is more expressive than classical probability theory, in two distinct informal senses. 
The first one is syntactical: whereas the usual language of classical probability theory (which relies on set expressions) can naturally express formulas containing propositional connectives such as negation, conjunction and disjunction (through the inverse of the translation function $g$ defined in the proof of the theorem), there are formulas in [**PTL**]{}’s language (e.g. formulas containing quantifiers or nested probability operators) which have no (natural) counterpart in the language of classical probability theory. The second one is semantical: the proof of Theorem \[theorem:Adequacy\] shows that probability spaces correspond to models with a very simple frame; it would be inconvenient to express models with more complex frames in terms of probability spaces, because the frame structure would have to be flattened. Independence {#sec:Independence} ------------ Shortcomings and limitations of probability spaces for knowledge representation become apparent in situations where a sequence of independent actions is performed over time. Suppose that a fair coin is tossed twice. Representing this as a probability space requires a sample space with four outcomes $\{ h_1h_2, h_1t_2, t_1h_2, t_1t_2 \}$. Saying that, for instance, $P(\{h_1h_2\}) = P(H_1 \cap H_2) = P(H_1) P(H_2) = 0.25$ (where $H_1 = \{h_1t_2,h_1h_2\}$ and $H_2 = \{h_1h_2,t_1h_2\}$) requires the assumption of independence for the tosses. Two events $E_1$ and $E_2$ are often defined to be *independent* if and only if $P(E_1 \cap E_2) = P(E_1) P(E_2)$. But this definition is epistemologically unsatisfactory. How do we actually come to know that $H_1$ and $H_2$ are independent? According to this definition, we must know $P(H_1 \cap H_2)$ in advance. But that is precisely what, in practice, we do *not* know and would like to compute (based on our knowledge of $P(H_1)$ and $P(H_2)$)!
We can easily get trapped in circular reasoning, trying to justify, for instance, our claim that $P(H_1 \cap H_2) = 0.25$ by saying that $H_1$ and $H_2$ are independent and then trying to justify that they are independent by saying that $P(H_1 \cap H_2) = 0.25$. Of course, we tend to escape from such cases of fallacious circular reasoning by simply assuming that the events are independent. However, the assumption is *tacit*. Classical probability theory provides no way to represent knowledge of the independence and any reason that we might have for justifying the assumption of independence of the events remains at an informal level, external to the representation. In the [**PTL**]{}, on the other hand, the possibility to represent independence comes naturally and for free. For instance, when the axiom ${\forall}x:Coin. {\Diamond}_{t(x)}^{0.5}. H(x) \wedge {\Diamond}^{0.5}_{t(x)}T(x)$ is assumed, it follows from the semantics of the logic that it holds in any state of the model. And since the axiom states the equal probabilities for heads and tails in a way that does not depend on anything except the action of the toss itself, it is clear that tossing a coin at a state $s$ has no effect on tossing the coin at another state $s'$. Therefore, the two tosses must be independent, and consequently it follows that: $${\forall}x:Coin. {\Diamond}_{t(x)}^{0.5}. H(x) \wedge {\Diamond}^{0.5}_{t(x)}T(x) \vDash {\mathcal{P}}_{t(x){::}t(x){::}{\texttt{nil}}}(H(x){::}H(x) {::}{\texttt{nil}}) = 0.25$$ Also dependence can be easily represented. For example, consider a magical coin $c_m$ that behaves as a fair coin in an initial state, but when tossed in any other state always gives the opposite result of the previous toss. This may be represented by the following axioms: - $@_s {\Diamond}_{t(c_m)}^{0.5}. H(c_m) \wedge {\Diamond}^{0.5}_{t(c_m)}T(c_m)$ - $T(c_m) {\rightarrow}{\Diamond}_{t(c_m)}^{1}. H(c_m)$ - $H(c_m) {\rightarrow}{\Diamond}_{t(c_m)}^{1}. 
T(c_m)$ - ${\Box}\neg {\texttt{in}}(s)$ (no state is a predecessor of $s$) The inadequacy of classical probability theory’s usual definition of independence can be further illustrated in a situation where we have to randomly get an object from a bag with four objects: a black sphere, a white sphere, a black cube and a white cube. For simplicity, we assume tacitly that we put the object back in the bag after the action. This can be represented by the following axioms: **A1:** $S(s_b) \wedge B(s_b)$; **A2:** $S(s_w) \wedge W(s_w)$; **A3:** $C(c_b) \wedge B(c_b)$; **A4:** $C(c_w) \wedge W(c_w)$; **A5:** $\mathit{Bag} = s_b {::}s_w {::}c_b {::}c_w {::}{\texttt{nil}}$; and **A6:** ${\forall}x \in \mathit{Bag}. {\Diamond}_{a}^{1/|\mathit{Bag}|} G(x)$. It then follows, by the semantics, that: $$\textrm{A1}, \textrm{A2}, \textrm{A3}, \textrm{A4}, \textrm{A5}, \textrm{A6} \vDash {\forall}x \in \mathit{Bag}. {\mathcal{P}}_a(S(x) \wedge B(x)) = {\mathcal{P}}_a(S(x)) . {\mathcal{P}}_a(B(x))$$ Nevertheless, we should not be willing to conclude from this result, as classical probability theory does, that the event of getting a spherical object and the event of getting a black object are independent. It is merely coincidental that ${\mathcal{P}}_a(S(x) \wedge B(x)) = {\mathcal{P}}_a(S(x)) . {\mathcal{P}}_a(B(x))$. If the bag had an additional black tetrahedron, for instance, the two sides of this equation would not be equal anymore. In the formalization above, it is evident that both events are correlated, because they consist of outcomes from a single action. A simple formal theory $T_{\mathit{indep}}$ of (in)dependence of actions could provide the following definition for *independence* of an action $a$ from an action $b$: - $ \mathit{Independent}(a,b) \equiv ({\forall}s. @_s (({\forall}\varphi. {\forall}p.
{\mathcal{P}}_a(\varphi) = p {\rightarrow}{\Box}_b ({\mathcal{P}}_a(\varphi) = p))) $ It follows from the semantics that $T_{\mathit{indep}}$ entails the following *shortcut* theorem: $$\bigwedge_{1\leq i < j \leq n} \mathit{Independent}(a_i,a_j) {\rightarrow}{\mathcal{P}}_{a_1{::}\ldots{::}a_n {::}{\texttt{nil}}}(E_1{::}\ldots{::}E_n {::}{\texttt{nil}}) = {\mathcal{P}}_{a_1}(E_1)\ldots{\mathcal{P}}_{a_n}(E_n)$$ The notion of independence defined in $T_{\mathit{indep}}$ is non-circular. We may, from the logical specification of a system in the [**PTL**]{}’s language, explicitly reason about the actions of the system, conclude that some of them are mutually independent and use the general theorem above as a shortcut for computing probabilities of sequences of actions. This is arguably more satisfactory than the teleological definition of independence from classical probability theory, which depends on the very shortcut theorem that we would have liked to derive. It is not an aim of this paper to discuss $T_{\mathit{indep}}$ or other theories of independence in detail. $T_{\mathit{indep}}$ is just a (very simple) example showing that [**PTL**]{}is expressive enough to allow explicit reasoning about concepts that are very relevant in a probabilistic context. Disambiguation {#sec:Disambiguation} -------------- Informal statements about probabilities are sometimes imprecise and ambiguous. Their intended meanings are not always clear. If a person $A$ tried to describe to a person $B$ the random effects of an action $a$, her description might include a sentence such as: “the probability of $\varphi$ after $a$ is $p$”. The most straightforward and literal logical meaning for this sentence would be ${\mathcal{P}}_a(\varphi) = p$. However, it is often the case that the meaning intended by $A$ is actually ${\Diamond}_a^p \varphi$. $B$ must guess, from the context of the conversation and the common knowledge, which of the two alternatives is actually meant. 
A formula such as ${\Diamond}_a^p \varphi$ provides fine-grained information about one particular state transition that is made possible by the action, whereas ${\mathcal{P}}_a(\varphi) = p$ provides coarse-grained aggregated information about transitions to all states where $\varphi$ holds. The aggregated information is incomplete, because it doesn’t say how many such states there are and it doesn’t specify the transition probability to each of these states. The power to disambiguate is an interesting qualitative criterion for estimating the usefulness of a formal language. The formal probabilistic logical language proposed here is expressive enough to precisely disambiguate between ${\Diamond}$ and ${\mathcal{P}}$, which are subtly but importantly different in meaning, even though they are often expressed indistinguishably in natural language. It is important to note that neither ${\Diamond}_a^p \varphi {\rightarrow}{\mathcal{P}}_a(\varphi) = p$ nor ${\mathcal{P}}_a(\varphi) = p {\rightarrow}{\Diamond}_a^p \varphi$ is valid. Understanding the difference between ${\Diamond}_a^p \varphi$ and ${\mathcal{P}}_a(\varphi) = p$ is crucial for a correct use of [**PTL**]{}. Furthermore, the difference in the meanings of ${\Diamond}$ and ${\mathcal{P}}$ is essential to a semantics for probabilities that is compatible with our intuition about probabilities. Therefore, any sufficiently rich probabilistic logic should strive to distinguish between these important notions. [**PTL**]{} does so explicitly and syntactically. In natural language dialogues, $B$ tends to cope with the ambiguity by subconsciously attempting to presuppose that $\varphi$ fully specifies a single outcome of $a$, in which case $A$ means ${\Diamond}_a^p \varphi$. If this presupposition is incompatible with pre-existing knowledge or even with knowledge acquired later during the dialogue, the presupposition is canceled and the meaning falls back to ${\mathcal{P}}_a(\varphi) = p$.
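The gap between the fine-grained and the coarse-grained reading can be shown on a toy transition system. The sketch below is our own illustration (the state names and probabilities are made up, not taken from the paper): an action with three successor states where ${\mathcal{P}}_a(\varphi) = 2/5$ holds even though no single transition carries probability $2/5$, i.e. ${\Diamond}_a^{2/5}\varphi$ fails.

```python
from fractions import Fraction

# A hypothetical action with three successor states and their
# transition probabilities.
transitions = {"s1": Fraction(1, 5), "s2": Fraction(1, 5), "s3": Fraction(3, 5)}
phi = {"s1", "s2"}            # phi holds in s1 and s2, but not in s3

# Coarse-grained reading: P_a(phi) aggregates over *all* phi-states.
P_phi = sum(p for s, p in transitions.items() if s in phi)
assert P_phi == Fraction(2, 5)

# Fine-grained reading: the diamond speaks about a single transition;
# here no individual transition has probability 2/5, so
# P_a(phi) = 2/5 holds although Diamond_a^{2/5} phi does not.
assert all(p != P_phi for p in transitions.values())
```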
Fully understanding the dynamics of presuppositions is an open linguistic challenge, and probabilities bring yet another dimension of complexity to this difficult problem. Consider the following statement: - “the probability of picking number $n$ (for $1 \leq n \leq 6$) is $1/6$” Upon hearing this sentence, we tend to presuppose that there are six outcomes (i.e. ${\Diamond}_{\mathrm{pick}}^{1/6} \mathrm{Picked}(n)$). However, if we are later told that: - “the number is picked by throwing a 12-faced fair dice where each $n$ (for $1 \leq n \leq 6$) occurs in two distinct faces.” we are forced to cancel our presupposition and revise our logical interpretation of the previous sentence. Implementation and Automation {#sec:Implementation} ============================= A preliminary implementation of [**PTL**]{}in `Coq` is available in <https://github.com/Paradoxika/ProbLogic>. It follows the embedding methodology used in [@ECAI; @CSR], which is based on a higher-order and typed version of the standard translation of modal logics into predicate logic, with three important differences. Firstly, whereas in the standard translation the accessibility relation is a primitive constant, in the embedding of [**PTL**]{}it is derived from the primitive notion of action. Secondly, the higher-order modal logics used in [@ECAI] were *rigid*, while [**PTL**]{}includes a flexible probability function ${\mathcal{P}}$ (which is simulated by a flexible predicate in the implementation). Finally, in contrast to the logics from [@ECAI], [**PTL**]{}requires numerical reasoning. It is this last point that makes the embedding of [**PTL**]{}significantly harder than previous embeddings and justifies its preliminary status. The current implementation still does not provide convenient modal tactics (as those described in [@CSR]) and numerical reasoning is done with `Coq`’s standard `QArith` library for rationals (instead of real-closed fields). 
Decidability (of the satisfiability, validity and entailment problems) is, of course, hopeless for the proposed *higher-order* logic. But even for logics with undecidability issues, automated theorem provers are occasionally sufficiently efficient for practical applications [@ECAI]. It is also important to note that, even if arithmetical expressions (of type $\eta$) are restricted to be ground (i.e. by forbidding quantifiers of type $(\eta {\rightarrow}o) {\rightarrow}o$), [**PTL**]{} thus restricted would still be expressive enough to formalize all the examples shown in this paper. In this restricted logic, the only automation of arithmetic needed is simplification/computation of arithmetic expressions and reduction of ground simplified arithmetic propositions to $\top$ or $\bot$. With the recent progress in SMT-solving and automated theorem proving modulo arithmetic (even with quantifiers), it is reasonable to hope that automated provers will soon be able to cope with [**PTL**]{} problems. In the meantime, the current implementation in `Coq` has already proven to be sufficient for a fully interactive formalization of the Monty Hall problem, as described in the next section. The Monty Hall Problem {#sec:MontyHall} ====================== [**PTL**]{} is used here in the formalization of vos Savant’s famous *Monty Hall problem* [@MontyHall], whose description is reproduced below: *Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, ‘Do you want to pick door No. 2?’ Is it to your advantage to switch your choice?* This probabilistic puzzle is seemingly paradoxical, because people very often make mistakes when they reason *informally* about the problem, as they tend to wrongly compute the probabilities.
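The correct numerical answer can be checked independently of any logic by exhaustively weighting the game tree. The sketch below is our own illustration (the `play` helper is not part of the formalization); it assumes, as in the problem statement, that the player always picks door No. 1, the car is hidden uniformly, and the host opens a goat door uniformly among the openable ones.

```python
from fractions import Fraction

DOORS = [1, 2, 3]

def play(switch):
    """Weight every way the game can unfold; the player picks door 1."""
    wins, total = Fraction(0), Fraction(0)
    for car in DOORS:                              # host hides the car, 1/3 each
        pick = 1
        # host opens a goat door that is neither picked nor hides the car
        openable = [d for d in DOORS if d not in (pick, car)]
        for opened in openable:                    # uniform among openable doors
            weight = Fraction(1, 3) / len(openable)
            final = (next(d for d in DOORS if d not in (pick, opened))
                     if switch else pick)
            total += weight
            wins += weight if final == car else 0
    return wins / total

print(play(True), play(False))  # 2/3 1/3
```

Switching wins with probability 2/3 and staying with probability 1/3, matching the values derived formally from the axioms later in this section.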
Therefore, despite its apparent simplicity, this problem is an interesting benchmark for evaluating *formal* probabilistic logics. A good probabilistic logic should allow a sufficiently natural and unambiguous formal representation of the problem and should entail correct probability values. From the player’s point of view, the Monty Hall problem can be formalized in [**PTL**]{}by the following axioms: - **Axiom 1:** “you’re given the choice of three doors”: $ {D}= d_1 {::}d_2 {::}d_3 {::}{\texttt{nil}}$ - **Axiom 2:** “behind one door is a car”: $ {\exists}d \in {D}. C(d) $ - **Axiom 3:** “behind the others, goats”: $ {\forall}d \in {D}. \neg C(d) {\leftrightarrow}G(d) $ - **Axiom 4:** “you pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat.”: $${\exists}s_c. ((@_{s_0} {\Diamond}_{{h}} {\Diamond}_{{p}(d_1)} {\Diamond}_{{o}} {\texttt{in}}(s_c)) \wedge @_{s_c} (O(d_3) \wedge G(d_3)) )$$ A more literal reading of Axiom 4 would be that “there exists a state (the current state), reachable from the initial state by the sequence of actions in which the host hides the car (${h}$), the player picks the first door (${p}(d_1)$), and the host opens a door (${o}$), where the third door is open and has a goat.”. It is fair to say that the axioms shown above capture the intended meanings of their corresponding informal natural language sentences. As desired, the axioms are reasonably similar to the corresponding sentences, although there are interesting differences worth discussing, particularly in relation to Axiom 4. Firstly, it illustrates the need for the hybrid logic operators $@$ and ${\texttt{in}}$ in situations where it is important to declare *local* conditions, which hold only in a single given state. Secondly, it shows the convenience of having a versatile approach to actions. 
The $\mathit{pick}$ ($p$) action, for instance, takes the picked door as an argument whereas the $\mathit{open}$ ($o$) action takes no argument. This allows us to express that, from the point of view of the player, the action of picking a door is an action of the player and he can choose which door to pick, while opening a door is an action performed by the host, with outcomes that are uncertain to the player. The opening of the third door is represented as a random event of the action, through the proposition $O(d_3)$. These subtle differences between Axiom 4 and its corresponding sentence in the informal description of the problem are evidence that, as expected from a formal language, [**PTL**]{} offers a higher degree of precision than what we are used to in natural language. There are many assumptions that are not explicitly mentioned in the description of the problem. But they must be formalized as well. We list below only some of them. Other axioms (e.g. stating what remains unchanged when actions are executed) can be seen in the `Coq` formalization discussed in Section \[sec:Implementation\]. - **Axiom 5:** Each door has equal probability of having the car after the $\mathit{hide}$ ($h$) action: $ {\forall}d \in {D}. {\Diamond}_{{h}}^{1/|{D}|} C(d) $ - **Axiom 6:** The $\mathit{pick}$ ($p$) action marks the picked door: $ {\forall}d \in {D}. {\Diamond}_{{p}(d)}^1 P(d) $ - **Axiom 7:** The host opens a door containing a goat with uniform probability among the doors that are neither picked nor contain a car: $${\forall}d^c. {\forall}d^p. (C(d^c) {\rightarrow}P(d^p) {\rightarrow}{\forall}d \in (({D}- d^c) - d^p). {\Diamond}_{{o}}^{1/|(({D}- d^c) - d^p)|} O(d) )$$ - **Axiom 8:** When the player does the *switch* (${s}$) action, the newly picked door is different from the previously picked door and from the open door: $${\forall}d^o. {\forall}d^p. (O(d^o) {\rightarrow}P(d^p) {\rightarrow}{\exists}d.
(d \neq d^o \wedge d \neq d^p \wedge {\Diamond}_{{s}}^1 P(d) ) )$$ - **Axiom 9:** When the player does the *no switch* (${\bar{s}}$) action, the newly picked door is the same as the previously picked door: $ {\forall}d. (P(d) {\rightarrow}{\Diamond}^1_{{\bar{s}}} P(d) ) $ - **Axiom 10:** A state is a victorious state if and only if the car is behind the picked door: $ V \leftrightarrow ({\exists}d. C(d) \wedge P(d)) $ The next step is the formalization of (the intended meaning of) the question (“Do you want to pick door No. 2? Is it to your advantage to switch your choice?”) as a conjecture. However, this is significantly less straightforward than the formalization of the axioms. A naive and literal reading of the question could result in the following tentative conjecture: $${\mathcal{P}}_{{s}}(V) > {\mathcal{P}}_{{\bar{s}}}(V)$$ But the formula above is only satisfied in models where the probability of victory by switching is greater than the probability of victory by not switching in *all* states, whereas the question is interested in a few states only, namely those reachable by a given sequence of actions (i.e. hiding, picking, opening and re-picking). Taking this into account, an apparently plausible alternative formalization could be: $$@_{s_0} {\Box}_{{h}} {\Box}_{{p}(d_1)} {\Box}_{{o}} ({\mathcal{P}}_{{s}}(V) > {\mathcal{P}}_{{\bar{s}}}(V))$$ But this is trivially false in any model $M$ that satisfies the axioms above, because the action $\mathit{hide}$ has a successor state $s_1$ (where the car was hidden behind the first door) such that: $$M \vDash @_{s_1} {\Box}_{{p}(d_1)} {\Box}_{{o}} ({\mathcal{P}}_{{s}}(V) < {\mathcal{P}}_{{\bar{s}}}(V))$$ Yet another possible attempt would be to formalize the conjecture as: $$\forall s. \varphi(s) {\rightarrow}@_s {\mathcal{P}}_{{s}}(V) > {\mathcal{P}}_{{\bar{s}}}(V)$$ where $s$ is the current state when the question is asked and $\varphi(s)$ is a formula specifying whether $s$ is a possible current state (i.e.
consistent with the player’s observations). However, for a similar reason, this formula is also false in any model $M$ that satisfies the axioms: there is a possible current state $s^*$, where $I_{s^*}({\mathcal{P}}_{{s}}(V)) = 0$ and $I_{s^*}({\mathcal{P}}_{{\bar{s}}}(V)) = 1$. In fact, it is easy to see that, in any possible current state $s$, $I_{s}({\mathcal{P}}_{{\bar{s}}}(V))$ and $I_{s}({\mathcal{P}}_{{s}}(V))$ are always $0$ and $1$, in one order or the other, because the action of switching always has only one possible outcome. As evidenced by the failed conjectures above, there is a structural gap between the natural language question and the correct formalization of its intended meaning, and therein lies a potential reason (though probably not the only one) why people tend to have difficulties reasoning about the Monty Hall problem. As it is posed, the question induces the player to think in terms of probabilistic outcomes of the action of switching or not switching in the current state. In contrast, the correct thinking requires the player to hypothetically backtrack to the initial state and formulate the conjecture as follows: - **Conjecture:** $ @_{s_0} ({\mathcal{P}}_{{h}{::}{p}(d_1) {::}{o}{::}{s}{::}{\texttt{nil}}}(V) > {\mathcal{P}}_{{h}{::}{p}(d_1) {::}{o}{::}{\bar{s}}{::}{\texttt{nil}}}(V)) $ In any model satisfying the axioms (including the omitted axioms), $I_{s_0}({\mathcal{P}}_{{h}{::}{p}(d_1) {::}{o}{::}{s}{::}{\texttt{nil}}}(V)) = 2/3$ and $I_{s_0}({\mathcal{P}}_{{h}{::}{p}(d_1) {::}{o}{::}{\bar{s}}{::}{\texttt{nil}}}(V)) = 1/3$. Therefore, the conjecture is a theorem[^1]. Related Work {#sec:RelatedWork} ============ Many probabilistic logics are surveyed in [@SEP]. Among those logics, most depart from classical logic by adopting a probabilistic notion of validity and entailment. [**PTL**]{}, on the other hand, remains strictly classical in this respect.
The probabilistic modal logics described in Sections 4.1 and 4.2 of [@SEP] are probably the most similar to [**PTL**]{}. However, they are propositional, lack the probabilistic diamond operator, and are atemporal. Probabilistic logics that incorporate time include **PCTL** [@PCTLOriginal; @PCTL], which extends **CTL** by replacing the existential and universal path quantifiers by a probabilistic operator. **PCTL** is an excellent logic for *model checking* Markov chains. However, its lack of a probabilistic diamond operator makes it susceptible to the issues discussed in Section \[sec:Disambiguation\], thereby limiting its use beyond model checking. They also lack an explicit handling of actions, which is necessary for a convenient formalization of the Monty Hall problem and other examples discussed here. On the other hand, **PCTL**’s temporal modalities (which include, for instance, the *until* operator) are more sophisticated than [**PTL**]{}’s temporal modalities (which can only make statements about the *next* moment in time). [**PTL**]{}’s parsimony is intentional: it includes only the minimal set of temporal modalities needed to capture the desired notion of probability. Nevertheless, in practical applications where other temporal modalities are needed, they could be easily added to [**PTL**]{}as well. Conclusion and Future Work {#sec:Conclusion} ========================== The large number of available probabilistic logics indicates that conciliating logic and probability is a non-trivial task. The expressive probabilistic temporal logic [**PTL**]{}described here provides a novel alternative approach, based on the simple intuition that the notion of probability can only be fully grasped in combination with the notions of action and time. The complex interaction of time, action and probability naturally leads to a modal and higher-order logic. 
[**PTL**]{}is adequate with respect to classical probability theory, of which it can be considered an extension (as shown in Section \[sec:Adequacy\], where a correspondence between events and formulas has been established in detail). [**PTL**]{}’s convenient expressive power allowed a natural formalization of the famous Monty Hall problem. One of the main insights in the development of [**PTL**]{}came with the discovery of the need for both a higher-order probability function and a probabilistic diamond operator, as discussed in Section \[sec:Disambiguation\]. Besides the higher order, the satisfaction of this need is a distinguishing feature of [**PTL**]{}. In the near future, the implementation of [**PTL**]{}in `Coq` needs to be made more user-friendly, through the implementation of tactics that automate and hide technical details for users. On the philosophical side, it would be interesting to extend [**PTL**]{}with past temporal modalities, since we often need to reason about actions that have happened in the past but whose outcomes we have not yet observed, and to define conditional probabilities, in order to explore the question about the relationship between probabilities of conditionals (e.g. $P(A{\rightarrow}B)$) and conditional probabilities (e.g. $P(B|A)$) [@HajekProbabilities] from [**PTL**]{}’s perspective. [^1]: An interactive proof of this theorem using the embedding of [**PTL**]{}in `Coq` is freely available in the online repository of the implementation discussed in Section \[sec:Implementation\]. For the sake of simplicity, this formalization of the Monty Hall problem does not concern itself with specifying in which states each action is allowed or disallowed. But this could also be done.
--- abstract: 'Motivated by the recent experimental evidence of commensurate surface charge-density-waves (CDW) in Pb/Ge(111) and Sn/Ge(111) $\surd{3}$-adlayer structures, as well as by the insulating states found on K/Si(111):B and SiC(0001), we have investigated the role of electron-electron interactions, and also of electron-phonon coupling, on the narrow surface state band originating from the outer dangling bond orbitals of the surface. We model the $\surd{3}$ dangling bond lattice by an extended two-dimensional Hubbard model at half-filling on a triangular lattice. The hopping integrals are calculated by fitting first-principle results for the surface band. We include an on-site Hubbard repulsion $U$ and a nearest-neighbor Coulomb interaction $V$, plus a long-ranged Coulomb tail. The electron-phonon interaction is treated in the deformation potential approximation. We have explored the phase diagram of this model including the possibility of commensurate $3\times 3$ phases, using mainly the Hartree-Fock approximation. For $U$ larger than the bandwidth we find a non-collinear antiferromagnetic SDW insulator, possibly corresponding to the situation on the SiC and K/Si surfaces. For $U$ comparable or smaller, a rich phase diagram arises, with several phases involving combinations of charge and spin-density-waves (SDW), with or without a net magnetization. We find that insulating, or partly metallic $3\times 3$ CDW phases can be stabilized by two different physical mechanisms. One is the inter-site repulsion $V$, that together with electron-phonon coupling can lower the energy of a charge modulation. The other is a novel magnetically-induced Fermi surface nesting, stabilizing a net cell magnetization of 1/3, plus a collinear SDW, plus an associated weak CDW. Comparison with available experimental evidence, and also with first-principle calculations is made.' 
address: | $^{(1)}$ International School for Advanced Studies (SISSA), Via Beirut 2, Trieste, Italy\ $^{(2)}$ Istituto Nazionale per la Fisica della Materia (INFM), Via Beirut 2, Trieste, Italy\ $^{(3)}$ International Center for Theoretical Physics (ICTP), Strada Costiera, Trieste, Italy author: - 'Giuseppe Santoro$^{1,2}$, Sandro Scandolo$^{1,2}$, and Erio Tosatti$^{1,2,3}$' title: 'Charge density waves and surface Mott insulators for adlayer structures on semiconductors: extended Hubbard modeling' --- Introduction ============ Pb and Sn $\sqrt{3}\times \sqrt{3}$ adlayer structures on the (111) surface of Ge have recently revealed a reversible charge density wave (CDW) transition to a low temperature reconstructed $3\times 3$ phase. [@nature; @Modesti_1; @Modesti_2; @Avila; @LeLay; @Sn_carpinelli; @asensio] A half-filled surface state band makes the high temperature phase metallic. The low temperature phase is either metallic – as seems to be the case for Sn/Ge(111) – or weakly gapped, or pseudo-gapped, as suggested for Pb/Ge(111). Related isoelectronic systems, like the $\sqrt{3}$-adlayer of Si on the (0001) surface of SiC [@SiC] and on K/Si(111):B,[@KSi] show a clear insulating behavior, with a large gap, no structural anomalies, no CDWs, and no transitions, at least to our present knowledge. These adsorbate surfaces are altogether mysterious. The very existence of a $\sqrt{3}\times \sqrt{3}$ adsorbate phase, with coverage 1/3, is puzzling. For isoelectronic Si on Si(111), or Ge on Ge(111), for instance, there exists no such phase. The stable low-coverage phases are $7\times 7$ and $c(2\times 8)$ respectively, whose coverage is instead close to 1/4. They are made up of $2\times 2$ basic building blocks, each with one adatom saturating three out of four first-layer atoms, and one unsaturated first-layer atom, the “restatom”.
In this adatom-restatom block, the nominally unpaired electron of the adatom and that of the restatom pair off together, giving rise to a stable, fully saturated, insulating surface. By contrast, the $\sqrt{3}\times \sqrt{3}$ phases with 1/3 coverage are very common for trivalent adsorbates, such as Ga and In, and for pentavalent ones like As, on the same (111) surfaces. These adatoms lack the unpaired electron, and can therefore lead to a fully saturated insulating surface without the need for any restatoms. A $\sqrt{3}\times \sqrt{3}$ adsorbate phase of [*tetravalent*]{} adatoms is bound by construction to possess one unpaired electron per adatom, giving rise to a very destabilizing half-filled metallic surface state band. Seen in this crude light, it is a puzzle why this kind of coverage should constitute even only a locally stable state of the surface. Looking more closely, we may speculate that SiC(0001)[@SiC] and K/Si(111):B,[@KSi] most likely Mott-Hubbard insulators, [@santoro; @Northrup; @Anisimov; @KSi] are perhaps “stabilized” by Coulomb repulsions, so large as to make it difficult for electrons to move anyway. For the more innocent-looking, less correlated, Pb/Ge(111) and Sn/Ge(111), this argument is less obvious, and the puzzle remains. The $3\times 3$ CDW state – whatever its real nature – most likely serves the function of stabilizing these otherwise unstable surfaces at low temperatures. Nonetheless, the CDW periodicity chosen by the surface CDW – $3\times 3$, meaning a $\sqrt{3}\times \sqrt{3}$ super-cell of adatoms – is not at all evident. In fact, it replaces a supposedly unstable state characterized by an odd number of electrons/cell (three), with another where the electron number (nine) is, alas, odd again. Be all that as it may, there is little doubt that the main factor driving the phenomena on all these surfaces appears to be precisely the half-filled – and extremely narrow – surface state band.
We thus begin with a discussion that in principle encompasses all the $\sqrt{3}\times \sqrt{3}$ tetravalent adsorbed surfaces. We believe the following points to be of general validity: [*i)*]{} [**Poor nesting**]{}. Two-dimensional Fermi surface (FS) nesting in the half-filled surface states[@tosatti] has been repeatedly invoked as the driving mechanism for the CDW instability in the case of Pb/Ge,[@nature; @asensio] but excluded for the case of Sn/Ge.[@Sn_carpinelli; @Sn_scandolo] However, by inspecting both photoemission $k(E)$ data, [@Modesti_2; @Avila; @LeLay; @asensio] and existing first-principle (LDA) calculations[@nature; @Sandro; @Sn_carpinelli] of the surface half-filled band (the “adatom dangling bond band”), we fail to detect a particularly good nesting of the two-dimensional FS at the surface Brillouin zone (BZ) corner ${\bf K}=(4\pi/3a,0)$. The wavevector-dependent susceptibility generated by the calculated band structure, in particular, has no especially large value at this k-point, and rather peaks elsewhere (see inset in Fig. \[band\_bz\_chi0:fig\]). To be sure, there is nothing preventing in general a good nesting at ${\bf K}=(4\pi/3a,0)$, or any other k-point. However, insofar as the surface state band is really lying in a bulk gap at each single k-point, it should be with good accuracy – by simple state counting and charge neutrality – precisely half filled. This implies that the filled and empty state areas should be equal. Hypothetical Fermi surfaces with this kind of shape and good nesting at ${\bf K}=(4\pi/3a,0)$ do not appear to be compatible with an integer electron number. We thus believe lack of perfect nesting to be the case for both Pb/Ge and Sn/Ge. Fig.
\[band\_bz\_chi0:fig\], showing a tight binding fit to the LDA surface band dispersion for the test-case of Si(111)/Si,[@Sandro] as well as the corresponding FS and Lindhard density response function $\chi_o(\bf q)$, $$\chi_o(\bf q) \,=\, \int_{BZ} \frac{d\bf k}{(2\pi)^2} \; \frac{ n_{\bf k} - n_{{\bf k}+{\bf q}} } {\epsilon_{{\bf k}+{\bf q}} - \epsilon_{\bf k} } \;,$$ $n_{\bf k}$ and $\epsilon_{\bf k}$ being the occupation number and energy of an electron with Bloch momentum ${\bf k}$, provides a concrete illustration of these statements. We note, in passing, that a strong nesting at ${\bf K}$ is, on the contrary, automatically guaranteed if the surface band acquires a uniform magnetization in such a way that the densities of up and down electrons become, respectively, $2/3$ and $1/3$.[@Sandro] The majority spins would then fill the region external to the reduced BZ in Fig. \[band\_bz\_chi0:fig\], and their FS would be strongly nested. This suggestion, which turns out to be correct at the mean-field level, points into the direction of a possible role played by magnetism in these systems. [*ii)*]{} [**Importance of electron-electron interactions**]{}. The width $W$ of the surface band is relatively small: $W\approx 0.5$ eV for Pb and Sn/Ge(111), $W\approx 0.3$ eV for SiC(0001). Moreover, this band is half-filled. These facts call for a careful consideration of electron-electron interactions, as well as of electron-phonon (e-ph), as possible sources of instability. The importance of electron-electron interaction is underlined by the different phenomenology of SiC(0001) and K/Si(111):B with respect to Pb-Sn/Ge(111). The stronger insulating character of the former surfaces parallels closely their stronger electron-electron repulsions, connected both with more localized surface Wannier functions (see later on), and with reduced screening, due to larger bulk semiconducting gaps. [*iii)*]{} [**Weakness of LDA calculations for ground state prediction**]{}. 
LDA electronic structure calculations – an extremely well tested tool in many areas – are certainly suitable for a weakly interacting system, such as the bulk semiconductor or a passivated semiconductor surface. They are less reliable, especially when they do not include spin, in predicting the stable state and the instabilities of a narrow-band system. For instance, the phenomenology of SiC(0001) – suggesting a Mott-Hubbard insulator – cannot be reproduced by LDA. The onset of a CDW on Sn/Ge(111) is also not predicted by recent LDA calculations.[@Sn_carpinelli; @Sn_scandolo] While there is no reason to doubt the basic credibility of the one-electron band energies obtained from these Kohn-Sham equations, the mean-field treatment of interactions, the screened local exchange, and especially the neglect of magnetic correlations are the standard sources of problems with LDA. As a consequence, it will be necessary to worry more substantially about interactions, and to use methods which, even if mean-field, permit the inclusion of strong correlations, including magnetic effects. [*iv)*]{} [**Interaction-driven mechanisms for $3\times 3$ CDW instabilities**]{}. There are several different couplings which the surface electrons experience as they hop weakly from one surface adatom site to another, and which can influence the formation of the CDW, or of an insulating ground state: a) on-site and nearest-neighbor (n.n.) inter-site electron-electron repulsion; b) on-site effective attraction (negative Hubbard-$U$ term) of electron-phonon origin. Because of poor nesting, electron-phonon coupling alone is unlikely to drive the $3\times 3$ CDW. At weak coupling, the susceptibility peak in Fig. \[band\_bz\_chi0:fig\] would rather drive an incommensurate periodicity.
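The weak-coupling statement about the susceptibility can be illustrated numerically. The following is a minimal sketch of the Lindhard function of point *i)* for a triangular adatom lattice; the nearest-neighbor-only dispersion, the hopping unit, the $60\times 60$ k-grid and the broadening are illustrative choices, not the paper's 6-shell LDA fit:

```python
import numpy as np

# Minimal sketch: Lindhard function chi_0(q) for a triangular adatom lattice
# with a nearest-neighbor-only tight-binding band (lattice constant a = 1).
t1 = 1.0   # energy unit (illustrative)
N = 60     # linear k-grid size; N*N points cover one reciprocal unit cell

# Reciprocal primitive vectors of the triangular lattice.
b1 = 2 * np.pi * np.array([1.0, -1.0 / np.sqrt(3.0)])
b2 = 2 * np.pi * np.array([0.0,  2.0 / np.sqrt(3.0)])
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
kx = (i * b1[0] + j * b2[0]) / N
ky = (i * b1[1] + j * b2[1]) / N

def eps(kx, ky):
    """Triangular-lattice n.n. tight-binding dispersion."""
    return -2 * t1 * (np.cos(kx) + 2 * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2))

ek = eps(kx, ky)
ef = np.median(ek)              # half filling: half of the k-points occupied
nk = (ek < ef).astype(float)

def chi0(qx, qy, eta=0.05 * t1):
    """chi_0(q) = (1/N_s) sum_k (n_k - n_{k+q}) / (eps_{k+q} - eps_k), regularized by eta."""
    ekq = eps(kx + qx, ky + qy)
    nkq = (ekq < ef).astype(float)
    num, den = nk - nkq, ekq - ek
    return np.sum(num * den / (den ** 2 + eta ** 2)) / N ** 2

print("chi0 at the BZ corner K:", chi0(4 * np.pi / 3, 0.0))
```

Scanning `chi0` along a $\Gamma$–K–M path with this crude band lets one check where the response actually peaks; the K-point value is finite, consistent with the poor-nesting argument.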
At strong coupling, the frustration associated with the triangular lattice will favor, in general, a superconducting ground state over a CDW phase (see Appendix).[@santos] On the other hand, the electron-electron interaction, both on-site and, independently, nearest neighbor, naturally suggests, as we shall see later, the $3\times 3$ surface periodicity which is found experimentally. The approach we will take is based on an extended Hubbard-Holstein model. It is by necessity a “non-first-principles” approach and, as such, has no strong predictive power. However, it is made more realistic by using parameters extracted from first-principles calculations, and we find it very helpful in clarifying the possible scenarios as a function of the strength of electron-electron interactions. Because of this rather qualitative use, we will make no attempt to push the treatment of this model to a very high level of sophistication. The basic tool will be the unrestricted Hartree-Fock approximation. Although mean-field, it allows magnetic solutions, which are favored by the exchange interaction, left unscreened at this level. Model ===== Each tetravalent adatom on a (111) semiconductor surface carries a dangling bond – an unpaired electron in an unsaturated orbital. In the $\sqrt{3}\times\sqrt{3}$ structure, the dangling bonds of the adatoms give rise to a band of surface states which lies in the bulk semiconductor gap.[@nature; @Sandro] By electron counting, such a band is half-filled. Our basic starting point is the quantitatively accurate surface state band dispersion $\epsilon_{\bf k}$ which one calculates in gradient-corrected LDA.[@nature; @Sandro] It is shown in Fig. \[band\_bz\_chi0:fig\] for the case of Si/Si(111). The solid and dashed lines in Fig. \[band\_bz\_chi0:fig\] are tight-binding fits to the LDA results obtained by including, respectively, up to the $6^{th}$ and up to the $2^{nd}$ shell of neighbors. The fit with hopping integrals $t_1,t_2,\cdots,t_6$ is quite good.
Less good, but qualitatively acceptable, is the fit obtained using only nearest neighbor (n.n.) and next-nearest neighbor (n.n.n.) hopping integrals $t_1$ and $t_2$. The Fermi surface (FS) for the half-filled surface band is shown in the upper inset of Fig. \[band\_bz\_chi0:fig\]. It is important to stress that the FS does not show good nesting properties at the wavevector ${\bf q}={\bf K}$ (the BZ corner). This feature is shared by all LDA calculations on similar systems.[@nature; @Sandro; @Sn_carpinelli] Albeit small, the bandwidth $W$ of the surface band is much greater than one would predict from the direct overlap of adatom dangling bonds, as the adatoms are very widely spaced, for instance about $7\AA$ apart on Ge(111). Hopping is indirect, and takes place from the adatom to the first-layer atoms underneath, from these to a second-layer atom, then again to a first-layer atom underneath the other adatom, and from there finally to the other adatom's dangling bond. Thus, when expressed in terms of elementary hopping processes between hybrid orbitals, electron hopping between two neighboring adatom dangling bonds is fifth order. As a result, the final dispersion of the surface state band strongly parallels that of the closest bulk band, the valence band. Correspondingly, hybridization effects of the dangling bond orbitals with first-, second-, and even third-layer bulk orbitals are strong, as shown by the extension into the bulk of the Wannier orbital associated with the LDA surface band (Fig. \[wannier:fig\]). In spite of this, we can still associate with every adatom a Wannier orbital and write the effective Hamiltonian for the surface band as follows: $$H \,=\, \sum_{{\bf k}}^{BZ} \sum_{\sigma} \epsilon_{\bf k} c^{\dagger}_{{\bf k},\sigma} c_{{\bf k},\sigma} \,+\, H_{\rm ph} \,+\, H_{\rm e-ph} \,+\, H_{\rm int} \;,$$ where $c^{\dagger}_{{\bf k},\sigma}$ is the Fourier transform of the Wannier orbital, namely the surface state in a Bloch picture.
The sum over the wavevectors runs over the surface BZ. $H_{\rm int}$ includes correlation effects which are not correctly accounted for within LDA, which we parametrize as follows: $$H_{\rm int} = U \sum_{{\bf r}} n_{{\bf r},\uparrow} n_{{\bf r},\downarrow} + \frac{1}{2} \sum_{{\bf r}\ne {\bf r}'} V_{{\bf r}-{\bf r}'} (n_{{\bf r}}-1) (n_{{\bf r}'}-1) \;.$$ Here $U$ is an effective repulsion (Hubbard-$U$) for two electrons on the same adatom Wannier orbital, and $V_{{\bf r}-{\bf r}'}$ is the direct Coulomb interaction between different sites ${\bf r}$ and ${\bf r}'$.[@non-diag:nota] Let $V$ be the n.n. value of $V_{{\bf r}-{\bf r}'}$, which is, clearly, the largest term. We have considered two models for $V_{{\bf r}-{\bf r}'}$: a model (A) in which we truncate $V_{{\bf r}-{\bf r}'}$ to n.n., and a model (B) in which $V_{{\bf r}-{\bf r}'}$ has a long-range Coulombic tail of the form $$V_{{\bf r}-{\bf r}'} \,=\, \frac{a V}{|{\bf r}-{\bf r}'|} \;,$$ where $a$ is the n.n. distance. The results for model B are qualitatively similar to those of A, and will be only briefly discussed later on. In other words, even though most of the detailed results in this paper are based on the n.n. $V_{{\bf r}-{\bf r}'}$, their validity is more general. LDA estimates of the [*bare*]{} Coulomb repulsion $U_o$ and $V_o$ between two electrons respectively on the same and on neighboring Wannier orbitals are – for our test case of Si(111)/Si – of about $3.6$ eV and $1.8$ eV respectively.[@Sandro] Screening effects by the underlying bulk are expected to reduce these repulsive energies very substantially. An order of magnitude estimate for $U$ and $V$ is obtained by dividing their bare values by the image-charge screening factor, $(\epsilon +1)/2\approx 6$, yielding, for Si, $U=0.6$ eV ($10 t_1$), and $V=0.3$ eV ($5 t_1$).
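The order-of-magnitude arithmetic behind these numbers can be written out explicitly; a sketch, where the dielectric constant used for Si is an approximate textbook value and $t_1$ is inferred from the quoted $U = 10\,t_1$:

```python
# Image-charge screening estimate for the effective couplings on Si(111)/Si.
U_bare, V_bare = 3.6, 1.8      # eV: bare same-site / n.n. repulsion (LDA estimate)
eps_Si = 11                    # approximate bulk dielectric constant of Si
screen = (eps_Si + 1) / 2      # image-charge screening factor, ~6

U = U_bare / screen            # screened on-site repulsion, eV
V = V_bare / screen            # screened n.n. repulsion, eV
t1 = 0.06                      # eV; inferred from the quoted U = 10*t1

print(U, V)                    # -> 0.6, 0.3 eV
print(U / t1, V / t1)          # -> 10*t1, 5*t1
```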
Corresponding values would be somewhat smaller for Ge(111), in view of a very similar dispersion [@Sn_scandolo] and of a ratio of about 4/3 between the dielectric constants of Ge and Si. For SiC(0001), the opposite is true. The surface state band is extremely narrow, of order $0.3$ eV,[@pollmann] while the bulk dielectric constant is only about $6.5$. As for the e-ph interaction, in principle both the on-site Wannier state energy and the hopping matrix elements between neighbors depend on the positions of the adatoms. Within the deformation potential approximation, we consider only a linear dependence of the on-site energy on a single ionic coordinate (for instance, the height $z_{\bf r}$ of the adatom measured from the equilibrium position), and take $$\label{e-ph-ham:eqn} H_{\rm e-ph} = -g \sum_{{\bf r}} z_{\bf r} (n_{\bf r}-1) \;,$$ with $g$ of the order of $1$ eV/$\AA$. The free-phonon term will have the usual form $$H_{\rm ph} = \sum_{\bf k}^{BZ} \hbar \omega_{\bf k} \left( b^{\dagger}_{\bf k} b_{\bf k} + \frac{1}{2} \right) \;,$$ where $b_{\bf k}$ is the phonon annihilation operator, and $\hbar \omega_{\bf k}$ a typical phonon frequency of the system, which we take to be about 30 meV, independent of [**k**]{}. Phase diagram: some limiting cases {#pd_no_g:sec} ================================== Preliminary to the full treatment of Sect. \[hf:sec\], we consider first the purely electronic problem in the absence of e-ph interaction. We start the discussion with particular limiting cases for which well-controlled statements, or at least intuitively clear ones, can be made, without the need of any new specific calculations. In the Appendix we will also consider, because it is useful in connection with the electron-phonon case, the unphysical limit of strong on-site attraction (large and negative $U$). Large positive $U$: the Mott insulator.
{#large_u:sec} --------------------------------------- For $U\gg V,W$, the system is deep inside the Mott insulating regime.[@Anderson_SE] The charge degrees of freedom are frozen, with a gap of order $U$. The only dynamics is in the spin degrees of freedom. Within the large manifold of spin-degenerate states with exactly one electron per site, the kinetic energy generates, in second-order perturbation theory, a Heisenberg spin-1/2 antiferromagnetic effective Hamiltonian governing the [*spin*]{} degrees of freedom, $$H_{\rm eff} = \sum_{(ij)} J_{ij} \, {\bf S}_{{\bf r}_i} \cdot {\bf S}_{{\bf r}_j} \;,$$ with $J_{ij}=4|t_{ij}|^2/U$.[@Anderson_SE] For our test case of Si(111)/Si, the values of the hoppings are such that $J_1 \approx 20$ meV, $J_2/J_1\approx 0.12$, while the remaining couplings $J_3,\cdots$ are very small. Antiferromagnetism is frustrated on the triangular lattice. Zero-temperature long-range order (LRO) – if present – should be of the three-sublattice $120^\circ$-Néel type, which can also be seen as a commensurate spiral spin density wave (s-SDW). Because it does not imbalance charge, this state is not further affected by electron-phonon coupling. In summary, we expect for large values of $U$ a wide-gap Mott insulator with an s-SDW (spins lying in a plane, forming $120^\circ$ angles), a $3\times 3$ [*magnetic*]{} unit cell, but uniform charge (no CDW). This is, most likely, the state to be found on the Si-terminated and C-terminated SiC(0001) surface at $T=0$.[@Northrup; @Anisimov] Strong inter-site repulsion: an asymmetric CDW with three inequivalent sites. {#large_v:sec} ----------------------------------------------------------------------------- The e-ph coupling can effectively reduce $U$, but not $V$. Therefore, it is of interest to consider the hypothetical regime $W<U\ll V$.
When the first-neighbor electron-electron repulsion $V$ is large, the system, in order to minimize the interaction energy, will prefer a $3\times 3$ CDW-like ground state, with two electrons on one sublattice (A), a single electron on another sublattice (B), and zero electrons on the third sublattice (C) (see Fig. \[triang\_cdw:fig\]). These states are still highly degenerate (in the absence of hopping) due to spin degeneracy for the single unpaired electron on sublattice B. A gap $U$ separates these states from the lowest-energy excited configurations (see Fig. \[triang\_cdw:fig\]). The spin degeneracy can be removed in second-order perturbation theory, owing to $t_2$, which leads to an effective spin-1/2 Heisenberg Hamiltonian within sublattice B, $$H_{\rm eff} = J \sum_{(ij)}^{\rm sublattice \, B} {\bf S}_{{\bf r}_i} \cdot {\bf S}_{{\bf r}_j} \;,$$ with a weak antiferromagnetic exchange constant $J=4t^2_2/U$.[@Anderson_SE] Summarizing, we expect in this regime a strong $3\times 3$ asymmetric CDW (a-CDW) with three inequivalent sites ($\phi_{\rho}\approx \pi/6$, see below), and a spiral $3\sqrt{3}\times 3\sqrt{3}$ SDW, governing the unpaired electron spins, superimposed on it. Notice that, while the charge periodicity is $3\times 3$, the actual unit cell is larger, i.e., $3\sqrt{3}\times 3\sqrt{3}$. Despite having the correct charge periodicity, namely $3\times 3$, this a-CDW is not compatible with the experimental findings on Pb-Sn/Ge, which show a symmetric CDW. We conclude that the low-temperature CDW state of these systems is not completely dominated by $V$. Mean-field theory. {#hf:sec} ================== In order to get a more complete picture of additional phases for smaller $U$, and of the possible phase diagram of the model, we now turn to a quantitative mean-field analysis. The first issue is to include the possibility of magnetic correlations.
For small values of the interactions $U$ and $V$, the Stoner criterion can be used to study the possible magnetic instabilities of the paramagnetic metal obtained from LDA calculations. The charge and spin susceptibilities are given, within the random phase approximation,[@Mahan] by $$\begin{aligned} \chi_C(\bf q) &\,=\,& \frac{2 \chi_o({\bf q}) }{1 + (U+2V_{\bf q})\chi_o({\bf q})} \nonumber \\ \chi_S(\bf q) &\,=\,& \frac{\chi_o({\bf q}) }{1 - U\chi_o({\bf q})} \;,\end{aligned}$$ where $\chi_o$ is the non-interacting susceptibility per spin projection, and both factors of $2$ account for spin degeneracy. The divergence of $\chi_S$ is governed, in this approximation, by $U$ only. Since $\chi_o({\bf q})$ is finite everywhere, a finite $U$ is needed in order to destabilize the paramagnetic metal. The wavevector ${\bf q}^*$ at which $\chi_S$ first diverges, upon increasing $U$, is in general incommensurate with the underlying unit cell. The instability is towards an incommensurate, metallic, spiral SDW.[@KMurthy] Fig. \[band\_bz\_chi0:fig\] shows that, in our case, ${\bf q}^*=(1.32 K,0)$ (with $K=4\pi/3a$, the BZ corner). We get $U_c^{HF}/t_1\approx 3.7$. (The other maximum of $\chi_o$ at ${\bf q}=(0.525 K,0)$ is very close to the result obtained for the triangular lattice with n.n. hopping only.[@KMurthy]) As for the charge susceptibility, a divergence can be caused only by an attractive Fourier component of the potential $V_{\bf q}$. $V_{\bf q}$ has a minimum at the BZ corners $\pm {\bf K}$, with $V_{\pm {\bf K}}=-3V$ for the n.n. model (A) ($V_{\pm {\bf K}}\approx -1.5422 V$ if a Coulomb tail is added, model B). This minimum leads to an instability towards a $3\times 3$ CDW when $(U+2V_{{\bf K}})\chi_o({\bf K})=-1$, i.e., given our value of $\chi_o({\bf K})\approx 0.2/t_1$, $(U+2V_{{\bf K}})\approx -5t_1$. For model A we get a transition, when $U=0$, at $V_c^{MF}/t_1\approx 0.83$.
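The nearest-neighbor Fourier component and the resulting critical coupling can be checked in a few lines; a sketch, with $\chi_o({\bf K}) = 0.2/t_1$ taken from the text and $t_1$ as the energy unit:

```python
import numpy as np

# Fourier transform of the n.n. interaction on the triangular lattice (a = 1):
# V_q = 2V [cos(qx) + 2 cos(qx/2) cos(sqrt(3) qy/2)], here in units of V.
def Vq(qx, qy, V=1.0):
    return 2 * V * (np.cos(qx) + 2 * np.cos(qx / 2) * np.cos(np.sqrt(3) * qy / 2))

t1 = 1.0
chi0_K = 0.2 / t1                 # chi_0(K) read off the calculated band structure

VK = Vq(4 * np.pi / 3, 0.0)       # -> -3, the minimum at the BZ corner K
# Charge instability at U = 0: (U + 2 V_K) chi_0(K) = -1  =>  6 Vc chi_0(K) = 1
Vc = 1.0 / (6 * chi0_K)           # -> ~0.83 t1, as quoted for model A
print(VK, Vc)
```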
In general, the small coupling paramagnetic metal is surrounded by an intermediate coupling region, where complicated incommensurate – generally metallic – solutions occur. For stronger $U$ and $V$, commensurate solutions are privileged.[@KMurthy] In view of the fact that a $3\times 3$ CDW is experimentally relevant, we concentrate our analysis on the simplest commensurate phases. These are easy to study with a standard Hartree-Fock (HF) mean-field theory. In particular, we restrict ourselves to order parameters associated with non-vanishing momentum space averages of the type $\langle c^{\dagger}_{{\bf k},\sigma} c_{{\bf k},\sigma '} \rangle$ and $\langle c^{\dagger}_{{\bf k},\sigma} c_{{\bf k}\pm{\bf K},\sigma '}\rangle$. Possible non-vanishing order parameters are the uniform magnetization density ${\bf m}$, $$\label{m:eqn} {\bf m} = \frac{1}{N_s} \sum_{\bf k}^{BZ} \sum_{\alpha,\beta} \langle c^{\dagger}_{{\bf k},\alpha} ({\vec \sigma})_{\alpha\beta} c_{{\bf k},\beta} \rangle = \frac{2}{N_s} \langle {\bf S}_{\rm tot} \rangle \;,$$ the ${\bf K}$-component of the charge density, $$\label{rho:eqn} \rho_{\bf K} = \frac{1}{N_s} \sum_{\bf k}^{BZ} \sum_{\sigma} \langle c^{\dagger}_{{\bf k},\sigma} c_{{\bf k}-{\bf K},\sigma} \rangle \;,$$ and the ${\bf K}$-component of the spin density $${\bf S}_{\bf K} = \frac{1}{N_s} \sum_{\bf k}^{BZ} \sum_{\alpha,\beta} \langle c^{\dagger}_{{\bf k},\alpha} \frac{({\vec \sigma})_{\alpha\beta}}{2} c_{{\bf k}-{\bf K},\beta} \rangle \;.$$ Note that only $\rho_{\bf K}$ and ${\bf S}_{\bf K}$ are $3\times 3$ periodic. Moreover, $K$-components of bond order parameters of the type $\langle c^{\dagger}_{{\bf r},\sigma} c_{{\bf r}',\sigma '} \rangle$ are automatically included in the calculation. $\rho_{\bf K}$ and ${\bf S}_{\bf K}$ have phase freedom, and are generally complex: $\rho_{\bf K}=|\rho_{\bf K}| e^{i\phi_{\rho}}$, etc. The role of the phase is clarified by looking at the real-space distribution within the $3\times 3$ unit cell. 
For the charge, for instance, $\langle n_{{\bf r}_j}\rangle=1+2|\rho_{\bf K}|\cos{(2\pi p_j/3+\phi_{\rho})}$, where $p_j=0,1,2$, respectively, on sublattice A, B, and C. The e-ph coupling is included but, after linearization, the displacement order parameter is not independent, and is given by $\langle z_{\bf K}\rangle=(g/M\omega_{\bf K}^2)\rho_{\bf K}$. Only the phonon modes at $\pm{\bf K}$ couple directly to the CDW. The phonon part of the Hamiltonian can be diagonalized by displacing the oscillators at $\pm{\bf K}$. This gives just an extra term in the electronic HF Hamiltonian of the form $\Delta U (\rho^*_{\bf K} {\hat \rho}_{\bf K} + {\rm H.c.})$, with an energy $\Delta U=-g^2/M\omega^2_{\bf K}$ which is the relevant coupling parameter. This term acts, effectively, as a negative-$U$ contribution acting only on the charge part of the electronic Hamiltonian. With the previous choice of non-vanishing momentum space averages, the Hartree-Fock Hamiltonian reads: $$\begin{aligned} \label{hf_ham:eqn} H_{\rm H-F} &\,=\,& \sum_{{\bf k}}^{BZ} \sum_{\sigma} \epsilon_{\bf k} n_{{\bf k},\sigma} \,-\, U \, {\bf m} \cdot {\bf S}_{\rm tot} \nonumber \\ && + \sum_{\bf k}^{BZ} \sum_{\sigma} \left\{ \left[ \left( \frac{U}{2} + V_{\bf K} -\frac{g^2}{M\omega^2_{\bf K}} \right) \rho_{\bf K} - \sigma U S^z_{\bf K} \right] c^{\dagger}_{{\bf k},\sigma} c_{{\bf k}+{\bf K},\sigma} + {\rm H.c.} \right\} \nonumber \\ && - U \sum_{\bf k}^{BZ} \left\{ S^+_{\bf K} c^{\dagger}_{{\bf k},\downarrow} c_{{\bf k}+{\bf K},\uparrow} \,+\, S^-_{\bf K} c^{\dagger}_{{\bf k},\uparrow} c_{{\bf k}+{\bf K},\downarrow} \,+\, {\rm H.c.} \right\} \nonumber \\ && + \sum_{\bf k}^{BZ} \sum_{\sigma} \left\{ A^{(\sigma\sigma)}_{\bf k} c^{\dagger}_{{\bf k},\sigma} c_{{\bf k},\sigma} \,+\, \left[ B^{(\sigma\sigma)}_{\bf k} c^{\dagger}_{{\bf k},\sigma} c_{{\bf k}+{\bf K},\sigma} + {\rm H.c.} \right] \right\} \nonumber \\ && + \sum_{\bf k}^{BZ} \sum_{\sigma} \left\{ A^{({\bar \sigma}\sigma)}_{\bf k} c^{\dagger}_{{\bf 
k},{\bar \sigma}} c_{{\bf k},\sigma} \,+\, \left[ B^{({\bar \sigma}\sigma)}_{\bf k} c^{\dagger}_{{\bf k},{\bar \sigma}} c_{{\bf k}+{\bf K},\sigma} + {\rm H.c.} \right] \right\} \;.\end{aligned}$$ The last two terms are exchange contributions arising from the $V$-term; $A^{(\sigma'\sigma)}_{\bf k}$ and $B^{(\sigma'\sigma)}_{\bf k}$ are shorthands for the following convolutions: $$\begin{aligned} \label{AB:eqn} A^{(\sigma'\sigma)}_{\bf k} &\,=\,& - \frac{1}{N_s} \sum_{{\bf k}'}^{BZ} V_{{\bf k}-{\bf k}'} \, \langle c^{\dagger}_{{\bf k}',\sigma} c_{{\bf k}',\sigma'} \rangle \nonumber \\ B^{(\sigma'\sigma)}_{\bf k} &\,=\,& - \frac{1}{N_s} \sum_{{\bf k}'}^{BZ} V_{{\bf k}-{\bf k}'} \, \langle c^{\dagger}_{{\bf k}'+{\bf K},\sigma} c_{{\bf k}',\sigma'} \rangle \;.\end{aligned}$$ The BZ is divided into three regions: a reduced BZ (RBZ), and the two zones obtained as ${\bf k}\pm{\bf K}$ with ${\bf k}\in$ RBZ. The HF problem in Eq. \[hf\_ham:eqn\] reduces to the self-consistent diagonalization of a $6\times 6$ matrix (including spin) for each ${\bf k}\in$ RBZ. Landau theory {#Landau:sec} ------------- The mean-field solutions must be compatible with the symmetry of the problem.
Before discussing the HF phase diagram we obtain, it is useful to present a few general phenomenological considerations based on a symmetry analysis of the Landau theory built from the CDW order parameter $\rho_{\bf K}$ (a complex scalar), the SDW order parameter ${\bf S}_{\bf K}$ (a complex vector), and the uniform magnetization ${\bf m}$ (a real vector).[@Toledano] In the absence of spin-orbit coupling, the possible contributions to the Landau free energy $F$ allowed by symmetry, up to fourth order, have the form $$\begin{aligned} F &\,=\,& \frac{1}{2} a_{\rho} |\rho_{\bf K}|^2 \,+\, \frac{1}{2} a_{m} |{\bf m}|^2 \,+\, \frac{1}{2} a_{s} |{\bf S}_{\bf K}|^2 \,+\, F_{3} \,+\, F_{4} \nonumber \\ F_3 &\,=\,& ( B_{\rho} \rho_{\bf K}^3 + {\rm c.c.} ) \,+\, [ B_{\rho s} \rho_{\bf K} ({\bf S}_{\bf K} \cdot {\bf S}_{\bf K} ) \,+\, {\rm c.c.} ] \nonumber \\ F_4 &\,=\,& b_{\rho} |\rho_{\bf K}|^4 + b_{m} |{\bf m}|^4 \,+\, b_s^{(1)} |{\bf S}_{\bf K}|^4 \,+\, b_s^{(2)} ({\bf S}_{\bf K} \times {\bf S}^*_{\bf K})^2 \nonumber \\ && +\, b_{\rho s} |\rho_{\bf K}|^2 |{\bf S}_{\bf K}|^2 +\, b_{\rho m} |\rho_{\bf K}|^2 |{\bf m}|^2 +\, b_{m s}^{(1)} |{\bf m}|^2 |{\bf S}_{\bf K}|^2 +\, b_{m s}^{(2)} ({\bf m} \cdot {\bf S}_{\bf K}) ({\bf m} \cdot {\bf S}^*_{\bf K}) \nonumber \\ && +\, [b_{m s}^{(3)} ({\bf m} \cdot {\bf S}_{\bf K}) ({\bf S}_{\bf K} \cdot {\bf S}_{\bf K}) + {\rm c.c.}] \,+\, [b_{\rho m s} \rho^2_{\bf K} ({\bf m} \cdot {\bf S}_{\bf K}) + {\rm c.c.}] \;,\end{aligned}$$ with $|{\bf S}_{\bf K}|^2=({\bf S}_{\bf K} \cdot {\bf S}^*_{\bf K})$. Notice that third order invariants are present due to commensurability, $3{\bf K}={\bf G}$ (reciprocal lattice vector). Therefore, first order transitions are generally possible.[@Toledano] This expansion suggests a number of additional comments: [*i*]{}) A CDW can occur without accompanying magnetism, i.e., $\rho_{\bf K}\ne 0$, while ${\bf m}=0$ and ${\bf S}_{\bf K}=0$.
This is the case, as we shall see later, for the small $U$ region of the HF phase diagram. [*ii*]{}) The possible SDW phases are either collinear (l-SDW) (for which $({\bf S}_{\bf K} \times {\bf S}^*_{\bf K})=0$) or coplanar.[@Kivelson] The latter have, with a suitable choice of the phases, ${\bf S}^x_{\bf K}=|{\bf S}_{\bf K}|\cos{\alpha}$ and ${\bf S}^y_{\bf K}=-i |{\bf S}_{\bf K}|\sin{\alpha}$, and can be generally described as a spiral SDW (s-SDW) $$\langle {\bf S}_{\bf r} \rangle \,=\, 2 |{\bf S}_{\bf K}| \, [ {\hat{\bf x}} \cos{\alpha} \, \cos{({\bf K}\cdot {\bf r})} - {\hat{\bf y}} \sin{\alpha} \, \sin{({\bf K}\cdot {\bf r})} ] \;,$$ with an eccentricity parameter $\alpha\ne 0,\pi/2$. ($\alpha=0$ or $\pi/2$ are actually l-SDW along the ${\hat{\bf x}}$ or ${\hat{\bf y}}$ directions.) $\alpha=\pi/4$ describes a circular spiral SDW. Now, the only possibility of having a SDW without CDW is via a circular spiral SDW ($\alpha=\pi/4$). Indeed, the third order invariant $[B_{\rho s} \rho_{\bf K} ({\bf S}_{\bf K} \cdot {\bf S}_{\bf K})+{\rm c.c.}]$ vanishes by symmetry only for a circular spiral SDW, for which $({\bf S}_{\bf K} \cdot {\bf S}_{\bf K})=0$; in all other cases, a SDW implies – if $B_{\rho s}\ne 0$ – a CDW as well. [*iii*]{}) The simultaneous presence of a SDW and a CDW implies, generally, a finite magnetization $\bf m$, via the fourth order invariant $[b_{\rho m s} \rho^2_{\bf K} ({\bf m} \cdot {\bf S}_{\bf K})+{\rm c.c.}]$, unless the phases of $\rho$ and $S$ are such that $2\phi_{\rho}+\phi_{\sigma}=\pi/2+n\pi$. This happens in phase E of our phase diagram, which has therefore no uniform magnetization. [*iv*]{}) The presence of a SDW leads, generally, to a finite uniform magnetization as well, via the fourth order invariant $[b_{m s}^{(3)} ({\bf m} \cdot {\bf S}_{\bf K}) ({\bf S}_{\bf K} \cdot {\bf S}_{\bf K}) + {\rm c.c.}]$, unless the phase $\phi_{\sigma}$ is such that $3\phi_{\sigma}=\pi/2+m\pi$. 
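Point *ii)* above can be verified in a few lines: with the coplanar parametrization ${\bf S}_{\bf K}=|{\bf S}_{\bf K}|({\hat{\bf x}}\cos\alpha - i\,{\hat{\bf y}}\sin\alpha)$, the invariant $({\bf S}_{\bf K}\cdot{\bf S}_{\bf K})$ vanishes only for the circular spiral; a minimal numerical check:

```python
import numpy as np

# Coplanar SDW order parameter S_K = |S| (x_hat cos(a) - i y_hat sin(a)).
# The third-order invariant B_rho_s * rho_K * (S_K . S_K) couples the SDW to a
# CDW unless (S_K . S_K) = 0.
def S_dot_S(alpha, S=1.0):
    SK = S * np.array([np.cos(alpha), -1j * np.sin(alpha)])
    return SK @ SK               # "dot" without complex conjugation

print(S_dot_S(np.pi / 4))        # circular spiral: ~0, no CDW forced
print(S_dot_S(0.0))              # collinear l-SDW: 1, drives a CDW
```

The invariant equals $\cos 2\alpha$, so it vanishes exactly at $\alpha=\pi/4$ and only there within the coplanar family.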
Phase diagram in the Hartree-Fock approximation {#HF_pd:sec} ----------------------------------------------- We present a brief summary of the mean-field HF calculations for arbitrary $U$, $V$, and $g$, obtained by solving numerically the self-consistent problem in Eqs. \[m:eqn\]-\[AB:eqn\]. The main phases present in the HF phase diagram are shown in Fig. \[hf\_pd:fig\] for the case of $g=0$. The effect of $g\ne 0$ will be discussed further below. [**Phase A: Spiral SDW insulating phase.**]{} The circular spiral SDW (phase A) dominates the large $U$, small $V$ part of the phase diagram, as expected from the Heisenberg model mapping at $U\to\infty$ (see sect. \[large\_u:sec\]). This is the Mott insulator phase, probably relevant for SiC. Its HF bands are shown in Fig. \[hf\_bands:fig\](a). [**Phase A’: Collinear SDW with $m^z=1/3$ insulating phase.**]{} This is another solution of the HF equations in the large $U$, small $V$ region. It is an insulating state characterized by a linear l-SDW plus a small CDW with $\phi_{\rho}=0$, accompanied by a magnetization $m^z=1/3$ (phase A’). This collinear state lies above the s-SDW by only a small energy difference (of order $0.03 t_1$ per site), and could be stabilized by other factors (e.g., spin-orbit). A recent LSDA calculation for $\sqrt{3}$-Si/Si(111) has indicated this l-SDW as the ground state, at least if spins are forced to be collinear.[@Sandro] The HF bands for this solution are shown in Fig. \[hf\_bands:fig\](b), and are very similar to the LSDA surface band for Si/Si(111). The phase $\phi_{\rho}=0$ of the CDW order parameter corresponds to a real-space charge distribution in which one sublattice has a charge $1+2|\rho_{\bf K}|$, while the remaining two are equivalent and have charges $1-|\rho_{\bf K}|$, compatible with the experimental findings on Sn/Ge(111) and Pb/Ge(111). The amplitude $|\rho_{\bf K}|$ of the CDW is in general quite small in this phase. 
It should be noted, however, that an STM map is not simply a direct measure of the total charge density.[@Selloni; @Tosatti:high] This will be discussed in sect. \[stm:sec\]. [**Phase B’: Asymmetric CDW with $m^z=1/3$ insulating phase.**]{} By increasing the n.n. repulsion $V$, the energies of the s-SDW and of the l-SDW approach each other until they cross at a critical value $V_c$ of $V$. At $U/t_1=10$ we find $V_c/t_1\approx 3.3$ for model A, $V_c/t_1\approx 6.6$ for model B. For $V>V_c$, however, an insulating asymmetric CDW (a-CDW) prevails. This is simply the spin-collinear version of the non-collinear phase described in Sect. \[large\_v:sec\]. Fig. \[e\_cdw:fig\] shows the energy per site of the most relevant HF solutions at $U/t_1=10$ as a function of $V$ for model B (Coulomb tail case). The s-SDW and the l-SDW cross at $V_c\approx 6.6 t_1$ where, however, the a-CDW insulating solution starts to be the favored one. This large-$V$ solution has a large CDW order parameter with $\phi_{\rho}\ne 0$ (mod. $2\pi/3$), a concomitant l-SDW, and $m^z=1/3$. By recalling the discussion in sect. \[large\_v:sec\], we notice that a state with a magnetization $m^z=1/3$ and an l-SDW is the best HF solution once a $3\times 3$ restriction has been applied, since a spiral SDW on the singly occupied sublattice would involve a larger periodicity (phase B). [**Phase D: Symmetric non-magnetic CDW metallic phase.**]{} For small values of $U$ and $V$, or for large enough e-ph coupling $g$, a [*metallic*]{} CDW with $\phi_{\rho}=0$ (m-CDW) is found. (See Fig. \[hf\_bands:fig\](c) for the HF bands.) This phase constitutes an alternative candidate to the magnetic phase B’, compatible with the main experimental facts, and might be relevant for the case of Pb/Ge(111) and of Sn/Ge(111). The degree of metallicity of this phase is much reduced relative to the undistorted surface (pseudo-gap).
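The role of the phase $\phi_{\rho}$ in distinguishing the symmetric from the asymmetric CDW can be made concrete with the real-space formula $\langle n_{{\bf r}_j}\rangle=1+2|\rho_{\bf K}|\cos{(2\pi p_j/3+\phi_{\rho})}$ quoted earlier; a small sketch, with illustrative $|\rho_{\bf K}|$ values:

```python
import numpy as np

# Sublattice charges <n_j> = 1 + 2|rho_K| cos(2*pi*p_j/3 + phi_rho),
# p_j = 0, 1, 2 on sublattices A, B, C.
def charges(rho, phi):
    return np.array([1 + 2 * rho * np.cos(2 * np.pi * p / 3 + phi) for p in range(3)])

# phi_rho = 0: symmetric CDW, one "up" site and two equivalent "down" sites.
print(charges(0.4, 0.0))                    # ~[1.8, 0.6, 0.6]

# phi_rho = pi/6 at |rho_K| = 1/sqrt(3): three inequivalent sites with charges
# {2, 1, 0} (up to a relabeling of sublattices), the strong a-CDW limit.
print(charges(1 / np.sqrt(3), np.pi / 6))
```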
We stress that the e-ph interaction can stabilize the $\phi_{\rho}=0$ m-CDW also at relatively large $U$, by countering $U$ with a large negative $\Delta U=-g^2/M\omega^2_{\bf K}$. We demonstrate this in Fig. \[e\_u8v2\_ph:fig\], where we plot the energy per site as a function of $\Delta U$ at $U/t_1=8$ and $V/t_1=2$, for the three relevant HF solutions, i.e., the spiral SDW (phase A), the collinear SDW with $m^z=1/3$ (phase A’), and the metallic non-magnetic CDW (phase D). The spiral SDW is unaffected by the electron-phonon coupling. The energy of the collinear SDW with $m^z=1/3$ improves slightly with increasing $g$, due to the small CDW amplitude of this phase. This effect is not large enough to make this phase stable in any range of couplings. At a critical value of $g$, the metallic non-magnetic CDW (where the CDW order parameter is large, $|\rho_{\bf K}| \sim 0.5$) wins over the magnetic phases. The Fourier transform of the lattice distortion at ${\bf K}$ is given by $\langle z_{\bf K}\rangle=(g/M\omega_{\bf K}^2)\rho_{\bf K} =\rho_{\bf K} |\Delta U|/g$. A rough estimate shows that the order of magnitude of the electron-phonon coupling necessary to stabilize the CDW phase is not unreasonable. With $g=1$ eV/$\AA$, $M_{\rm Si}=28$, and $\hbar\omega_{\bf K}\approx 30$ meV we get $\Delta U \approx -3 t_1$, sufficient to switch from an s-SDW ground state to an m-CDW for $U/t_1=8$ and $V/t_1=2$. With these values of the parameters we have $|\rho_{\bf K}| \approx 0.43$, and we estimate $|\langle z_{\bf K}\rangle| \approx 0.07 \AA$. This corresponds, since $\langle z_{\bf r} \rangle \sim 2\cos({\bf K}\cdot {\bf r}) |\langle z_{\bf K}\rangle|$, to a total displacement between the adatom going up and the two going down of $\Delta z = 3 |\langle z_{\bf K}\rangle| \approx 0.2\AA$. We notice that values of $g$ much larger than those used in Fig. \[e\_u8v2\_ph:fig\] would eventually stabilize a superconducting ground state (see Appendix).
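The deformation-potential estimate above can be reproduced step by step; a sketch involving unit conversions only, with all input values taken from the text:

```python
# Deformation-potential estimate Delta_U = -g^2 / (M omega_K^2) for Si adatoms.
hbar = 6.582e-16        # eV * s
amu = 1.6605e-27        # kg
eV_J = 1.602e-19        # J per eV

g = 1.0                 # eV / Angstrom (quoted coupling)
M = 28 * amu            # kg (Si mass)
omega = 0.030 / hbar    # rad/s, from hbar*omega_K ~ 30 meV

# Effective "spring constant" M omega^2, converted to eV / Angstrom^2:
k_spring = M * omega ** 2 * 1e-20 / eV_J    # ~6 eV/A^2

dU = -g ** 2 / k_spring     # eV; ~-0.17 eV
t1 = 0.06                   # eV (U = 10 t1 with U = 0.6 eV)
print(dU / t1)              # ~-2.8, consistent with the quoted ~-3 t1

# Resulting lattice distortion, |z_K| = |rho_K| |Delta_U| / g, for |rho_K| = 0.43:
zK = 0.43 * abs(dU) / g     # Angstrom
print(zK, 3 * zK)           # ~0.07 A and total displacement ~0.2 A
```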
CDW order parameter and STM experiments {#stm:sec} ======================================= We discuss, in the present section, the relationship between the CDW order parameter, as defined in Eq. \[rho:eqn\], and an STM map of the surface. As the crudest approximation to the tunneling current for a given bias $V_{\rm bias}$ we consider the integral of the charge density for one-electron states within $V_{\rm bias}$ from the Fermi level, weighted with the barrier tunneling factor $T(V)$,[@Selloni; @Tosatti:high] $$\label{stm:eqn} J(V_{\rm bias},{\bf r}=x,y;z) \approx \int_0^{V_{\rm bias}} dV \sum_{n{\bf k}} |\Psi_{n{\bf k}}({\bf r})|^2 \delta (E_{n{\bf k}}-E_F+V) T(V) \;.$$ The tunneling factor gives prominent weight to the states immediately close to the Fermi level. In view of the purely qualitative value of Eq. (\[stm:eqn\]), we have moreover decided to ignore $T(V)$ altogether and to account for its effect by reducing the bias voltage $V_{\rm bias}$ in Eq. (\[stm:eqn\]) to an effective value $V_{\rm bias}^{\rm eff}$. By doing this, we have extracted an “STM map” for a point in phase A’ ($U/t_1=9$ and $V/t_1=2$, model A) – a spin-density-wave state where the amplitude of the CDW order parameter is rather small, $|\rho_{\bf K}|=0.039$ – and a point in phase D ($U/t_1=4$ and $V/t_1=2$, model A) – a pure CDW where the order parameter is quite large, $|\rho_{\bf K}|=0.4$. The results for constant $z$, and $x,y$ moving from adatom A to B to C, are shown in Fig. \[stm:fig\](a) and (b), for the two cases. The solid curves refer to positive bias (current flowing from the sample to the tip), probing occupied states close to the Fermi level. The dashed curves refer to negative bias, probing unoccupied states. In both cases a) and b), one of the three atoms yields a larger current at positive bias, while the other two atoms have larger currents at negative bias.
The insets show the predicted “contrast” between the two peak values, $(J_1-J_2)/(J_1+J_2)$, $J_1$ and $J_2$ being in each case, respectively, the largest and the smallest of the STM peak currents at the positions of the adatoms. We notice the following points: i) for the occupied states (positive bias) the pure CDW phase has, as expected, a larger contrast than the magnetic phase. As we neglect the tunneling factors $T(V)$, in the limit of large positive effective bias we recover the total asymmetry in the charge of the two inequivalent atoms, $(n_1-n_2)/(n_1+n_2)$, indicated by a dashed horizontal line in the insets. Observe that the way this large-bias limit is reached is completely different for the two cases a) and b): in the magnetic case a) the contrast overshoots at small biases, attaining values substantially larger than the nominal CDW order parameter, and then goes to the limit $(n_1-n_2)/(n_1+n_2)$ from above; in the pure CDW case b), on the contrary, the limit is reached monotonically from below. ii) for empty states (negative bias) the contrast is even more surprising: at small bias it is very large in both cases a) and b). By increasing the bias, the contrast for the pure CDW case tends monotonically to a large value, whereas the magnetic case shows a strong non-monotonicity. These results suggest that one should look more carefully, and quantitatively, at the behavior of the asymmetry between STM peak currents as a function of the bias, including the region of relatively small biases: the different behavior of the asymmetry of the magnetic case versus the pure CDW case should be marked enough – and survive in a more refined analysis including $T(V)$ – to make the STM map a good way of discriminating between the two scenarios.
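The large-bias limit quoted in point i) reduces, for the symmetric CDW ($\phi_{\rho}=0$), to a one-line formula; a sketch using the two $|\rho_{\bf K}|$ values of the STM maps above:

```python
# Large-bias limit of the STM contrast with the tunneling factor neglected:
# the contrast tends to the total charge asymmetry (n1 - n2)/(n1 + n2), where
# for phi_rho = 0 the "up" adatom has n1 = 1 + 2|rho_K| and the two equivalent
# "down" adatoms have n2 = 1 - |rho_K|.
def asymmetry(rho):
    n1 = 1 + 2 * rho
    n2 = 1 - rho
    return (n1 - n2) / (n1 + n2)    # simplifies to 3*rho / (2 + rho)

print(asymmetry(0.039))   # phase A' (magnetic, weak CDW): ~0.06
print(asymmetry(0.4))     # phase D (pure CDW): ~0.5
```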
Discussion and conclusions {#discussion:sec} ========================== Within our model study, we have learned the following about the surfaces considered: \(i) If $U$ and $V$ are ignored, there is no straightforward electron-phonon-driven $3\times 3$ CDW surface instability. However, any phase involving a CDW, for example as a secondary order parameter attached to a primary SDW, can take advantage and gain some extra stabilization energy from a small surface lattice distortion, via electron-phonon coupling. \(ii) Electron-electron repulsion and the two-dimensional Fermi surface are capable of driving transitions of the undistorted metallic surface to a variety of states that are either insulating or in any case less metallic, some possessing the $3\times 3$ periodicity. \(iii) This can occur via two different mechanisms: a) the inter-site repulsion $V$ can stabilize insulating or semi-metallic CDWs, without a crucial involvement of spin degrees of freedom; b) the on-site repulsion $U$ can produce essentially magnetic insulators with or without a weak accompanying $3\times 3$ CDW, as required by symmetry. \(iv) For moderate $U$, of order $W$, and for smaller $V$, an interesting state is realized, with a large SDW and a small accompanying CDW. The state is either a small-gap insulator or a semi-metal, and may or may not be associated with a net overall magnetization, depending on the nature (linear or spiral, respectively) of the leading SDW. \(v) For $U$ and $V$ both small but finite, a metallic CDW without any magnetism is obtained. The same phase can also be stabilized for larger values of $U$ by the presence of a substantial electron-phonon coupling. We stress that, in this case, $V$ is the coupling responsible for the $3\times 3$ symmetry of the unit cell, whereas the role of the electron-phonon coupling is that of destroying magnetism by effectively decreasing $U$. Electron-phonon coupling alone is not sufficient to justify a commensurate $3\times 3$ CDW.
\(vi) Either of the phases in (iv) or (v) could be a natural candidate for explaining the weak $3\times 3$ CDW seen experimentally on Sn-Pb/Ge(111). \(vii) Finally, for large $U$ and small $V$ (in comparison with the bandwidth $W$) the Mott-Hubbard state prevails. It is a wide-gap insulator, with a pure spiral SDW, with $3\times 3$ overall periodicity, and coplanar $120^{\circ}$ long-range spin ordering at zero temperature. It possesses no net magnetization, and no accompanying CDW. \(viii) The above is the kind of state which we expect to be realized on SiC(0001), and also possibly on K/Si(111):B. Among existing experiments, we have addressed particularly photoemission and STM. Our calculated band structures for both the SDW/CDW state A’ (iv) and the pure CDW state D (v) exhibit features which are similar to those found in photoemission from Sn-Pb/Ge(111).[@Modesti_2; @Avila; @LeLay; @asensio] The simulated STM images for the two kinds of states are predicted to differ in their voltage dependence. Future experiments are strongly called for, aimed at detecting whether magnetic correlations are actually dominant, as we think is very likely, on all these surfaces, or whether Sn-Pb/Ge(111) are instead non-magnetic and electron-phonon driven. The issue of whether magnetic long-range order – which we definitely propose for SiC(0001) and K/Si(111):B at $T=0$, and also hypothesize for Sn-Pb/Ge(111) – survives up to finite temperatures is one which we cannot settle at this moment. This is due to the difficulty in estimating the surface magnetic anisotropy, without which order would of course be washed out by temperature. In any case, it should be possible to probe the possibility of either magnetism or incipient magnetism using the appropriate spectroscopic tools.
This line of experimental research, although undoubtedly difficult, should be very exciting since it might lead to the unprecedented discovery of magnetic states at surfaces possessing no transition metal ions of any kind, such as these seemingly innocent semiconductor surfaces. We acknowledge financial support from INFM, through projects LOTUS and HTSC, and from EU, through ERBCHRXCT940438. We thank S. Modesti, J. Lorenzana, M.C. Asensio, J. Avila, G. Le Lay, E.W. Plummer and his collaborators, for discussions. Appendix. Large negative $U$: a superconducting ground state. {#neg_u:sec} ============================================================= The limit of large negative $U$, $U\to -\infty$, is considered here to show that CDWs are not favored by on-site attraction alone. Instead, a superconducting ground state is favored.[@santos] To see this, consider the real-space states which are the low-energy configurations for $U\to -\infty$: they consist of $N_e/2$ sites (if $N_e$ is the number of electrons), each of which is occupied by a pair of electrons with opposite spins. The large degeneracy in this manifold of states is – once again, like in the $U\to \infty$ case – removed by kinetic energy in second-order perturbation theory. By assigning a pseudo-spin-1/2 state to each site (up if occupied by a pair, down if empty) one can show that the effective Hamiltonian is [@santos] $$H_{\rm eff} = -\sum_{(ij)} \frac{J^{\perp}_{ij}}{2} \, \left( S^+_{{\bf r}_i} S^-_{{\bf r}_j} \,+\, {\rm H.c.} \right) \,+\, \sum_{(ij)} J^{z}_{ij} \, S^z_{{\bf r}_i} S^z_{{\bf r}_j} \;,$$ with $J^{\perp}_{ij}=4|t_{ij}|^2/|U|$ and $J^{z}_{ij}=J^{\perp}_{ij}$. If $V$-terms are added, $J^z$ is modified to $J^{z}_{ij}=J^{\perp}_{ij}+4V_{ij}$. Restricting our consideration to the n.n. case, we are left with a n.n. Heisenberg Hamiltonian with a ferromagnetic xy-part and an antiferromagnetic z-part.
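The mapping from the microscopic parameters to the effective pseudo-spin couplings can be sketched as follows; the numerical values of $t$, $U$ and $V$ are arbitrary illustrations, not fitted model parameters.

```python
def xxz_couplings(t, u, v=0.0):
    """Effective pseudo-spin couplings in the U -> -infinity limit:
    J_perp = 4|t|^2/|U|  (ferromagnetic xy-part, favoring superconductivity),
    J_z    = J_perp + 4V (antiferromagnetic z-part, favoring the CDW)."""
    j_perp = 4.0 * abs(t) ** 2 / abs(u)
    j_z = j_perp + 4.0 * v
    return j_perp, j_z

# With V = 0 the effective model is isotropic; any finite V tips the
# anisotropy toward the z-part.  Parameter values here are arbitrary.
jp0, jz0 = xxz_couplings(t=1.0, u=-20.0)          # (0.2, 0.2)
jp1, jz1 = xxz_couplings(t=1.0, u=-20.0, v=0.5)   # J_z grows with V
```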
The sign of the xy-part cannot be changed at will by a canonical transformation because the lattice is non-bipartite. The result is that the order is in the plane (i.e., superconductivity wins) for small $V$. Only if $V$ is large enough will the CDW (i.e., order in the z-direction) be favored. Entirely similar considerations apply to the case of strong electron-phonon coupling, $g\to \infty$. [99]{} J. M. Carpinelli [*et al.*]{}, Nature [**381**]{}, 398 (1996). A. Goldoni, C. Cepek, S. Modesti, Phys. Rev. B [**55**]{}, 4109 (1997). A. Goldoni and S. Modesti, Phys. Rev. Lett. [**79**]{}, 3266 (1997); S. Modesti (private commun.). J. Avila, A. Mascaraque, E.G. Michel, and M.C. Asensio, Appl. Surf. Sci. [**123/124**]{}, 626 (1998). G. Le Lay [*et al.*]{}, Appl. Surf. Sci. [**123/124**]{}, 440 (1998). J. M. Carpinelli, H. H. Weitering, M. Bartkowiak, R. Stumpf, and E. W. Plummer, Phys. Rev. Lett. [**79**]{}, 2859 (1997). A. Mascaraque, J. Avila, E.G. Michel, and M.C. Asensio, Phys. Rev. B (to be published). H. H. Weitering [*et al.*]{}, Phys. Rev. Lett. [**78**]{}, 1331 (1997). L. I. Johansson [*et al.*]{}, Surf. Sci. [**360**]{}, L478 (1996); J.-M. Themlin [*et al.*]{}, Europhys. Lett. [**39**]{}, 61 (1997). G. Santoro, S. Sorella, F. Becca, S. Scandolo and E. Tosatti, Surf. Sci. [**402-404**]{}, 802 (1998). J.E. Northrup and J. Neugebauer, Phys. Rev. B [**57**]{}, R4230 (1998). V. Anisimov, G. Santoro, S. Scandolo, and E. Tosatti (work in progress). E. Tosatti and P. W. Anderson, in Proc. 2nd Int. Conf. on Solid Surfaces, ed. S. Kawaji, Jap. J. Appl. Phys., Pt. 2, Suppl. 2, 381 (1974); E. Tosatti, in [*Electronic surface and interface states on metallic systems*]{}, p. 67, eds. E. Bertel and M. Donath (World Scientific, Singapore, 1995). S. Scandolo [*et al.*]{}, to be published. S. Scandolo, F. Ancilotto, G. L. Chiarotti, G. Santoro, S. Serra, and E. Tosatti, Surf. Sci. [**402-404**]{}, 808 (1998). R. R. dos Santos, Phys. Rev. B [**48**]{}, 3976 (1993).
We observe that, in principle, non-diagonal terms, i.e., terms which cannot be recast in the form of a density-density interaction, should be included. However, the magnitude of such terms – which involve overlap integrals between Wannier orbitals at different adatoms – can be estimated to be considerably smaller than the diagonal terms we keep. M. Sabisch, P. Kruger, and J. Pollmann, Phys. Rev. B [**55**]{}, 10561 (1997). P. W. Anderson in [*Frontiers and Borderlines in Many-Particle Physics*]{}, Proc. E. Fermi Summer School in Varenna, July 1987 (North-Holland, Amsterdam, 1988). G.D. Mahan, [*Many-Particle Physics*]{}, 2nd ed. (Plenum Press, New York, 1990). H. R. Krishnamurthy [*et al.*]{}, Phys. Rev. Lett. [**64**]{}, 950 (1990); C. Jayaprakash [*et al.*]{}, Europhys. Lett. [**15**]{}, 625 (1991). See J. Tolédano and P. Tolédano, [*The Landau theory of phase transitions*]{} (World Scientific, Singapore, 1987). O. Zachar, S. A. Kivelson, and V. J. Emery (preprint). A. Selloni, P. Carnevali, E. Tosatti and C. D. Chen, Phys. Rev. B [**31**]{}, 2602 (1985). E. Tosatti, in [*Highlights in Condensed Matter Physics and Future Prospects*]{}, p. 631, ed. L. Esaki, Plenum Press (New York, 1991).
--- abstract: 'Using high-cadence extreme-ultraviolet (EUV) images obtained by the Atmospheric Imaging Assembly (AIA) on board the [*Solar Dynamics Observatory*]{}, we investigate the solar sources of 26 $^{3}$He-rich solar energetic particle (SEP) events at $\lesssim$1 MeV nucleon$^{-1}$ that were well-observed by the [*Advanced Composition Explorer*]{} during solar cycle 24. Identification of the solar sources is based on the association of $^{3}$He-rich events with type III radio bursts and electron events as observed by [*Wind*]{}. The source locations are further verified in EUV images from the [*Solar and Terrestrial Relations Observatory*]{}, which provides information on solar activities in the regions not visible from the Earth. Based on AIA observations, $^{3}$He-rich events are not only associated with coronal jets as emphasized in solar cycle 23 studies, but also with more spatially extended eruptions. The properties of the $^{3}$He-rich events do not appear to be strongly correlated with those of the source regions. As in the previous studies, the magnetic connection between the source region and the observer is not always reproduced adequately by the simple potential field source surface model combined with the Parker spiral. Instead, we find a broad longitudinal distribution of the source regions extending well beyond the west limb, with the longitude deviating significantly from that expected from the observed solar wind speed.' author: - 'Nariaki V. Nitta, Glenn M. Mason, Linghua Wang, Christina M. S. Cohen, and Mark E. Wiedenbeck' title: 'Solar Sources of $^{3}$He-rich Solar Energetic Particle Events in Solar Cycle 24' --- Introduction ============ Solar energetic particle (SEP) events are classified into two types, corresponding to different origins [@1999SSRv...90..413R; @2013SSRv..175...53R].
Gradual SEP events, which can be intense enough to pose space weather hazards, are attributed to shock waves driven by fast and wide coronal mass ejections (CMEs) as supported by their close correlation [e.g. @1984JGR....89.9683K]. Another class is often called “impulsive” and is characterized above all by anomalously enriched $^{3}$He and heavy ions. They have been known for a long time [e.g. @1970ApJ...162L.191H], but the origin of $^{3}$He-rich SEP events is still not well-understood, although theoretical models on the basis of stochastic acceleration [e.g. @2006ApJ...636..462L] have been developed. One of the reasons why $^{3}$He-rich events still lack a compelling explanation may be the difficulty of observing their sources in the corona. Unlike gradual SEP events, the association of $^{3}$He-rich events with CMEs is not high [@1985ApJ...290..742K but see below for recent results]. Until recently, it was generally believed that $^{3}$He-rich events could arise without detectable solar activities or be associated at most with minor flares or brightenings [@1987SoPh..107..385K; @1988ApJ...327..998R]. During solar cycle 23, new studies started to reveal the properties of solar activities that were possibly related to $^{3}$He-rich events, thanks to the uninterrupted full-disk images of the solar corona by the [*Solar and Heliospheric Observatory (SOHO)*]{}. Using images from the Extreme-ultraviolet Imaging Telescope [EIT; @1995SoPh..162..291D] and Large Angle Spectroscopic Coronagraph [LASCO; @1995SoPh..162..357B], [@2006ApJ...639..495W] reported on coronal jets (characterized by linear features), typically originating from coronal hole boundaries, around the times of 25 $^{3}$He-rich events. Some of these jets were seen to extend into the high corona and were observed as narrow CMEs.
Indeed, due largely to the high sensitivity of LASCO, the association of $^{3}$He-rich events with CMEs has become higher than previously known once these narrow CMEs are included [@2001ApJ...562..558K]. $^{3}$He-rich events are most commonly observed at $\lesssim$1 MeV nucleon$^{-1}$. These particles take several hours to travel to 1 AU, with a wide temporal spread depending on the effective path length that they traverse in interplanetary space. Therefore it is not straightforward to isolate the solar activities related to $^{3}$He enrichment. It has been known that $^{3}$He-rich events are often associated with type III bursts at $<$2 MHz [@1986ApJ...308..902R] and 1–100 keV electron events [@1985ApJ...292..716R]. Using these two observables, [@2006ApJ...650..438N] identified the solar sources of 69 discrete $^{3}$He-rich events, many of which were jets. However, they failed to find compelling solar activities for $\sim$20% of $^{3}$He-rich events even when they had good coverage of full-disk coronal images. There are three possibilities for $^{3}$He-rich events without solar activities detectable in EUV and X-ray images. First, the acceleration and injection of $^{3}$He ions may not produce detectable EUV and X-ray emission, as could result, e.g., from flare-like processes that occur high up and leave no traces in the low corona [@1991ApJ...366L..91C]. This scenario is similar to the one that was originally proposed for impulsive electron events whose power-law spectrum extended down to $\sim$2 keV [@1980ApJ...236L..97P]. Second, the associated solar activities (such as jets) may last for too short a time to be detected by the EIT, which typically had a $\sim$12 m cadence. Lastly, the source region may be located on the far side and the processes responsible for $^{3}$He enrichment may be limb-occulted. The primary objective of this paper is to explore these possibilities using the new capabilities that have become available in solar cycle 24.
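The few-hour travel times mentioned above follow from simple kinematics; the sketch below assumes scatter-free propagation along a nominal 1.2 AU path length, with all constants approximate.

```python
import math

M_NUC = 938.272          # nucleon rest-mass energy, MeV (proton value)
AU_KM = 1.496e8          # astronomical unit, km
C_KM_S = 2.998e5         # speed of light, km/s

def travel_time_hours(e_mev_per_nuc, path_au=1.2):
    """Scatter-free travel time for an ion of the given kinetic energy
    per nucleon along an assumed interplanetary path length (AU)."""
    gamma = 1.0 + e_mev_per_nuc / M_NUC
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return path_au * AU_KM / (beta * C_KM_S) / 3600.0

# A 1 MeV/nucleon ion needs ~3.6 h along 1.2 AU; at 0.1 MeV/nucleon
# the same path takes roughly half a day.
```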
The Atmospheric Imaging Assembly [AIA; @2012SoPh..275...17L] on the [*Solar Dynamics Observatory (SDO)*]{}, which was launched in February 2010, takes full-disk images every 12 seconds in a wide range of coronal to chromospheric temperatures. This is a significant improvement over the EIT, whose high-rate ($\sim$12 m) data were taken only in one wavelength. With AIA we can detect minor transient activities that are short-lived and in narrow temperature ranges. Furthermore, we now continuously observe the far side of the Sun as viewed from the Earth, using the EUV Imager [EUVI; @2004SPIE.5171..111W; @2008SSRv..136...67H] on the [*Solar and Terrestrial Relations Observatory (STEREO)*]{}, which consists of twin spacecraft that have separated from the Sun-Earth line at a rate of $\sim$22$\arcdeg$ per year since 2006. The EUVI has taken full-disk images with a typical 5 m (10 m) cadence in the 195 Å (304 Å) channel during the period that overlaps with the [*SDO*]{}. We can readily determine whether $^{3}$He-rich events without associated solar activities detected in near-Earth observations are attributable to activities behind the limb. In the next section, we show how the $^{3}$He-rich events are selected and give a brief overview of their properties. Our procedure for finding the solar source is described in §3, using one of the selected events as an example. In §4, we show the results of the analysis of the entire sample of events. We discuss them in §5 in terms of the previous studies. A summary of the work is given in §6.
![image](f1_arxiv.eps) [ccccccc]{} 1 & 2010 Oct 17 00:02–Oct 19 18:00 & 0.79$\pm$0.07 & Yes & PL & Yes & 1.26$\pm$0.23\ & 2010 Oct 19 18:00–Oct 20 18:00 & 2.16$\pm$0.39 & Yes & C & Yes & 1.93$\pm$0.43\ 2 & 2010 Nov 02 00:02–Nov 03 23:57 & 1.37$\pm$0.10 & Yes & C & Yes & 0.91$\pm$0.32\ 3 & 2010 Nov 14 00:02–Nov 17 12:00 & 0.25$\pm$0.03 & No & C & & 1.22$\pm$0.18\ & 2010 Nov 17 15:00–Nov 18 12:00 & 4.25$\pm$0.62 & Yes & C & & 1.34$\pm$0.35\ 4 & 2011 Jan 27 12:00–Jan 30 12:00 & 0.08$\pm$0.01 & No & PL & & 1.13$\pm$0.17\ 5 & 2011 Jul 07 18:00–Jul 10 12:00 & 1.68$\pm$0.09 & Yes & C & Yes & 1.11$\pm$0.12\ 6 & 2011 Jul 31 21:00–Aug 01 18:00 & 0.05$\pm$0.01 & No & PL & & 2.66$\pm$0.45\ 7 & 2011 Aug 26 00:01–Aug 28 12:00 & 0.64$\pm$0.05 & Yes & C & & 1.51$\pm$0.13\ 8 & 2011 Dec 14 12:00–Dec 15 23:57 & 0.18$\pm$0.02 & Yes & PL & Yes & 1.35$\pm$0.22\ 9 & 2011 Dec 24 18:00–Dec 25 03:00 & 0.13$\pm$0.01 & Yes & PL & Yes & 1.44$\pm$0.26\ 10 & 2012 Jan 03 00:01–Jan 04 06:00 & 0.08$\pm$0.01 & No & PL & & 0.99$\pm$0.07\ 11 & 2012 Jan 13 12:00–Jan 14 23:56 & 1.35$\pm$0.14 & Yes & PL & Yes & 2.39$\pm$0.47\ 12 & 2012 May 14 12:00–May 16 18:00 & 0.05$\pm$0.01 & Yes & PL & Yes & 0.40$\pm$0.04\ 13 & 2012 Jun 08 03:00–Jun 09 18:00 & 0.36$\pm$0.03 & Yes & PL & Yes & 2.24$\pm$0.35\ 14 & 2012 Jul 03 00:00–Jul 05 18:00 & 0.09$\pm$0.01 & No & PL & Yes & 1.10$\pm$0.10\ 15 & 2012 Nov 18 06:01–Nov 20 06:00 & 0.55$\pm$0.07 & Yes & PL & & 0.86$\pm$0.18\ & 2012 Nov 20 06:01–Nov 21 03:00 & 6.89$\pm$0.71 & Yes & C & & 1.06$\pm$0.19\ 16 & 2013 May 02 12:00–May 04 06:00 & 0.21$\pm$0.03 & No & PL & & 1.04$\pm$0.21\ 17 & 2013 Jul 17 00:01–Jul 18 12:00 & 0.39$\pm$0.03 & Yes & C & & 0.86$\pm$0.18\ 18 & 2013 Dec 24 00:01–Dec 25 12:00 & 1.30$\pm$0.36 & Yes & PL & & 2.33$\pm$0.86\ 19 & 2014 Jan 01 06:00–Jan 02 18:00 & 0.26$\pm$0.02 & Yes & C & Yes & 1.83$\pm$0.16\ 20 & 2014 Feb 06 00:01–Feb 07 12:00 & 0.49$\pm$0.05 & Yes & C & & 1.14$\pm$0.12\ 21 & 2014 Mar 29 00:01–Mar 30 23:56 & 0.17$\pm$0.02 & Yes & C & & 
0.36$\pm$0.07\ 22 & 2014 Apr 17 22:20–Apr 19 12:00 & 0.58$\pm$0.04 & No & PL & & 1.37$\pm$0.12\ 23 & 2014 Apr 24 00:01–Apr 25 23:58 & 1.28$\pm$0.07 & Yes & C & Yes & 0.71$\pm$0.09\ 24 & 2014 May 04 09:00–May 05 18:00 & 0.14$\pm$0.02 & Yes & PL & & 0.74$\pm$0.11\ 25 & 2014 May 16 06:00–May 17 18:00 & 14.88$\pm$1.36 & Yes & C & Yes & 2.13$\pm$0.27\ 26 & 2014 May 29 03:00–May 30 03:00 & 2.64$\pm$0.45 & Yes & C & & 2.95$\pm$0.40\ Event Selection =============== The $^{3}$He-rich SEP events that we study in this paper are based on data from the Ultra Low Energy Isotope Spectrometer [ULEIS; @1998SSRv...86..409M] on board the [*Advanced Composition Explorer (ACE)*]{}. The ULEIS is a time-of-flight mass spectrometer with a geometric factor of $\sim$1 cm$^{2}$. We surveyed ULEIS data from May 2010 through May 2014, and selected 26 $^{3}$He-rich events that showed either clear ion injections or clear $^{3}$He presence preceded by a $>$40 keV electron event that was detected by the Electron, Proton, and Alpha Monitor [@1998SSRv...86..541G] on [*ACE*]{}. We also chose relatively intense events in terms of $^{3}$He yields so that we can derive the $^{3}$He spectrum in comparison with other ions, which may be important for identifying the right acceleration mechanisms [@2002ApJ...574.1039M]. In Table 1, we list the selected $^{3}$He-rich events with basic information including the $^{3}$He/$^{4}$He and Fe/O ratios. In the second column, the $^{3}$He-rich period is shown. Several events lasted longer than two days, comparable to the multiday events studied by [@2008ApJS..176..497K]. These long-lasting $^{3}$He-rich periods may represent either continuous injections or smeared discrete injections. During the survey period we dropped some multiday $^{3}$He-rich events because they had neither clear ion injections nor an electron event. We subdivide the period of a long-duration event only when more than one injection is clearly identified.
The third column is the $^{3}$He/$^{4}$He ratio in the 0.5–2.0 MeV nucleon$^{-1}$ range, which shows a wide variation from 0.05 to 15. In the next column we show whether a more-than-factor-of-two increase in the $^{3}$He/$^{4}$He ratio is observed, measuring how strongly the $^{3}$He increase is correlated with $^{4}$He. Most events have this attribute. The $^{3}$He spectral shape in the range of 0.1–1.0 MeV nucleon$^{-1}$ is given in the fifth column. It is nearly evenly distributed between power-law (PL) and curved (C) spectra. Figure 1 shows examples of the two types. The sixth column shows whether the $^{3}$He-rich event was observed at higher energies ($>$4.5 MeV nucleon$^{-1}$) by the Solar Isotope Spectrometer [SIS; @1998SSRv...86..357S] on [*ACE*]{}. About one half of our events were also SIS events. The last column shows the Fe/O ratio, which ranges from 0.4 to 3, confirming that our $^{3}$He-rich events are also Fe-rich [@1975ApJ...201L..95H; @1986ApJ...303..849M]. Analysis – Finding Solar Sources of $^{3}$He-rich Events ======================================================== ![$^{3}$He-rich SEP event that started on 2014 May 16. (a) Time profiles of $^{4}$He, $^{3}$He, O and Fe ions in 0.23–0.32 MeV nucleon$^{-1}$. (b) Mass spectrogram for elements from He to Fe for ions with energies 0.4–10 MeV nucleon$^{-1}$. (c) Plot of 1/ion speed vs time of arrival for ions with masses of 10–70 AMU. The two lines in cyan show the time range for Figure 3. The two oblique lines indicate the arrival times assuming a path length of 1.2 AU.](f2_arxiv.eps) In this work, we follow almost the same technique as [@2006ApJ...650..438N], who connected $^{3}$He-rich events in solar cycle 23 with solar activities using type III bursts and electron events. We use Event 25 in Table 1 to illustrate how we identify the solar sources of $^{3}$He-rich events. This event has the highest $^{3}$He/$^{4}$He ratio in our sample (Table 1).
The $^{3}$He and Fe spectra are characteristically curved in the 0.1–1 MeV nucleon$^{-1}$ range (see Figure 1(b)). We first find when $^{3}$He ions started to increase significantly. Figure 2 is a ULEIS multi-panel plot, which consists of (a) hourly average intensities for $^{3}$He, $^{4}$He, O and Fe ions for the energy range 0.23–0.32 MeV nucleon$^{-1}$, (b) individual ion masses and arrival times for ions of energy 0.4–10 MeV nucleon$^{-1}$, including ions from He to Fe and indicating clear separation of $^{3}$He from $^{4}$He, and (c) individual ion reciprocal speed (1/$v$) and arrival time for ions with mass of 10–70 AMU. Four-day plots in this format are available at the ACE Science Center site[^1]. In this event, $^{3}$He ions started to increase around 10:00 UT on 2014 May 16 (Figure 2(b)). We also note velocity dispersion (Figure 2(c)), which could be used to determine the particle injection time and path length, assuming that (i) the first-arriving particles of all energies are injected at the same time and that (ii) they are scatter-free en route to 1 AU. The upper envelope of the apparent dispersion indicates the particle injection around 05 UT on May 16. The calculated path length is $\sim$1.8 AU, considerably longer than 1.2 AU, which is indicated by the two oblique lines. In this work, we do not utilize the information from ion velocity dispersion. First, the uncertainty in the injection time may be more than a few hours. Second, neither assumption (i) nor assumption (ii) may be generally valid. Furthermore, only four other $^{3}$He-rich periods listed in Table 1 show velocity dispersion as clear as in this event. We instead take a more conservative approach to find the solar activity that possibly accounts for the $^{3}$He-rich event.
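For reference, under assumptions (i) and (ii) above, the dispersion analysis reduces to a straight-line fit of onset time against 1/$v$; the sketch below uses synthetic onsets generated from a known path length, not measured ULEIS points, and simply recovers the assumed slope and intercept.

```python
import numpy as np

AU_KM = 1.496e8

# Synthetic onset times generated from t = t0 + L * (1/v); these stand in
# for the upper envelope of the 1/v-vs-arrival-time plot (Figure 2(c)).
L_TRUE_AU, T0_TRUE = 1.8, 0.0
inv_v = np.array([2e-5, 4e-5, 6e-5, 8e-5])       # reciprocal speed, s/km
t_onset = T0_TRUE + L_TRUE_AU * AU_KM * inv_v    # onset times, s

# A straight-line fit of onset time versus 1/v returns the path length
# (slope) and the common injection time (intercept).
slope_km, t0_fit = np.polyfit(inv_v, t_onset, 1)
path_au = slope_km / AU_KM                        # recovers ~1.8 AU
```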
That is, we use a type III burst and electron event in a long enough time window before the $^{3}$He onset, in such a way that the presence of an electron event gives preference to the associated type III burst if there is more than one [see @2006ApJ...650..438N]. Then we can determine the time of the solar activity related to the $^{3}$He-rich event to an accuracy of one minute. We set a window 1–7 hours prior to the $^{3}$He onset as indicated by the two lines in cyan. In this event, no major type III bursts are seen if we expand the window by five more hours (i.e., start of the window at 22:00 UT on May 15). Longer time windows are used for a majority of events without clear velocity dispersions. ![Six-hour period preceding the $^{3}$He-rich event shown in Figure 2. (a) [*GOES*]{} 1–8 Å and 0.5–4 Å X-ray light curves. (b) Radio dynamic spectrum from WAVES. (c) Electron flux at 1 AU as observed by the solid state telescope of 3DP.](f3.eps) In Figure 3 we show [*GOES*]{} soft X-ray light curves, radio dynamic spectrum and electron time profiles for the time window indicated in Figure 2. The radio dynamic spectrum below 14 MHz is obtained by the Radio and Plasma Wave Experiment [WAVES; @1995SSRv...71..231B], and the electron time profiles from the Three-dimensional Plasma and Energetic Particles instrument [3DP; @1995SSRv...71..125L], both on the [*Wind*]{} spacecraft. Although Figure 3(c) is limited to electrons above $\sim$30 keV as detected by the solid-state telescope (SST), the 3DP also includes the electron electrostatic analyzer (EESA), which observes 3 eV–30 keV electrons. The combined SST and EESA data sometimes produce a clear velocity dispersion over a wide energy range [@2006GeoRL..33.3106W; @2012ApJ...759...69W], which may be used to argue for the scatter-free nature of electron events.
Although the EESA data did not rise above background in the interval of Figure 3 and therefore are not shown, their availability is one of the reasons we use 3DP data in this work. In Figure 3(b) we find type III bursts between 04:00 and 04:16 UT. Figure 3(c) shows an electron event with velocity dispersion. It is delayed with respect to the start time of the type III by $\sim$15 minutes at the highest-energy channel (108 keV) that observes it. Figure 3(a) shows a C1.3[^2] flare that peaks at 03:32 UT, followed by a smaller flare that is better seen in the 0.5–4 Å channel. At first, we do not discard the possibility that these flares may be related to the type III bursts, even though they are rather widely separated in time. We note that the NOAA event list shows the C1.3 flare coming from AR 12053, which is clearly wrong because the region was already 20$\arcdeg$ behind the west limb. Therefore we need to examine full-disk images to locate them. AIA images in the 94 Å channel, whose temperature response peaks around 7–8 MK, show that they come primarily from AR 12063 (N10 E28) and secondarily from AR 12057 (N17 W40). Despite the effort to locate the minor flares, however, we find a distinctly new feature after 04:00 UT, much closer in time to the type III bursts: a jet-like ejection in a quiescent region at S12 W44. This is clearly seen in AIA images in multiple channels for the next 15 minutes. We therefore consider this jet to be associated with the type III bursts and the $^{3}$He-rich event. ![(a) AIA 193 Å image around the time of the type III burst associated with the $^{3}$He-rich event in Figure 2. The area encircled in cyan contains a jet. The box shows the field of view of the image shown in (b). (b) AIA 193 Å difference image expanded in a limited field of view. The grid of heliographic longitude and latitude is overlaid in yellow at a spacing of 10$\arcdeg$. The leg of the jet is located around W44.
(c) Difference image in the 195 Å channel around the same time as taken by the EUVI on [*STEREO-A*]{}. The arrow in yellow points to the same jet. (d) LASCO C2 difference image taken shortly after the jet, confirming a narrow CME.](f4_arxiv.eps) Figure 4(a) shows an AIA 193 Å full-disk image on which the location of the jet is encircled. An enlarged view of the jet is shown as a difference image in Figure 4(b). Its field of view is marked in Figure 4(a). The jet is extended both in time and space, and it would probably have been detected by EIT. As in previous studies [@2006ApJ...639..495W; @2006ApJ...650..438N; @2008ApJ...675L.125N], this jet was later identified as a relatively narrow CME (Figure 4(d)). The same jet also appears on the east limb in a [*STEREO-A*]{} EUVI difference image (Figure 4(c)). The footpoint of the jet should be at a longitude of E114 as viewed from [*STEREO-A*]{}, which was then located at 158$\arcdeg$ west of the Sun-Earth line. In other events, the coronal signatures can be much weaker in the 193 Å channel than in this event. Therefore we examine images in different channels and various forms, that is, intensity, running difference and ratio, and base (pre-event) difference and ratio, applying wide ranges of normalization factors. ![Difference images in the 304 Å channel of (a) EUVI-A and (b) EUVI-B, showing the jet extended to higher altitudes as indicated by the arrows.](f5_arxiv.eps) The lower part of this jet was more clearly observed later in 304 Å images (Figure 5) that primarily represent chromospheric temperatures. It is interesting to note that the jet is seen even from [*STEREO-B*]{}, for which the region’s longitude was E151 (see Figure 6). The corresponding occultation height is 1.08 R$_{\sun}$ from the photosphere. This example demonstrates that [*STEREO*]{} data are not only useful to confirm the locations of the solar activities associated with $^{3}$He-rich events, but also to determine their vertical extents.
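The viewing geometry quoted above can be checked with two simple helpers. Both assume an observer at infinity, so the occultation height comes out slightly below the quoted 1.08 R$_{\sun}$, and the STEREO-B longitude used in the example is inferred from the quoted E151 rather than taken from an ephemeris.

```python
import math

def view_longitude(src_lon_deg, sc_lon_deg):
    """Source longitude relative to a spacecraft's central meridian.
    Inputs are Stonyhurst longitudes (deg, positive = west of the
    Sun-Earth line); output is wrapped to (-180, 180], negative = east."""
    d = (src_lon_deg - sc_lon_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def occultation_height(lon_deg):
    """Minimum height (solar radii above the photosphere) at which a
    radial feature at |longitude| > 90 deg from the observer's central
    meridian appears above the limb, for an observer at infinity."""
    return 1.0 / math.sin(math.radians(lon_deg)) - 1.0

# The jet at W44 seen from STEREO-A at ~158 deg W appears at ~E114;
# the quoted E151 implies STEREO-B sat near 165 deg E at that time.
lon_sta = view_longitude(44.0, 158.0)     # -114.0, i.e. E114
h_min = occultation_height(151.0)         # ~1.06 R_sun in this geometry
```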
![[*STEREO*]{} angular separation (indicated in red) during the period our $^{3}$He-rich events occurred (2010 October 17–2014 May 29). The blue dots and green arrow show, respectively, the positions of [*STEREO*]{} and the source region for Event 25 (2014 May 16). ](f6.eps) To further illustrate the unique information coming from [*STEREO*]{} observations, Figure 6 shows the locations of [*STEREO-A (STA)*]{} and [*STEREO-B (STB)*]{} during the survey period for the $^{3}$He-rich events studied in this paper. EUVI on [*STEREO-A*]{} unambiguously observed as disk events those that were behind the west limb as seen from Earth. Early in the period, events near central meridian in Earth view were observed as limb events by both [*STEREO*]{} spacecraft. Toward the end of the period, [*STEREO-B*]{} observed western-hemisphere events as occulted by the east limb as was the case for Figure 5(b). ![image](f7.eps) Now we study the source region of the $^{3}$He-rich event in terms of the global magnetic field that becomes part of the solar wind sampled at 1 AU. In particular, we are interested in the relation of the source region with the open field lines that are connected to [*ACE*]{}, because the observed particles propagate along them. In Figure 7 we show the relation of the source region with the magnetic footprint of the observer at [*ACE*]{}, marked L1, in terms of computed open field lines. The grayscale image is a synoptic magnetic map, not in the usual Carrington coordinates, but in the Stonyhurst (Earth-view) coordinate system, in which the zero of the X-axis corresponds to zero heliographic longitude. The source region is plotted in its photospheric location, i.e., S12 W44, whereas the L1 symbol is placed at the projected location on the source surface (at 2.5 R$_{\sun}$ from the Sun center) of the footpoint of the field line that crosses L1.
In order to obtain the latter, we assume the Parker spiral that corresponds to the observed solar wind speed averaged over 5 hours around the type III burst. The magnetic field is extrapolated from the photosphere, using the potential field source surface (PFSS) model. Specifically, we base this study on the PFSS package in SolarSoft as implemented by [@2003SoPh..212..165S], which includes updates of synoptic magnetic maps every 6 hours. They have equal-area pixels with a resolution of 1 deg$^{2}$ at the equator. These maps are used as the lower boundary conditions for the PFSS extrapolation. They are constructed from (1) the longitudinal magnetograms taken by the Helioseismic and Magnetic Imager [HMI; @2012SoPh..275..207S; @2012SoPh..275..229S] for the area within 60$\arcdeg$ of disk center, and (2) a flux transport model [@2001ApJ...547..475S] for the remainder of the solar surface. Here we overplot coronal holes or contiguous open field regions as filled areas. They are color-coded depending on the polarity of the photospheric footpoint (green for positive and pink for negative). We use the same color code to show open field lines that reach the source surface at the ecliptic and $\pm$7$\arcdeg$ latitudes. We include the latter field lines to show the latitudinal uncertainties of the PFSS results [cf. @2008ApJ...673L.207N]. In Event 25, the photospheric location of the jet is close to the coronal hole of positive polarity, and to open field lines that are connected to L1. This is discussed more quantitatively in the next section. The positive polarity of these open field lines is consistent with the observed polarity of the Interplanetary Magnetic Field (IMF) during the whole period of the $^{3}$He-rich event, and also with that indicated by the anisotropy of electrons observed in 3DP pitch-angle data.
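The longitude of the magnetic footprint at the source surface follows from the Parker spiral; the sketch below uses an approximate sidereal rotation rate, ignores latitudinal dependence, and treats the field inside the source surface as radial, so the speeds and the resulting longitudes are illustrative.

```python
import math

AU_KM = 1.496e8
R_SUN_KM = 6.96e5
OMEGA = 2.7e-6            # approximate solar sidereal rotation rate, rad/s

def footpoint_longitude(v_sw_km_s, r_ss_rsun=2.5):
    """Longitude (deg west of the observer) where the Parker-spiral field
    line through L1 intersects the source surface, for a given solar
    wind speed, assuming constant radial flow outside the source surface."""
    dr = AU_KM - r_ss_rsun * R_SUN_KM
    return math.degrees(OMEGA * dr / v_sw_km_s)

# Slower wind winds the spiral further west: ~70 deg for ~330 km/s
# versus ~50 deg for ~450 km/s (illustrative speeds).
```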
In this work, we examine solar wind data from [*Wind*]{} and [*ACE*]{}, not only to compare the polarities of the IMF and the source regions of the $^{3}$He-rich events, but also to identify interplanetary CMEs (ICMEs), corotating interaction regions and other non-steady structures that may modify the Sun-Earth magnetic field connections.

Results
=======

[clccccccrrccrrcrc]{}
1 & 2010 Oct 17 & & Oct 17 08:55 & 09:10 & 08:52 & & C1.7 & S18 W33 & C 2 & E & & 54 & 304 & & 383 & $+$\
& & & Oct 19 06:48 & 07:05 & 06:45 & & C1.3 & S18 W57 & C 4 & E & & 77 & 385 & & 406 & $-$\
2 & 2010 Nov 02 & & Nov 02 07:28 & 07:55 & 07:26 & & B1.9 & N20 W90 & $-$ 2 & L & & 67 & 253 & & 337 & $+$\
3 & 2010 Nov 14 & & Nov 13 23:52 & N & 23:50 & & C1.1 & S23 W26 & $-$ 1 & E & & 63 & 442 & & 465 & $-$\
& & & Nov 17 08:09 & 08:20 & 08:07 & & B3.4 & S22 W72 & C 3 & J & & 41 & 639 & & 498 & $-$\
4 & 2011 Jan 27 & & Jan 27 08:45 & 08:40 & 08:40 & & B6.6 & N14 W80 & $-$ 1 & L & & 52 & 316 & & 304 & $-$\
5 & 2011 Jul 07 & & Jul 07 14:27 & 14:50 & 14:25 & & B7.6 & N15 W91 & $-$ 1 & J & & 33 & 715 & & 348 & $-$\
6 & 2011 Jul 31 & & Jul 31 19:01 & 19:15 & 19:01 & & C1.7 & N18 W51 & $-$ 1 & L & & 47 & 280 & & 659 & $-$\
7 & 2011 Aug 26 & & Aug 26 00:42 & 01:05 & 00:41 & & B4.4 & N18 W62 & $-$ 1 & J & & 26 & 305 & & 422 & $-$\
8 & 2011 Dec 14 & & Dec 14 03:11 & 03:45 & 03:01 & & C3.5 & S18 W86 & C 3 & E & & 40 & 576 & & 441 & $-$\
9 & 2011 Dec 24 & & Dec 24 11:58 & 12:15 & 11:20 & & C4.9 & N16 W92 & $-$ 1 & E & & 42 & 536 & & 352 & $-$\
10 & 2012 Jan 03 & & Jan 03 01:51 & 02:10 & 01:40 & & B5.0 & S20 W63 & $+$ 1 & J & & 29 & 670 & & 411 & $+$\
11 & 2012 Jan 13 & & Jan 13 09:08 & 09:20 & 09:10 & & N & N16 W111 & $-$ 4 & J & & 62 & 350 & & 501 & $+$\
12 & 2012 May 14 & & May 14 09:35 & 09:55 & 09:35 & & C2.5 & N08 W45 & $-$ 1 & E & & 48 & 551 & & 438 & $-$\
13 & 2012 Jun 08 & & Jun 08 07:15 & 07:45 & 07:11 & & C4.8 & N13 W40 & $-$ 1 & L & & 34 & 308 & & 593 & $-$\
14 & 2012 Jul 03 & & Jul 02 18:04 & 18:45 & 18:03 & & C4.5 & N16 W09 & $-$ 1 & J & & N & N & & 638 & $-$\
15 & 2012 Nov 18 & & Nov 18 04:00 & 04:30 & 03:55 & & C5.7 & N08 W07 & C 4 & E & & 29 & 49 & & 407 & $+$\
& & & Nov 20 01:30 & N & 01:31 & & N & S17 W60 & C 2 & J & & N & N & & 390 & $+$\
16 & 2013 May 02 & & May 02 04:55 & 05:25 & 04:58 & & M1.1 & N10 W25 & $-$ 1 & E & & 99 & 671 & & 460 & $-$\
17 & 2013 Jul 17 & & Jul 16 20:21 & 20:35 & 20:18 & & B1.8 & N21 W70 & C 4 & J & & 21 & 270 & & 349 & $+$\
18 & 2013 Dec 24 & & Dec 24 12:45 & 20:35 & 12:42 & & C1.2 & S17 W92 & $+$ 1 & J & & 21 & 270 & & 285 & $+$\
19 & 2014 Jan 01 & & Jan 01 07:24 & 07:45 & 07:21 & & C3.2 & S13 W47 & $+$ 1 & J & & 185 & 465 & & 375 & $+$\
20 & 2014 Feb 06 & & Feb 05 22:55 & 23:10 & 22:55 & & N & S13 W84 & C 3 & J & & N & N & & 385 & $+$\
21 & 2014 Mar 29 & & Mar 28 21:00 & 21:15 & 20:57 & & C1.0 & S17 W66 & C 3 & J & & 10 & 314 & & 427 & $-$\
22 & 2014 Apr 17 & & Apr 17 21:58 & 22:20 & 21:50 & & C3.2 & S15 W24 & $+$ 1 & E & & 119 & 824 & & 388 & $+$\
23 & 2014 Apr 24 & & Apr 24 00:40 & 01:35 & 00:50 & & N & S18 W102 & C 2 & E & & 90 & 601 & & 414 & $-$\
24 & 2014 May 03 & & May 03 20:16 & 21:00 & 20:18 & & C1.8 & S11 W36 & $+$ 1 & E & & 218 & 494 & & 353 & $-$\
25 & 2014 May 16 & & May 16 03:57 & 04:25 & 20:18 & & N & S12 W44 & $+$ 1 & J & & 43 & 592 & & 327 & $+$\
26 & 2014 May 29 & & May 29 02:37 & 03:00 & 02:50 & & N & S14 W105 & $+$ 1 & E & & 35 & 418 & & 330 & $+$\

We perform the analysis described in the previous section for all the selected $^{3}$He-rich events. Table 2 gives the results. First of all, we always found at least one type III burst in a time window up to 10 hours before the observed $^{3}$He onset, confirming the excellent correlation between the two phenomena [see @2006ApJ...650..438N]. Moreover, except for a few cases, there is no ambiguity finding the type III burst that appears to be associated with the $^{3}$He-rich event. 
Even when multiple type III bursts are present within the time window preceding a $^{3}$He-rich event (as identified by the subscript e in Table 1), they often share the same source region [e.g. Event 5, see @2014ApJ...786...71B]. It is possible that they are all associated with the $^{3}$He-rich event. In such cases, Table 2 lists only a representative type III burst for each $^{3}$He-rich interval in Table 1. Almost all the type III bursts accompany an electron event, as listed in the fourth column, which may appear contradictory to [@2006ApJ...650..438N], who found that $^{3}$He-rich events are much less frequently associated with electron events than with type III bursts (62% vs. 95%). However, this largely results from one of our selection criteria. Note that not all the electron events exhibit the velocity dispersion that points to a release time close to the type III burst. Some of them start long (e.g. $>$30 minutes) after the type III burst or flare, have gradual time profiles, or show no velocity dispersion. Although these properties may suggest different origins, it is beyond the scope of this work to study electron events in detail.

![image](f8.eps)

The magnitude of the associated flare is given in the sixth column. Most events are associated with flares that are [*GOES*]{} C-class or below. Some of the type III bursts have no associated [*GOES*]{} flare, labeled N (no flare). Events in this category tend to have high $^{3}$He/$^{4}$He ratios as shown in Table 1 (e.g. Event 25 featured in § 3), which is consistent with earlier findings [see @1988ApJ...327..998R]. They also include some events with a curved $^{3}$He spectrum. The ninth column shows the characteristic motion of the solar activity possibly responsible for $^{3}$He-rich events. It is one of three types (see Figure 8). If it contains linear features (see Figure 8(a)), we label it a jet (J). 
If it shows a larger angular expanse, probably involving closed loop structures like CMEs that are not narrow (Figure 8(b)), it is labeled an eruption (E). The distinction between these two can be subjective and dependent on projection and observed temperatures. Lastly, the motion may reach large distances (Figure 8(c)), which we classify as an EIT wave or large-scale coronal propagating front [LCPF, see @2013ApJ...776...58N]. We label such an event L. We note that most events belong to either E or J, with only a few showing large-scale motions. There is no clear correlation of these motions with the basic properties of $^{3}$He-rich events in Table 1, except that events with high $^{3}$He/$^{4}$He ratios and curved $^{3}$He spectra are more often associated with jets. Now we look at the association of $^{3}$He-rich events with CMEs, whose angular width and velocity (in the plane of the sky) are given in the tenth and eleventh columns. They are mostly taken from the CDAW catalog[^3], but we also made independent identifications and measurements for unclear cases. In addition, we examined data from the [*STEREO*]{} COR-1 and COR-2 coronagraphs [@2008SSRv..136...67H] when no CMEs were found in LASCO data (e.g. Events 15 and 21). At first, we expect CMEs to be influenced by the three types of coronal motions. For example, CMEs associated with an eruption may be wider than those associated with a jet. The same may be true for CMEs associated with an LCPF as compared with those associated with an eruption or a jet. Such expectations are not supported by the observed CME widths. The CME velocity is also uncorrelated with the coronal motions. In short, we still do not understand the relation of $^{3}$He-rich events with CMEs, which often, but not always, accompany them and whose properties vary.

![Images for the $^{3}$He-rich event on 2012 January 13. (a) AIA 304 Å images. The encircled area appears to contain an elongated structure. 
The box defines the field of view of the cutout image in (b). (b) AIA 193 Å difference image barely revealing the jet. (c) 195 Å image from EUVI on [*STEREO-A*]{} (107$\arcdeg$ west of the Sun-Earth line), showing the brightening associated with the jet, and confirming its backside origin. (d) LASCO C2 image showing the associated CME toward the northwest.](f9_arxiv.eps)

![The source regions are plotted in terms of the heliographic longitude and the associated solar wind speed. The results from this work are compared with the earlier ones. The curve shows the relation between the solar wind speed and the longitude of the nominal Parker spiral.](f10.eps)

Next, we discuss where the $^{3}$He-rich events come from. The location of the source region is shown in the seventh column of Table 2. The $^{3}$He-rich events occurred predominantly in the northern hemisphere up to 2013, but all of the more recent events are in the southern hemisphere. This may simply reflect a larger number of active regions that emerged in the southern hemisphere during the period in question. An important result comes from the direct observation of regions behind the limb made possible by [*STEREO*]{}. We find three cases where the source region is more than 10$\arcdeg$ behind the limb. An example is shown in Figure 9 (Event 11). This event is actually preceded by a less intense $^{3}$He-rich period that lasts for $\sim$15 hours, but we study only the later event, which is more intense. About five hours before the $^{3}$He onset, we find a strong type III burst, which is observed at the highest frequency of WAVES even though it is limb-occulted, as shown below. Around the time of the type III burst, we see a diffuse linear feature sticking out of the limb (Figure 9(b)). Its associated brightening is readily located in an EUVI-A image at N16 W04 (Figure 9(c)). 
Since [*STEREO-A*]{} was 107$\arcdeg$ west of the Sun-Earth line, we determine the source location to be N16 W111 as viewed from the Earth. With reference to the type III burst, the delay of the electron event is only $\sim$12 minutes. However, there is no velocity dispersion, and the time profiles are gradual. The existence of such events naturally adds to the broad longitudinal distribution of $^{3}$He-rich events, which was already found by [@2006ApJ...639..495W] and [@2006ApJ...650..438N]. In Figure 10, we show a scatter plot of the longitude of the source regions and the solar wind speed (the five hour average around the type III burst as shown in the twelfth column), in comparison with the earlier works. We confirm that the longitudinal distribution is broad, often far from the longitude of the nominal Parker spiral, which is farther from the west limb for faster solar wind as indicated by the curve in Figure 10. In this work, examples of the source longitude close to and behind the west limb are established for the first time by direct observations without extrapolating the longitudes found earlier. Now we compare the polarities of the source regions (the eighth column) with those in-situ at L1 around the times of $^{3}$He-rich events (the thirteenth column). The latter is determined with respect to the Parker spiral for the observed solar wind speeds. We assign either positive (away) or negative (toward) polarity as long as the azimuth angle of the magnetic field measured at L1 is more than 15$\arcdeg$ from the normal to the Parker spiral. In reality we find that the observed field is often far from the Parker spiral [cf. @1993JGR....98.5559L]. The polarity of the source region is less straightforward, since the region may not have a dominant polarity. We conduct PFSS extrapolations to find if the region contains open field lines with the same polarity. 
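The in-situ polarity assignment described above can be made concrete with a small helper (hypothetical, not from the paper; angles are azimuths in the ecliptic plane, and the 15$\arcdeg$ ambiguity margin is the one quoted in the text):

```python
def imf_polarity(b_azimuth_deg, spiral_azimuth_deg, margin_deg=15.0):
    """Classify the IMF polarity relative to the nominal Parker spiral.
    Returns '+' (away from the Sun), '-' (toward the Sun), or None when
    the field azimuth lies within `margin_deg` of the normal to the
    spiral, i.e. the polarity is ambiguous."""
    # signed angular difference folded into (-180, 180]
    d = (b_azimuth_deg - spiral_azimuth_deg + 180.0) % 360.0 - 180.0
    if abs(d) < 90.0 - margin_deg:
        return '+'
    if abs(d) > 90.0 + margin_deg:
        return '-'
    return None
```

A field aligned with the outward spiral direction is graded '+', an anti-aligned field '-', and anything within 15$\arcdeg$ of the normal is left unassigned, mirroring the criterion in the text.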
In several regions we find no open field lines within a 20$\arcdeg$ heliographic distance, in which case we put C (closed) in the eighth column. When open field lines are identified in or near the source region, the polarities generally match at the Sun and L1. Following [@2006ApJ...650..438N], we also grade the performance of field line tracing on the basis of the PFSS model and Parker spiral, respectively, within and beyond the source surface. We trace field lines from the source surface to the photosphere. Given the uncertainty in the IMF, we include field lines that are within $\pm$7.5$\arcdeg$ and $\pm$2.5$\arcdeg$, respectively, from the longitude and latitude of the footpoint of the Parker spiral that intersects L1. We trace about 2000 field lines downward that are uniformly distributed in the above longitudinal and latitudinal ranges on the source surface. Our grading is 1 (best) to 4 (worst) depending on the minimum distance ($d_{min}$) of the footpoints of the traced field lines to the source region: 1 if $d_{min} \lesssim 10\arcdeg$, 2 if $10\arcdeg < d_{min} \lesssim 20\arcdeg$, 3 if $20\arcdeg < d_{min} \lesssim 30\arcdeg$ and 4 if $30\arcdeg < d_{min}$. The results shown in the eighth column are consistent with those by [@2006ApJ...650..438N]. This simply represents the status of how we model the Sun-Earth magnetic field connection. The results may not improve drastically with state-of-the-art numerical models rather than the simple PFSS $+$ Parker spiral model [@2011SpWea...910003M], as long as the lower boundary conditions are set by the photospheric magnetic maps, only about 1/3 of which reflect direct observations.

Discussion
==========

In this paper we use [*SDO*]{}/AIA data to identify the solar sources of $^{3}$He-rich SEP events, which have been known to be weak in terms of coronal signatures. The significantly improved quality of AIA images does reveal a much larger number of weak transients than did previous instruments (e.g. EIT). 
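The grading scheme above reduces to a threshold lookup; a minimal sketch (the function name is ours, and we treat every $d_{min}$ beyond 30$\arcdeg$ as grade 4, closing the apparent gap in the listed thresholds):

```python
def connection_grade(d_min_deg):
    """Grade the PFSS + Parker-spiral field-line tracing, 1 (best) to
    4 (worst), from the minimum heliographic distance (deg) between
    traced-field-line footpoints and the source region."""
    for grade, limit in enumerate((10.0, 20.0, 30.0), start=1):
        if d_min_deg <= limit:
            return grade
    return 4
```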
Although it is possible that EIT missed many weak and short-lived transients, AIA data present another challenge of identifying the activity responsible for a given $^{3}$He-rich SEP event from among the numerous other activities that are occurring here and there all the time. Fortunately, we know that type III bursts are highly associated with $^{3}$He-rich SEP events [@1986ApJ...308..902R; @2006ApJ...650..438N]. They let us narrow down the time range in which we need to examine images, and it turns out to be relatively easy to find the solar activity associated with the type III burst. We also use the association of electron events as a selection criterion because of their known correlation with $^{3}$He-rich events [@1985ApJ...292..716R]. We need to ask how valid our approach is in assuming that type III bursts (and electron events to some extent) are a progenitor of $^{3}$He-rich events. Type III bursts often accompany large gradual SEP events [@2002JGRA..107.1315C]. Many of them are better seen at low frequencies, e.g. below 1 MHz, and delayed with respect to the associated gradual flare. But there are others that start at the beginning of the associated impulsive flare and are seen at the highest frequency of WAVES. It is true that the appearances of type III bursts and electron events alone do not distinguish between impulsive ($^{3}$He-rich) and gradual SEP events. However, the regions that produce large CMEs responsible for gradual events tend to be different from the regions that produce smaller activities responsible for $^{3}$He-rich events. It is usually trivial to separate them in EUV images. A lingering puzzle in the sources of $^{3}$He-rich events is the height of the acceleration process in the corona. There is strong evidence that the ionization state of the ions is caused by passage through coronal material, and this requires a low coronal source [e.g. @2006SSRv..123..217K; @2007ApJ...671..947K; @2006ApJ...645.1516D]. 
However, simple models have shown that the low energy ions in these events are injected about 1 hour later than the electrons, suggesting a high coronal source. We have examined the five events in Table 1 (Nos. 6, 9, 20, 25 and 26) that have clear velocity dispersion such as shown in Figure 2, and found that fits to the onset profiles yield ion injection times about 2 hours later than the electron onset times shown in Table 2, roughly consistent with earlier work. Such simple fits assume that there is no scattering during transport from the Sun, and this may not be the case for these ions. [@2005ApJ...626.1131S] have discussed how interplanetary scattering can be present and yet still produce a clear velocity dispersion pattern at 1 AU where the path length traveled is not the IMF length but rather the IMF length divided by the average pitch cosine for the particles. Their model investigation showed how this can produce inferred injection times significantly later than the actual injection time. In this study we find $^{3}$He-rich events produced by solar eruptions that are not necessarily jets. However, none of these eruptions are very energetic CMEs like those that often accompany large gradual SEP events. Furthermore, it is possible that the jet, even though it is present, can be overlooked when the primary activity is a wider eruption. Models have been developed to explain particle escape, which accommodate both jets and flux ropes [@2013ApJ...771...82M]. One of the surprises may be the involvement of EUV waves or LCPFs (Figure 8(c)), which used to be connected to large CMEs. For larger, gradual SEP events attempts have been made to reproduce the SEP onset behavior in terms of the injection as the LCPF intersects the footpoint of the field line that crosses the observer [e.g. @1999ApJ...519..864K; @2011ApJ...735....7R; @2012ApJ...752...44R]. 
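The scatter-free onset fits mentioned earlier in this section amount to a straight-line fit of onset time against inverse particle speed, $t_{onset} = t_{inj} + L/v$. A hedged sketch (function name and units are ours, not from the paper):

```python
def fit_injection(onset_times_s, speeds_km_s):
    """Least-squares fit of t_onset = t_inj + L / v, assuming
    scatter-free transport.  Returns (t_inj [s], L [AU]).  With
    scattering present, L is only an effective path length: the IMF
    length divided by the mean pitch-angle cosine."""
    xs = [1.0 / v for v in speeds_km_s]          # inverse speed [s/km]
    n = len(xs)
    mx = sum(xs) / n
    mt = sum(onset_times_s) / n
    slope = sum((x - mx) * (t - mt) for x, t in zip(xs, onset_times_s)) \
        / sum((x - mx) ** 2 for x in xs)         # slope = L in km
    t_inj = mt - slope * mx
    return t_inj, slope / 1.496e8                # km -> AU
```

The intercept is the inferred injection time; as the text notes, interplanetary scattering can inflate the effective path length and so bias the inferred injection to later times.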
Here we suggest the possibility that LCPFs may play a role in $^{3}$He-rich events observed at widely separated longitudes [@2013ApJ...762...54W], because several examples in Wiedenbeck et al. also accompany LCPFs, including the solar minimum event (on 2008 November 4) discussed by [@2009ApJ...700L..56M] and our Event 2. LCPFs may also lead to injection of particles in an open field region away from the flare site [@1999ApJ...519..864K], which may be closed. Again, we point out that even those LCPFs could start off as jets. The event shown in Figure 8(c) is such an example. Figure 10 shows the longitude and solar wind speed distributions for the events in Table 1, along with previous recent studies by [@2006ApJ...639..495W] and [@2006ApJ...650..438N]. The distribution is very wide, covering more than the entire western hemisphere. This contrasts with the earlier view [@1999SSRv...90..413R] that these events had western-hemisphere source locations peaked near $\sim$W60 that varied with solar wind speed, with some additional broadening from magnetic field line random walk. The more sensitive observations of recent years show that this is not the case: there is little evidence in Figure 10 of the expected correlation between source longitude and solar wind speed. For the events in Table 2, the correlation coefficient between source longitude and solar wind speed is -0.36. The p-value of 0.05 indicates that this is a statistically significant correlation for 29 events (including a second injection in three of the 26 events), but it is clear that solar wind speed does not dominate the variations in source longitude. Recent multi-spacecraft studies have shown that some $^{3}$He-rich events are observed over very wide longitude ranges [@2013ApJ...762...54W]. 
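As a check on the quoted statistic, Pearson's $r$ and its $t$-statistic can be computed as below (a hypothetical helper, not from the paper); for $r = -0.36$ and $n = 29$, $t \approx -2.0$, which is consistent with the quoted p-value of about 0.05:

```python
import math

def correlation_test(x, y):
    """Pearson correlation r and its t-statistic,
    t = r * sqrt((n - 2) / (1 - r^2)); |t| ~ 2 corresponds to
    p ~ 0.05 for sample sizes around 29."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return r, t
```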
[@2012ApJ...751L..33G] have shown that perpendicular transport combined with field line meandering may produce significant transport of particles over wide longitudinal ranges, but it is not clear if such models can reproduce the observed onset timing. These problems, combined with the limited success of the PFSS model in predicting other properties such as field polarity or even connection to known $^{3}$He-rich event sources [e.g. @2009ApJ...700L..56M], suggest that the difficulties encountered here are due at least partly to an insufficiently realistic model of the coronal magnetic field. Recent models (S-web) with more sophisticated approaches [e.g. @2011ApJ...731..112A; @2011ApJ...731..110L] regarding solar wind sources have shown that relatively small regions on the Sun can be magnetically connected to large regions in the inner heliosphere, as would be required to explain the energetic particle observations discussed here. These new models require large computational resources, so it is not a simple matter to test them against a set of events such as that studied here. Comparison of Tables 1 and 2 shows that $^{3}$He-rich events with high $^{3}$He/$^{4}$He ratios and curved $^{3}$He spectra tend to be associated with weak or no GOES flares. Moreover, they are more frequently associated with jets than those that have power-law spectra. However, the correlation is not strong, and we need a larger sample to verify it.

Conclusions
===========

Using high-cadence AIA images in combination with EUVI images for areas not viewed from Earth, we have identified the source regions of 26 $^{3}$He-rich SEP events in solar cycle 24, and classified the activity at their source. Combined with prior work, the basic properties of these sources can be summarized as follows:

1. We confirm the previously identified association of these events with type III radio bursts and electron events.

2. 
Solar activity at the source locations is generally weak in terms of soft X-ray flux, and can be associated with a variety of motions such as small eruptions, jets, and large-scale coronal propagating fronts.

3. The broad longitudinal distribution of sources is consistent neither with a simple Parker spiral, nor with Parker spiral plus PFSS modeling of the coronal magnetic field. The actual magnetic connection between the sources and the IMF may be much broader.

4. Beyond these clear associations, the correlation between the properties of the solar sources and those of the energetic ions, such as the $^{3}$He/$^{4}$He ratio or spectral form, is not strong.

Given the extremely high sensitivity of the AIA and ACE measurements, it is likely that inner solar system measurements from upcoming missions will be needed to settle many of the observational ambiguities not resolved by the currently available data.

We thank the referee for letting us find and correct some problems in the original manuscript. This work has been supported by the NSF grant AGS-1259549, NASA AIA contract NNG04EA00C and the NASA STEREO mission under NRL Contract No. N00173-02-C-2035. GMM acknowledges NASA grant NNX10AT75G, 44A-1089749, and NSF grant 1156138/112111. CMSC and MEW acknowledge support at Caltech and JPL from subcontract SA2715-26309 from UC Berkeley under NASA contract NAS5-03131T, and by NASA grants NNX11A075G and NNX13AH66G.

, S. K., [Miki[ć]{}]{}, Z., [Titov]{}, V. S., [Lionello]{}, R., & [Linker]{}, J. A. 2011, , 731, 112 , J.-L., [Kaiser]{}, M. L., [Kellogg]{}, P. J., [et al.]{} 1995, , 71, 231 , J. L., [Goetz]{}, K., [Kaiser]{}, M. L., [et al.]{} 2008, , 136, 487 , G. E., [Howard]{}, R. A., [Koomen]{}, M. J., [et al.]{} 1995, , 162, 357 , R., [Innes]{}, D. E., [Mall]{}, U., [et al.]{} 2014, , 786, 71 , H. V., [Erickson]{}, W. C., & [Prestage]{}, N. P. 2002, JGRA, 107, 1315 , E., & [Kahler]{}, S. 1991, , 366, L91 , J.-P., [Artzner]{}, G. 
E., [Brunaud]{}, J., [et al.]{} 1995, , 162, 291 , W., [Kartavykh]{}, Y. Y., [Klecker]{}, B., & [Mason]{}, G. M. 2006, , 645, 1516 , J., & [Jokipii]{}, J. R. 2012, , 751, L33 , R. E., [Krimigis]{}, S. M., [Hawkins]{}, III, S. E., [et al.]{} 1998, , 86, 541 , R. A., [Moses]{}, J. D., [Vourlidas]{}, A., [et al.]{} 2008, , 136, 67 , K. C., & [Simpson]{}, J. A. 1970, , 162, L191 , G. J., [Mewaldt]{}, R. A., [Stone]{}, E. C., & [Vogt]{}, R. E. 1975, , 201, L95 , S., [Reames]{}, D. V., [Sheeley]{}, Jr., N. R., [et al.]{} 1985, , 290, 742 , S. W., [Lin]{}, R. P., [Reames]{}, D. V., [Stone]{}, R. G., & [Liggett]{}, M. 1987, , 107, 385 , S. W., [Reames]{}, D. V., & [Sheeley]{}, Jr., N. R. 2001, , 562, 558 , S. W., [Sheeley]{}, Jr., N. R., [Howard]{}, R. A., [et al.]{} 1984, , 89, 9683 , Y. Y., [Dr[ö]{}ge]{}, W., [Klecker]{}, B., [et al.]{} 2007, , 671, 947 , B., [Kunow]{}, H., [Cane]{}, H. V., [et al.]{} 2006, , 123, 217 , L., [Laivola]{}, J., [Mason]{}, G. M., [Didkovsky]{}, L., & [Judge]{}, D. L. 2008, , 176, 497 , S., [Larson]{}, D. E., [Lin]{}, R. P., & [Thompson]{}, B. J. 1999, , 519, 864 , J. R., [Title]{}, A. M., [Akin]{}, D. J., [et al.]{} 2012, , 275, 17 , R. P., [Anderson]{}, K. A., [Ashford]{}, S., [et al.]{} 1995, , 71, 125 , J. A., [Lionello]{}, R., [Miki[ć]{}]{}, Z., [Titov]{}, V. S., & [Antiochos]{}, S. K. 2011, , 731, 110 , S., [Petrosian]{}, V., & [Mason]{}, G. M. 2006, , 636, 462 , J. G., [Zhang]{}, T.-L., [Petrinec]{}, S. M., [et al.]{} 1993, , 98, 5559 , P., [Elliott]{}, B., & [Acebal]{}, A. 2011, Space Weather, 9, 10003 , G. M., [Nitta]{}, N. V., [Cohen]{}, C. M. S., & [Wiedenbeck]{}, M. E. 2009, , 700, L56 , G. M., [Reames]{}, D. V., [von Rosenvinge]{}, T. T., [Klecker]{}, B., & [Hovestadt]{}, D. 1986, , 303, 849 , G. M., [Gold]{}, R. E., [Krimigis]{}, S. M., [et al.]{} 1998, , 86, 409 , G. M., [Wiedenbeck]{}, M. E., [Miller]{}, J. A., [et al.]{} 2002, , 574, 1039 , S., [Antiochos]{}, S. K., & [DeVore]{}, C. R. 2013, , 771, 82 , N. V., & [DeRosa]{}, M. 
L. 2008, , 673, L207 , N. V., [Mason]{}, G. M., [Wiedenbeck]{}, M. E., [et al.]{} 2008, , 675, L125 , N. V., [Reames]{}, D. V., [De Rosa]{}, M. L., [et al.]{} 2006, , 650, 438 , N. V., [Schrijver]{}, C. J., [Title]{}, A. M., & [Liu]{}, W. 2013, , 776, 58 , D. W., [Lin]{}, R. P., & [Anderson]{}, K. A. 1980, , 236, L97 , D. V. 1999, , 90, 413 —. 2013, , 175, 53 , D. V., [Dennis]{}, B. R., [Stone]{}, R. G., & [Lin]{}, R. P. 1988, , 327, 998 , D. V., & [Stone]{}, R. G. 1986, , 308, 902 , D. V., [von Rosenvinge]{}, T. T., & [Lin]{}, R. P. 1985, , 292, 716 , A. P., [Odstr[č]{}il]{}, D., [Sheeley]{}, N. R., [et al.]{} 2011, , 735, 7 , A. P., [Sheeley]{}, N. R., [Tylka]{}, A., [et al.]{} 2012, , 752, 44 , A., [Evenson]{}, P., [Ruffolo]{}, D., & [Bieber]{}, J. W. 2005, , 626, 1131 , P. H., [Schou]{}, J., [Bush]{}, R. I., [et al.]{} 2012, , 275, 207 , J., [Scherrer]{}, P. H., [Bush]{}, R. I., [et al.]{} 2012, , 275, 229 , C. J. 2001, , 547, 475 , C. J., & [DeRosa]{}, M. L. 2003, , 212, 165 , E. C., [Cohen]{}, C. M. S., [Cook]{}, W. R., [et al.]{} 1998, , 86, 357 , L., [Lin]{}, R. P., [Krucker]{}, S., & [Gosling]{}, J. T. 2006, , 33, 3106 , L., [Lin]{}, R. P., [Krucker]{}, S., & [Mason]{}, G. M. 2012, , 759, 69 , Y.-M., [Pick]{}, M., & [Mason]{}, G. M. 2006, , 639, 495 , M. E., [Mason]{}, G. M., [Cohen]{}, C. M. S., [et al.]{} 2013, , 762, 54 , J.-P., [Lemen]{}, J. R., [Tarbell]{}, T. D., [et al.]{} 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 5171, Telescopes and Instrumentation for Solar Astrophysics, ed. S. [Fineschi]{} & M. A. [Gummin]{}, 111–122 [^1]: <http://www.srl.caltech.edu/ACE/ASC/DATA/level3/index.html> [^2]: The peak flux $I_{peak}$ of 1.3$\cdot 10^{-6}$ W m$^{-2}$ in the 1–8 Å channel of the GOES X-ray Spectrometer. A C-class flare has $I_{peak}$ of 10$^{-6}$$\leq$$I_{peak}$$<$10$^{-5}$ (W m$^{-2}$). The M-class (B-class) is an order of magnitude higher (lower). [^3]: <http://cdaw.gsfc.nasa.gov/CME_list/>
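Footnote 2 defines the C class by its peak-flux decade; the full GOES letter scale can be sketched as a decade lookup (a hypothetical helper, assuming the standard A/B/C/M/X boundaries):

```python
def goes_class(peak_flux):
    """GOES soft X-ray class from the 1-8 A peak flux [W m^-2]:
    each letter spans a decade, X >= 1e-4, M >= 1e-5, C >= 1e-6,
    B >= 1e-7, A below that."""
    for letter, base in (('X', 1e-4), ('M', 1e-5), ('C', 1e-6), ('B', 1e-7)):
        if peak_flux >= base:
            return f"{letter}{peak_flux / base:.1f}"
    return f"A{peak_flux / 1e-8:.1f}"
```

For instance, the 1.3$\cdot 10^{-6}$ W m$^{-2}$ peak flux of footnote 2 maps to C1.3, as used in Table 2.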
--- abstract: | De Alfaro and Henzinger’s Interface Automata (IA) and Nyman et al.’s recent combination IOMTS of IA and Larsen’s Modal Transition Systems (MTS) are established frameworks for specifying interfaces of system components. However, neither IA nor IOMTS considers conjunction, which is needed in practice when a component shall satisfy multiple interfaces, while Larsen’s MTS-conjunction is not closed and Beneš et al.’s conjunction on disjunctive MTS does not treat internal transitions. In addition, IOMTS-parallel composition exhibits a compositionality defect. This article defines conjunction (and also disjunction) on IA and disjunctive MTS and proves the operators to be ‘correct’, i.e., the greatest lower bounds (least upper bounds) wrt. IA- and resp. MTS-refinement. As its main contribution, a novel interface theory called Modal Interface Automata (MIA) is introduced: MIA is a rich subset of IOMTS featuring explicit output-must-transitions while input-transitions are always allowed implicitly, is equipped with compositional parallel, conjunction and disjunction operators, and allows a simpler embedding of IA than Nyman’s. Thus, it fixes the shortcomings of related work, without restricting designers to deterministic interfaces as Raclet et al.’s modal interface theory does. address: - '[a]{}Software Technologies Research Group, University of Bamberg, 96045 Bamberg, Germany' - '[b]{}Institute for Computer Science, University of Augsburg, 86135 Augsburg, Germany' author: - 'Gerald L[ü]{}ttgena' - Walter Voglerb bibliography: - 'literature.bib' title: Modal Interface Automata ---

Introduction {#sec:introduction}
============

Interfaces play an important role when designing complex software and hardware systems so as to be able to check interoperability of system components already at the design stage. Early interface theories deal with types of data and operations only and have been successfully deployed in compilers. 
Over the past two decades, research has focused on more advanced interface theories for *sequential* and object-oriented software systems, where interfaces also comprise behavioural types. Such types are often referred to as *contracts* [@Mey92] and can express pre- and post-conditions and invariants of methods and classes. Much progress has been made on the design of contract languages and on automated verification techniques that can decide whether a system component meets its contract (cf. [@HatLeaLeinMuePar2012] for a survey). More recently, *behavioural interfaces* have also been proposed and are being investigated for the use in *concurrent* systems, with prominent application examples being embedded systems (e.g., [@MayGru2005]) and web services (e.g., [@BeyChaHenSes2007; @MerBjo2003]). In this context, behavioural interfaces are intended to capture protocol aspects of component interaction. One prominent example of such an interface theory is de Alfaro and Henzinger’s *Interface Automata* (IA) [@DeAHen2001; @DeAHen2005], which is based on labelled transition systems (LTS) but distinguishes a component’s input and output actions. The theory comes with an asymmetric parallel composition operator, where a component may wait on inputs but never on outputs. Thus, a component’s output must be consumed immediately, or an error occurs. In case no potential system environment may restrict the system components’ behaviour so that all errors are avoided, the components are deemed to be incompatible. Semantically, IA employs a refinement notion based on an alternating simulation, such that a component satisfies an interface if (a) it implements all input behaviour prescribed by the interface and (b) the interface permits all output behaviour executed by the implementing component. Accordingly and surprisingly, an output in a specification can always be ignored in an implementation. 
In particular, a component that consumes all inputs but never produces any output satisfies any interface. Since a specifier certainly wants to be able to prescribe at least some outputs, Larsen, Nyman and Wasowski have built their interface theory on Modal Transition Systems (MTS) [@Lar89] rather than LTS, which enables one to distinguish between may- and must-transitions and thus to express mandatory outputs. The resulting *IOMTS* interface theory [@LarNymWas2007], into which IA can be embedded, is equipped with an IA-style parallel composition and an MTS-style modal refinement. Unfortunately, IOMTS-modal refinement is not a precongruence (i.e., not compositional) for parallel composition; a related result in [@LarNymWas2007] has already been shown incorrect by Raclet et al. in [@RacBadBenCaiLegPas2011]. The present article starts from the observation that the above interface theories are missing one important operator, namely conjunction on interfaces. Conjunction is needed in practice since components are often designed to satisfy multiple interfaces simultaneously, each of which specifies a particular aspect of component interaction. Indeed, conjunction is a key operator when specifying and developing systems from different viewpoints as is common in modern software engineering. We thus start off by recalling the IA-setting and defining a conjunction operator ${\wedge}$ for IA; we prove that ${\wedge}$ is indeed conjunction, i.e., the greatest lower bound wrt. alternating simulation (cf. Sec. \[sec:ia\]). Essentially the same operator has recently and independently been defined in [@CheChiJonKwi2012], where it is shown that it gives the greatest lower bound wrt. a *trace-based* refinement relation. As an aside, we also develop and investigate the dual disjunction operator ${\vee}$ for IA. This is a natural operator for describing alternatives in loose specifications, thus leaving implementation decisions to implementors. 
Similarly, we define conjunction and disjunction operators for a slight extension of MTS (a subset of *Disjunctive MTS* [@LarXin90], cf. Sec. \[sec:dmts\]), which paves the way for our main contribution outlined below. Although Larsen has already studied conjunction and disjunction for MTS, his operators, in contrast to ours, do not preserve the MTS-property of syntactic consistency, i.e., a conjunction or disjunction almost always has some required transitions (must-transitions) that are not allowed (missing may-transitions). An additional difficulty when compared to the IA-setting is that two MTS-interfaces may not have a common implementation; indeed, inconsistencies may arise when composing MTSs conjunctively. We handle inconsistencies in a two-stage definition of conjunction, adapting ideas from our prior work on conjunction in a CSP-style process algebra [@LueVog2010] that uses, however, a very different parallel operator and refinement preorder. In [@BenCerKre2011], a conjunction for Disjunctive MTS (DMTS) is introduced in a two-stage style, too. Our construction and results for conjunction significantly extend the ones of [@BenCerKre2011] in that we also treat internal transitions that, e.g., result from communication. Note also that our setting employs event-based communication via handshake and thus differs substantially from the one of shared-memory communication studied by Abadi and Lamport in their paper on conjoining specifications [@AbaLam95]. The same comment applies to Doyen et al. [@DoyHenJobPet2008], who have studied a conjunction operator for an interface theory involving shared-variable communication. Our article’s main contribution is a novel interface theory, called *Modal Interface Automata* (MIA), which is essentially a rich subset of IOMTS that still allows one to express output-must-transitions. 
In contrast to IOMTS, must-transitions can also be disjunctive, and input-transitions are either required (i.e., must-transitions) or allowed implicitly. MIA is equipped with an MTS-style conjunction ${\wedge}$, disjunction ${\vee}$ and an IOMTS-style parallel composition operator, as well as with a slight adaptation of IOMTS-refinement. We show that (i) MIA-refinement is a precongruence for all three operators; (ii) ${\wedge}$ (${\vee}$) is indeed conjunction (disjunction) for this preorder; and (iii) IA can be embedded into MIA in a much cleaner, homomorphic fashion than into IOMTS [@LarNymWas2007] (cf. Sec. \[sec:mia\]). Thereby, we remedy the shortcomings of related work while, unlike the language-based modal interface theory of [@RacBadBenCaiLegPas2011], still permitting nondeterminism in specifications. Conjunction and Disjunction for Interface Automata {#sec:ia} ================================================== *Interface Automata* (IA) were introduced by de Alfaro and Henzinger [@DeAHen2001; @DeAHen2005] as a *reactive type* theory that abstractly describes the communication behaviour of software or hardware components in terms of their inputs and outputs. IAs are labelled transition systems where visible actions are partitioned into inputs and outputs. The idea is that interfaces interact with their environment according to the following rules. An interface cannot block an incoming input in any state but, if an input arrives unexpectedly, it is treated as a catastrophic system failure. This means that, if a state does not enable an input, this is a requirement on the environment not to produce this input. Vice versa, an interface guarantees not to produce any unspecified outputs, which are in turn inputs to the environment. This intuition is reflected in the specific refinement relation of *alternating simulation* between IA and in the *parallel composition* on IA, which have been defined in [@DeAHen2005] and are recalled in this section. 
Most importantly, however, we introduce and study a *conjunction operator* on IA, which is needed in practice to reason about components that are expected to satisfy multiple interfaces. An *Interface Automaton* (IA) is a tuple $Q = (Q, I, O, {\stackrel{}{\longrightarrow}})$, where (1) $Q$ is a set of states, (2) $I$ and $O$ are disjoint input and output alphabets, resp., not containing the special, silent action $\tau$, (3) ${\stackrel{}{\longrightarrow}} \,\subseteq Q \times (I \cup O \cup \{\tau\}) \times Q$ is the *transition relation*. The transition relation is required to be *input-deterministic*, i.e., $a \in I$, $q {\stackrel{a}{\longrightarrow}} q'$ and $q {\stackrel{a}{\longrightarrow}} q''$ implies $q' = q''$. In the remainder, we write $q \!{\stackrel{a}{\longrightarrow}}$ if $q {\stackrel{a}{\longrightarrow}} q'$ for some $q'$, as well as $q \,\not\!{\stackrel{a}{\longrightarrow}}$ for its negation. \[def:ia\] In contrast to [@DeAHen2005] we do not distinguish internal actions and denote them all by $\tau$, as is often done in process algebras. We let $A$ stand for $I \cup O$, let $a$ ($\alpha$) range over $A$ ($A \cup \{\tau\}$), and introduce the following weak transition relations: $q {\stackrel{{\varepsilon}}{\Longrightarrow}} q'$ if $q ({\stackrel{\tau}{\longrightarrow}})^{\ast} q'$, and $q {\stackrel{o}{\Longrightarrow}} q'$ for $o \in O$ if $\exists q''.\, q {\stackrel{{\varepsilon}}{\Longrightarrow}} q'' {\stackrel{o}{\longrightarrow}} q'$; note that there are no $\tau$-transitions after the $o$-transition. Moreover, we define $\hat{\alpha} = {\varepsilon}$ if $\alpha = \tau$, and $\hat{\alpha} = \alpha$ otherwise. Let $P$ and $Q$ be IAs with common input and output alphabets. Relation ${\mathcal{R}} \subseteq P \times Q$ is an *alternating simulation relation* if for all ${({p},{q})} \in {\mathcal{R}}$: 1. 
$q {\stackrel{a}{\longrightarrow}} q'$ and $a \in I$ implies $\exists p'.\, p {\stackrel{a}{\longrightarrow}} p'$ and ${({p'},{q'})} \in {\mathcal{R}}$, 2. $p {\stackrel{\alpha}{\longrightarrow}} p'$ and $\alpha \in O \cup \{\tau\}$ implies $\exists q'.\, q {\stackrel{\hat{\alpha}}{\Longrightarrow}} q'$ and ${({p'},{q'})} \in {\mathcal{R}}$. We write $p {\sqsubseteq_{\textrm{IA}}}q$ and say that $p$ *IA-refines* $q$ if there exists an alternating simulation relation ${\mathcal{R}}$ such that ${({p},{q})} \in {\mathcal{R}}$. \[def:iasim\] According to the basic idea of IA, if specification $Q$ in state $q$ allows some input $a$ delivered by the environment, then the related implementation state $p$ of $P$ must allow this input immediately in order to avoid system failure. Conversely, if $P$ in state $p$ produces output $a$ to be consumed by the environment, this output must be expected by the environment even if merely $q {\stackrel{a}{\Longrightarrow}}$ rather than $q \!{\stackrel{a}{\longrightarrow}}$; this is because $Q$ could have moved unobservedly from state $q$ to some $q'$ that enables $a$. Since inputs are not treated in Def. \[def:iasim\] (ii), they are always allowed for $p$. It is easy to see that IA-refinement ${\sqsubseteq_{\textrm{IA}}}$ is a preorder on IA and the largest alternating simulation relation. Given input and output alphabets $I$ and $O$, resp., the IA $$\textit{BlackHole}_{I,O} \,{=_{\text{df}}}\, (\{ \textit{blackhole} \}, I, O, \{ (\textit{blackhole},a,\textit{blackhole}) \;|\; a \in I \})$$ IA-refines any other IA over $I$ and $O$. Conjunction on IA {#subsec:iaconj} ----------------- Two IAs with common alphabets are always logically consistent in the sense that they have a common implementation, e.g., the respective blackhole IA as noted above. This makes the definition of conjunction on IA relatively straightforward. Here and similarly later, we index a transition by the system’s name to make clear from where it originates, in case this is not obvious from the context. 
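As an illustration of the refinement notion of Def. \[def:iasim\] (and not part of the formal development), the largest alternating simulation can be prototyped as a greatest-fixpoint computation: start from all state pairs and delete violating pairs until stable. The representation of an IA as triples (source, action, target) and all function names below are our own; weak output steps are computed as $\tau^{\ast}$ followed by a single output step, exactly as for ${\stackrel{o}{\Longrightarrow}}$ above.

```python
TAU = "tau"  # the silent action

def steps(trans, state, action):
    """Strong successors of `state` under `action`."""
    return [t for (s, a, t) in trans if s == state and a == action]

def eps_closure(trans, state):
    """q ==eps==> q': states reachable via zero or more tau-steps."""
    seen, stack = {state}, [state]
    while stack:
        for t in steps(trans, stack.pop(), TAU):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_out(trans, state, o):
    """q ==o==> q': tau* followed by one o-step (no trailing taus)."""
    return {t for s in eps_closure(trans, state) for t in steps(trans, s, o)}

def ia_refines(p0, q0, trans_p, trans_q, inputs, outputs):
    """True iff p0 IA-refines q0: compute the largest alternating
    simulation by deleting violating pairs until a fixpoint is reached."""
    states = lambda tr, s0: {x for (s, _, t) in tr for x in (s, t)} | {s0}
    rel = {(p, q) for p in states(trans_p, p0) for q in states(trans_q, q0)}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # (i)  every input of the specification is matched immediately
            ok = all(any((p2, q2) in rel for p2 in steps(trans_p, p, a))
                     for a in inputs for q2 in steps(trans_q, q, a))
            # (ii) every output/tau of the implementation is weakly allowed
            ok = ok and all(
                any((p2, q2) in rel
                    for q2 in (eps_closure(trans_q, q) if a == TAU
                               else weak_out(trans_q, q, a)))
                for a in list(outputs) + [TAU]
                for p2 in steps(trans_p, p, a))
            if not ok:
                rel.discard((p, q))
                changed = True
    return (p0, q0) in rel
```

On this representation, the blackhole automaton, which loops on every input and never produces an output, indeed refines any IA over the same alphabets, as claimed above.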
Let $P = (P, I, O, {\stackrel{}{\longrightarrow}}_P)$ and $Q = (Q, I,$ $O, {\stackrel{}{\longrightarrow}}_Q)$ be IAs with common input and output alphabets and disjoint state sets $P$ and $Q$. The conjunction $P {\wedge}Q$ is defined by $(\{ p {\wedge}q \;|\; p \in P,\, q \in Q \} \cup P \cup Q, I, O, {\stackrel{}{\longrightarrow}})$, where ${\stackrel{}{\longrightarrow}}$ is the least set satisfying ${\stackrel{}{\longrightarrow}}_P \subseteq {\stackrel{}{\longrightarrow}}$, ${\stackrel{}{\longrightarrow}}_Q \subseteq {\stackrel{}{\longrightarrow}}$, and the following operational rules: ---------- --------------------------------------------------------------- ---- ------------------------------------------------------------------------------------------------------- [(I1)]{} $p {\wedge}q {\stackrel{a}{\longrightarrow}} p'$ if $p {\stackrel{a}{\longrightarrow}}_P p'$, $q \,\not\!{\stackrel{a}{\longrightarrow}}_Q$ and $a \in I$ [(I2)]{} $p {\wedge}q {\stackrel{a}{\longrightarrow}} q'$ if $p \,\not\!{\stackrel{a}{\longrightarrow}}_P$, $q {\stackrel{a}{\longrightarrow}}_Q q'$ and $a \in I$ [(I3)]{} $p {\wedge}q {\stackrel{a}{\longrightarrow}} p' {\wedge}q'$ if $p {\stackrel{a}{\longrightarrow}}_P p'$, $q {\stackrel{a}{\longrightarrow}}_Q q'$ and $a \in I$ [(O)]{} $p {\wedge}q {\stackrel{a}{\longrightarrow}} p' {\wedge}q'$ if $p {\stackrel{a}{\longrightarrow}}_P p'$, $q {\stackrel{a}{\longrightarrow}}_Q q'$ and $a \in O$ [(T1)]{} $p {\wedge}q {\stackrel{\tau}{\longrightarrow}} p' {\wedge}q$ if $p {\stackrel{\tau}{\longrightarrow}}_P p'$ [(T2)]{} $p {\wedge}q {\stackrel{\tau}{\longrightarrow}} p {\wedge}q'$ if $q {\stackrel{\tau}{\longrightarrow}}_Q q'$ ---------- --------------------------------------------------------------- ---- ------------------------------------------------------------------------------------------------------- \[def:iaandop\] ![Example illustrating IA-conjunction.[]{data-label="fig:iaandopex"}](iaandopex.png) Intuitively, conjunction is the 
synchronous product over actions (cf. Rules (I3), (O), (T1) and (T2)). Since inputs are always implicitly present, this also explains Rules (I1) and (I2); for example, in Rule (I1), $q$ does not impose any restrictions on the behaviour after input $a$ and is therefore dropped from the target state. Moreover, the conjunction operator is commutative and associative. As an aside, note that the rules with digit 2 in their names are the symmetric cases of the respective rules with digit 1; this convention will hold true throughout this article. Fig. \[fig:iaandopex\] applies the rules above to an illustrative example; here and in the following figures, we write $a?$ for an input $a$ and $a!$ for an output $a$. Essentially the same conjunction operator is defined by Chen et al. in [@CheChiJonKwi2012], where a non-standard variant of IA is studied that employs *explicit* error states and uses a trace-based semantics and refinement preorder (going back to Dill [@Dil89]). The difference between their conjunction and Def. \[def:iaandop\] is that error states are explicitly used in the clauses that correspond to Rules (I1) and (I2) above, which renders our definition arguably more elegant. In [@CheChiJonKwi2012], a theorem analogous to Thm. \[thm:iaandisand\] below is shown, but its statement is different as it refers to a different refinement preorder. Also note that, deviating from the IA-literature, error states are called inconsistent in [@CheChiJonKwi2012], but this is not related to logical inconsistency as studied by us. Our first result states that an implementation satisfies the conjunction of interfaces exactly if it satisfies each of them. This is a desired property in system design where each interface describes one aspect (or view) of the overall specification. Let $P, Q, R$ be IAs with states $p$, $q$, $r$, resp. Then, $r {\sqsubseteq_{\textrm{IA}}}p$ and $r {\sqsubseteq_{\textrm{IA}}}q$ if and only if $r {\sqsubseteq_{\textrm{IA}}}p {\wedge}q$. 
\[thm:iaandisand\] Technically, this result states that ${\wedge}$ gives the greatest lower-bound wrt. ${\sqsubseteq_{\textrm{IA}}}$ (up to equivalence), and its proof uses the input-determinism property of IA. The theorem also implies compositional reasoning; from universal algebra one easily gets: For IAs $P, Q, R$ with states $p$, $q$ and $r$: $\,p {\sqsubseteq_{\textrm{IA}}}q$ $\;\Longrightarrow\;$ $p {\wedge}r{\sqsubseteq_{\textrm{IA}}}q {\wedge}r$. \[cor:iaandopcomp\] Disjunction on IA {#subsec:iadisj} ----------------- In analogy to conjunction we develop a disjunction operator on IA and discuss its properties; in particular, this operator should give the least upper bound. Let $P = (P, I, O, {\stackrel{}{\longrightarrow}}_P)$ and $Q = (Q, I,$ $O, {\stackrel{}{\longrightarrow}}_Q)$ be IAs with common input and output alphabets and disjoint state sets $P$ and $Q$. The disjunction $P {\vee}Q$ is defined by $(\{ p {\vee}q \;|\; p \in P,\, q \in Q \} \cup P \cup Q, I, O, {\stackrel{}{\longrightarrow}})$, where ${\stackrel{}{\longrightarrow}}$ is the least set satisfying ${\stackrel{}{\longrightarrow}}_P \subseteq {\stackrel{}{\longrightarrow}}$, ${\stackrel{}{\longrightarrow}}_Q \subseteq {\stackrel{}{\longrightarrow}}$ and the following operational rules: ----------- --------------------------------------------------------- ---- -------------------------------------------------------------------------------------------------- [(I)]{} $p {\vee}q {\stackrel{a}{\longrightarrow}} p' {\vee}q'$ if $p {\stackrel{a}{\longrightarrow}}_P p'$, $q {\stackrel{a}{\longrightarrow}}_Q q'$ and $a \in I$ [(OT1)]{} $p {\vee}q {\stackrel{\alpha}{\longrightarrow}} p'$ if $p {\stackrel{\alpha}{\longrightarrow}}_P p'$ and $\alpha \in O \cup \{\tau\}$ [(OT2)]{} $p {\vee}q {\stackrel{\alpha}{\longrightarrow}} q'$ if $q {\stackrel{\alpha}{\longrightarrow}}_Q q'$ and $\alpha \in O \cup \{\tau\}$ ----------- --------------------------------------------------------- ---- 
-------------------------------------------------------------------------------------------------- \[def:iaorop\] Note that this definition preserves the input-determinism required of IA. The definition is roughly dual to the one of IA-conjunction, i.e., we take the ‘intersection’ of initial input behaviour and the ‘union’ of initial output behaviour. Strictly speaking, this would require the following additional rule for outputs $o \in O$: ---------- --------------------------------------------------------- ---- --------------------------------------------------------------------------------------- [(O3)]{} $p {\vee}q {\stackrel{o}{\longrightarrow}} p' {\vee}q'$ if $p {\stackrel{o}{\longrightarrow}}_P p'$ and $q {\stackrel{o}{\longrightarrow}}_Q q'$ ---------- --------------------------------------------------------- ---- --------------------------------------------------------------------------------------- However, the addition of this rule would in general result in disjunctions $p {\vee}q$ that are larger than the least upper bound of $p$ and $q$ wrt. ${\sqsubseteq_{\textrm{IA}}}$. The following theorem shows that our ${\vee}$-operator properly characterizes the least upper bound: Let $P, Q, R$ be IAs with states $p$, $q$ and $r$, resp. Then, $p {\vee}q {\sqsubseteq_{\textrm{IA}}}r$ if and only if $p {\sqsubseteq_{\textrm{IA}}}r$ and $q {\sqsubseteq_{\textrm{IA}}}r$. \[thm:iaorisor\] Compositionality of disjunction can now be derived dually to the proof of Corollary \[cor:iaandopcomp\] but using Thm. \[thm:iaorisor\] instead of Thm. \[thm:iaandisand\]: For IAs $P, Q, R$ with states $p$, $q$ and $r$: $\,p {\sqsubseteq_{\textrm{IA}}}q$ $\;\Longrightarrow\;$ $p {\vee}r{\sqsubseteq_{\textrm{IA}}}q {\vee}r$. \[cor:iaoropcomp\] ![Example illustrating IA-disjunction’s different treatment of inputs and outputs.[]{data-label="fig:iaoropex"}](iaoropex.png) The two examples of Fig. 
\[fig:iaoropex\] round off our investigation of IA disjunction by illustrating the operator’s different treatment of inputs and outputs. Regarding $p {\vee}q$ on the figure’s left-hand side, the choice of which disjunct to implement is taken with the first action $o \in O$ if both disjuncts are implemented; this meets the intuition of an inclusive-or. In the analogous situation of $r {\vee}s$ on the figure’s right-hand side, a branching on $i \in I$ is not allowed due to input-determinism, and the resulting IA is thus intuitively unsatisfactory. The root cause for this is that the IA-setting does not include sufficiently many automata and, therefore, the least upper bound is ‘too large’. The shortcoming can be remedied by introducing disjunctive transitions, as we will do below in the dMTS- and MIA-settings. Then, we will have more automata and, indeed, will get a smaller least upper bound.\[fromtena\] Parallel Composition on IA {#subsec:iaparop} -------------------------- We recall the parallel composition operator ${|}$ on IA of [@DeAHen2005], which is defined in two stages: first a standard product ${\otimes}$ between two IAs is introduced, where common actions are synchronized and hidden. Then, error states are identified, and all states are pruned from which reaching an error state is unavoidable. IAs $P_1$ and $P_2$ are called *composable* if $A_1 \cap A_2 = (I_1 \cap O_2) \cup (O_1 \cap I_2)$, i.e., each common action is input of one IA and output of the other IA. 
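Composability is a purely alphabetic condition and can be checked directly. The following sketch (function names are ours) also derives the alphabets of the product, in which synchronized actions become internal, as in the definition of $I$ and $O$ below.

```python
def composable(i1, o1, i2, o2):
    """Common actions must be input of one IA and output of the other."""
    shared = (i1 | o1) & (i2 | o2)
    return shared == (i1 & o2) | (o1 & i2)

def product_alphabets(i1, o1, i2, o2):
    """Alphabets of P1 (x) P2: synchronized actions are hidden."""
    return (i1 | i2) - (o1 | o2), (o1 | o2) - (i1 | i2)
```

For the *Client*/*TryOnce* example of Fig. \[fig:iaparopex\], the shared actions *send*, *ok* and *retry* are hidden in the product, leaving inputs {*ack*, *nack*} and outputs {*trnsmt*, *reset*}.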
For such IAs we define the *product* $P_1 {\otimes}P_2 = (P_1 \times P_2, I, O, {\stackrel{}{\longrightarrow}})$, where $I = (I_1 \cup I_2) \setminus (O_1 \cup O_2)$ and $O = (O_1 \cup O_2) \setminus (I_1 \cup I_2)$ and where ${\stackrel{}{\longrightarrow}}$ is given by the following operational rules: ------------ ------------------------------------------------------------------------- ---- --------------------------------------------------------------------------------------------------------- [(Par1)]{} ${({p_1},{p_2})} {\stackrel{\alpha}{\longrightarrow}} {({p'_1},{p_2})}$ if $p_1 {\stackrel{\alpha}{\longrightarrow}} p'_1$ and $\alpha \notin A_2$ [(Par2)]{} ${({p_1},{p_2})} {\stackrel{\alpha}{\longrightarrow}} {({p_1},{p'_2})}$ if $p_2 {\stackrel{\alpha}{\longrightarrow}} p'_2$ and $\alpha \notin A_1$ [(Par3)]{} ${({p_1},{p_2})} {\stackrel{\tau}{\longrightarrow}} {({p'_1},{p'_2})}$ if $p_1 {\stackrel{a}{\longrightarrow}} p'_1$ and $p_2 {\stackrel{a}{\longrightarrow}} p'_2$ for some $a$. ------------ ------------------------------------------------------------------------- ---- --------------------------------------------------------------------------------------------------------- \[def:iaparprod\] Note that, in case of synchronization and according to Rule (Par3), one only gets internal $\tau$-transitions. A state ${({p_1},{p_2})}$ of a parallel product $P_1 {\otimes}P_2$ is an *error state* if there is some $a \in A_1 \cap A_2$ such that (a) $a \in O_1$, $p_1 \!{\stackrel{a}{\longrightarrow}}$ and $p_2 \,\not\!{\stackrel{a}{\longrightarrow}}$, or (b) $a \in O_2$, $p_2 \!{\stackrel{a}{\longrightarrow}}$ and $p_1 \,\not\!{\stackrel{a}{\longrightarrow}}$. A state of $P_1 {\otimes}P_2$ is *incompatible* if it may reach an error state autonomously, i.e., only by output or internal actions that are, intuitively, locally controlled. 
Formally, the set $E \subseteq P_1 \times P_2$ of incompatible states is the least set such that ${({p_1},{p_2})} \in E$ if (i) ${({p_1},{p_2})}$ is an error state or (ii) ${({p_1},{p_2})} {\stackrel{\alpha}{\longrightarrow}} {({p'_1},{p'_2})}$ for some $\alpha \in O \cup \{\tau\}$ and ${({p'_1},{p'_2})} \in E$. The *parallel composition* $P_1 {|}P_2$ of $P_1, P_2$ is obtained from $P_1 {\otimes}P_2$ by *pruning*, i.e., removing all states in $E$ and all transitions involving such states as source or target. If ${({p_1},{p_2})} \in P_1 {|}P_2$, we write $p_1 {|}p_2$ and call $p_1$ and $p_2$ *compatible*. \[def:iaparop\] Parallel composition is well-defined since input-determinism is preserved. Let $P_1$, $P_2$ and $Q_1$ be IAs with $p_1 \in P_1$, $p_2 \in P_2$, $q_1 \in Q_1$ and $p_1 {\sqsubseteq_{\textrm{IA}}}q_1$. Assume that $Q_1$ and $P_2$ are composable; then, (a) $P_1$ and $P_2$ are composable and (b) if $q_1$ and $p_2$ are compatible, then so are $p_1$ and $p_2$ and $p_1 {|}p_2 {\sqsubseteq_{\textrm{IA}}}q_1 {|}p_2$. \[thm:iaparopcomp\] This result relies on the fact that IAs are input-deterministic. While the theorem is already stated in [@DeAHen2005], its proof is only sketched therein. Here, it is a simple corollary of Thm. \[thm:miaparopcomp\] in Sec. \[subsec:miaparop\] and Thms. \[thm:iaembeddingmia\] and \[thm:miaembedding\](b) in Sec. \[subsec:embedding\] below. ![Example illustrating IA-parallel composition, where IA *TryOnce* has inputs $\{\textit{send, ack, nack}\}$ and outputs $\{\textit{trnsmt, ok, reset, retry}\}$, while IA *Client* has inputs $\{\textit{ok, retry}\}$ and outputs $\{\textit{send}\}$.[]{data-label="fig:iaparopex"}](iaparopex.png) We conclude by presenting a small example of IA-parallel composition in Fig. \[fig:iaparopex\], which is adapted from [@DeAHen2005]. *Client* does not accept its input *retry*. 
Thus, if the environment of $\textit{Client} {\otimes}\textit{TryOnce}$ produced *nack*, the system would autonomously produce *reset* and run into a catastrophic error. To avoid this, the environment of $\textit{Client} \,{|}\textit{TryOnce}$ is required not to produce *nack*. This view is called optimistic: there exists an environment in which *Client* and *TryOnce* can cooperate without errors, and $\textit{Client} \,{|}\textit{TryOnce}$ describes the necessary requirements for such an environment. In the pessimistic view as advocated in [@BauHenWir2011], *Client* and *TryOnce* are regarded as incompatible due to the potential error. Conjunction and Disjunction for Modal Transition Systems {#sec:dmts} ======================================================== *Modal Transition Systems* (MTS) were investigated by Larsen [@Lar89] as a specification framework based on labelled transition systems but with two kinds of transitions: must-transitions specify required behaviour, may-transitions specify allowed behaviour, and absent transitions specify forbidden behaviour. Any refinement of an MTS-specification must preserve required and forbidden behaviour and may turn allowed behaviour into required or forbidden behaviour. Technically, this is achieved via an alternating-style simulation relation, called *modal refinement*, where any must-transition of the specification must be simulated by an implementation, while any may-transition of the implementation must be simulated by the specification. Our aim in this section is to extend MTS with conjunction and also disjunction. Larsen [@Lar89] first defined conjunction and disjunction on MTS (without $\tau$), but the resulting systems often violate syntactic consistency (they are not really MTSs) and are hard to understand. This construction was subsequently generalized by Larsen and Xinxin to Disjunctive MTS (DMTS) [@LarXin90], again ignoring syntactic consistency. This shortcoming was recently fixed by Beneš et al. 
[@BenCerKre2011] by exploiting the fact that an $a$-must-transition in a DMTS may have several alternative target states. However, this work still does not consider a weak setting, i.e., systems with $\tau$. Below, we will define conjunction and disjunction on a syntactically consistent subclass of DMTS, called *dMTS*, but more generally in a weak setting as defined in [@DeAHen2005; @LarNymWas2007]; this subclass is sufficient for the purposes of the present article, and we leave the extension of our results to DMTS for future work. Since the treatment of $\tau$-transitions is non-trivial and non-standard, we will motivate and explain it in detail. Note that this section will not consider parallel composition for (d)MTS. This is because we are working towards the MIA-setting that will be introduced in the next section, which like IA and unlike (d)MTS distinguishes between inputs and outputs. (d)MTS parallel composition can simply be defined in a style similar to Def. \[def:iaparprod\]; in particular, it does not have error states and thus fundamentally differs from conjunction as defined below. Disjunctive Modal Transition Systems {#subsec:dmts} ------------------------------------ We extend standard MTS only as far as needed for defining conjunction and disjunction, by introducing disjunctive must-transitions that are disjunctive wrt. exit states only (see Fig. \[fig:dmtsandopex\]). The following extension also has no $\tau$-must-transitions since these are not considered in the definition of the observational modal refinement of [@LarNymWas2007]. 
A *disjunctive Modal Transition System* (dMTS) is a tuple $Q = (Q, A, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where (1) $Q$ is a set of states, (2) $A$ is an alphabet not containing the special, silent action $\tau$, (3) ${\stackrel{}{\longrightarrow}} \,\subseteq Q \times A \times {({\mathcal{P}({Q})} \setminus \emptyset)}$ is the *must-transition* relation, (4) ${\stackrel{}{\dashrightarrow}} \,\subseteq Q \times (A \cup \{\tau\}) \times Q$ is the *may-transition* relation. We require *syntactic consistency*, i.e., $q {\stackrel{a}{\longrightarrow}} Q'$ implies $\forall q' {\in} Q'.\, q {\stackrel{a}{\dashrightarrow}} q'$. \[def:dmts\] More generally, the must-transition relation in a standard DMTS [@LarXin90] may be a subset of $Q \times {({\mathcal{P}({A \times Q})} \setminus \emptyset)}$. For notational convenience, we write $q {\stackrel{a}{\longrightarrow}} q'$ whenever $q {\stackrel{a}{\longrightarrow}} {\{{q'}\}}$; all must-transitions in standard MTS have this form. Our refinement relation on dMTS abstracts from internal computation steps in the same way as [@LarNymWas2007], i.e., by considering the following *weak may-transitions* for $\alpha \in A \cup \{\tau\}$: $q {\,\raisebox{1.0ex}{$\stackrel{{\varepsilon}}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}\,} q'$ if $q {\stackrel{\tau}{\dashrightarrow}}^{\ast}\! q'$, and $q {\,\raisebox{1.0ex}{$\stackrel{\alpha}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}\,} q'$ if $\exists q''.\, q {\,\raisebox{1.0ex}{$\stackrel{{\varepsilon}}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}\,} q'' {\stackrel{\alpha}{\dashrightarrow}} q'$. Let $P, Q$ be dMTSs. Relation ${\mathcal{R}} \subseteq P \times Q$ is an *(observational) modal refinement relation* if for all ${({p},{q})} \in {\mathcal{R}}$: 1. 
$q {\stackrel{a}{\longrightarrow}} Q'$ implies $\exists P'.\, p {\stackrel{a}{\longrightarrow}} P'$ and $\forall p' {\in} P'\,\exists q' {\in} Q'.\; {({p'},{q'})} \in {\mathcal{R}}$, 2. $p {\stackrel{\alpha}{\dashrightarrow}} p'$ implies $\exists q'.\, q {\,\raisebox{1.0ex}{$\stackrel{\hat{\alpha}}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}\,} q'$ and ${({p'},{q'})} \in {\mathcal{R}}$. We write $p {\sqsubseteq_{\textrm{dMTS}}}q$ and say that $p$ *dMTS-refines* $q$ if there exists an observational modal refinement relation ${\mathcal{R}}$ such that ${({p},{q})} \in {\mathcal{R}}$. \[def:dmtssim\] Again, ${\sqsubseteq_{\textrm{dMTS}}}$ is a preorder and the largest observational modal refinement relation. Except for disjunctiveness, dMTS-refinement is exactly defined as for MTS in [@LarNymWas2007]. In the following figures, any (disjunctive) must-transition drawn also represents implicitly the respective may-transition(s), unless explicitly stated otherwise. Conjunction on dMTS {#subsec:dmtsconj} ------------------- Technically similar to parallel composition for IA, conjunction will be defined in two stages. State pairs can be logically inconsistent due to unsatisfiable must-transitions; in the second stage, we remove such pairs incrementally. Let $P = (P, A, {\stackrel{}{\longrightarrow}}_P,$ ${\stackrel{}{\dashrightarrow}}_P)$ and $Q = (Q, A, {\stackrel{}{\longrightarrow}}_Q, {\stackrel{}{\dashrightarrow}}_Q)$ be dMTSs with common alphabet. 
The conjunctive product $P {\&}Q {=_{\text{df}}}(P \times Q, A, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$ is defined by its operational transition rules as follows: ------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [(Must1)]{} ${({p},{q})} {\stackrel{a}{\longrightarrow}} if $p {\stackrel{a}{\longrightarrow}}_P P'$ and $q {\,\raisebox{1.0ex}{$\stackrel{a}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{Q}\,}$ {\{{{({p'},{q'})}}\,|\,{p' \in P',\, q {\,\raisebox{1.0ex}{$\stackrel{a}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{Q}\,} q'}\}}$ [(Must2)]{} ${({p},{q})} {\stackrel{a}{\longrightarrow}} if $p {\,\raisebox{1.0ex}{$\stackrel{a}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{P}\,}$ and $q {\stackrel{a}{\longrightarrow}}_Q Q'$ {\{{{({p'},{q'})}}\,|\,{p {\,\raisebox{1.0ex}{$\stackrel{a}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{P}\,} p',\, q' \in Q'}\}}$ [(May1)]{} ${({p},{q})} {\stackrel{\tau}{\dashrightarrow}} {({p'},{q})}$ if $p {\,\raisebox{1.0ex}{$\stackrel{\tau}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{P}\,} p'$ [(May2)]{} ${({p},{q})} {\stackrel{\tau}{\dashrightarrow}} {({p},{q'})}$ if $q {\,\raisebox{1.0ex}{$\stackrel{\tau}{\underset{\text{\normalsize 
$\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{Q}\,} q'$ [(May3)]{} ${({p},{q})} {\stackrel{\alpha}{\dashrightarrow}} {({p'},{q'})}$ if $p {\,\raisebox{1.0ex}{$\stackrel{\alpha}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{P}\,} p'$ and $q {\,\raisebox{1.0ex}{$\stackrel{\alpha}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{Q}\,} q'$ ------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \[def:dmtsconjprod\] It might be surprising that a single transition in the product might stem from a transition sequence in one of the components (cf. the first four items above) and that the components can also synchronize on $\tau$ (cf. Rule (May3)). The necessity of this is discussed below; we only repeat here that conjunction is inherently different from parallel composition where, for instance, there is no synchronization on $\tau$. 
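The rules of Def. \[def:dmtsconjprod\] all refer to weak may-steps, which are operationally a simple reachability computation over the may-transition relation. The following sketch (our own notation, with may-transitions as (source, action, target) triples) computes the $\tau$-closure and the weak step "$\tau^{\ast}$ followed by one $\alpha$-step", where $\alpha$ may itself be $\tau$, as in the $\tau$-case of Rule (May3).

```python
TAU = "tau"  # the silent action

def eps_closure(may, state):
    """Weak eps-step: states reachable via zero or more tau-may-steps."""
    seen, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for (src, a, t) in may:
            if src == s and a == TAU and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_may(may, state, action):
    """Weak alpha-step: tau* followed by exactly one `action`-step
    (`action` may itself be tau)."""
    return {t for s in eps_closure(may, state)
            for (src, a, t) in may if src == s and a == action}
```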
Given a conjunctive product $P {\&}Q$, the set ${F}\subseteq P \times Q$ of *(logically) inconsistent states* is defined as the least set satisfying the following rules: ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------- ----------------------- [(F1)]{} $p \!{\stackrel{a}{\longrightarrow}}_P$, $q \not\!\!\!{\,\raisebox{1.0ex}{$\stackrel{a}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{Q}\,}$ implies ${({p},{q})} \in {F}$ [(F2)]{} $p \not\!\!\!{\,\raisebox{1.0ex}{$\stackrel{a}{\underset{\text{\normalsize $\dashrightarrow$}}{\raisebox{-1.0ex}[0ex][0ex]{$\dashrightarrow$}}}$}_{P}\,}$, $q \!{\stackrel{a}{\longrightarrow}}_Q$ implies ${({p},{q})} \in {F}$ [(F3)]{} ${({p},{q})} {\stackrel{a}{\longrightarrow}} R'$ and $R' \subseteq {F}$ implies ${({p},{q})} \in {F}$ ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------- ----------------------- The conjunction $P {\wedge}Q$ of dMTSs $P, Q$ is obtained by deleting all states ${({p},{q})} \in {F}$ from $P {\&}Q$. This also removes any may- or must-transition exiting a deleted state and any may-transition entering a deleted state; in addition, deleted states are removed from targets of disjunctive must-transitions. We write $p {\wedge}q$ for the state ${({p},{q})}$ of $P {\wedge}Q$; these are the consistent states by construction, and $p {\wedge}q$ is only defined for such a state. \[def:dmtsandop\] Regarding well-definedness, first observe that $P {\&}Q$ is a dMTS, where syntactic consistency follows from Rule (May3). 
Now, $P {\wedge}Q$ is a dMTS, too: if $R'$ becomes empty for some ${({p},{q})} {\stackrel{a}{\longrightarrow}} R'$, then also ${({p},{q})}$ is deleted when constructing $P {\wedge}Q$ from $P {\&}Q$, according to (F3). Finally, our conjunction operator is also commutative and associative.

![Examples motivating the rules of Def. \[def:dmtsconjprod\].[]{data-label="fig:exdmtsconj"}](exdmtsconj.png)

Before we formally state that the operator ${\wedge}$ is indeed conjunction on dMTS, we present several examples, depicted in Fig. \[fig:exdmtsconj\], which motivate the rules of Def. \[def:dmtsconjprod\]. In each case, $r$ is a common implementation of $p$ and $q$ (but not $r'$ in Ex. I), whence these must be logically consistent. Thus, Ex. I explains Rule (Must1). If we only had ${\stackrel{\tau}{\dashrightarrow}}$ in the precondition of Rule (May1), $p {\wedge}q$ of Ex. II would consist of just a $c$-must- and an $a$-may-transition; the only $\tau$-transition would lead to a state in ${F}$ due to $b$. This would not allow the $\tau$-transition of $r$, which explains Rule (May1). In Ex. III, with only ${\stackrel{\alpha}{\dashrightarrow}}$ in the preconditions of Rule (May3), $p {\wedge}q$ would just have three $\tau$-transitions to inconsistent states (due to $b$ and $c$, resp.). This explains the weak transitions for $\alpha \not= \tau$ in Rule (May3). According to Rules (May1) and (May2), $p {\wedge}q$ in Ex. IV has four $\tau$-transitions to states in ${F}$ (due to $d$). With preconditions based on at least one ${\stackrel{\tau}{\dashrightarrow}}$ instead of $\stackrel{\tau}{\Longrightarrow}$ in the $\tau$-case of Rule (May3), there would be three more $\tau$-transitions to states in ${F}$ (due to $b$ or $c$).
Thus, it is essential that Rule (May3) also allows the synchronization of two weak $\tau$-transitions, which in this case gives $p {\wedge}q {\stackrel{\tau}{\dashrightarrow}} p' {\wedge}q'$.

![Example illustrating dMTS-conjunction.[]{data-label="fig:dmtsandopex"}](dmtsandopex.png)

Fig. \[fig:dmtsandopex\] shows a small example illustrating the treatment of disjunctive must-transitions in the presence of inconsistency. In $P {\&}Q$, the $a$-must-transition of $Q$ combines with the three $a$-transitions of $P$ into a truly disjunctive must-transition with a three-element target set. The inconsistency of state $(4,6)$ due to $b$ propagates back to state $(3,5)$. The inconsistent states are then removed in $P {\wedge}Q$.

Let $P, Q, R$ be dMTSs. Then, (i) $(\exists r \in R.\, r {\sqsubseteq_{\textrm{dMTS}}}p$ and $r {\sqsubseteq_{\textrm{dMTS}}}q)$ if and only if $p {\wedge}q$ is defined. In addition, in case $p {\wedge}q$ is defined: (ii) $r {\sqsubseteq_{\textrm{dMTS}}}p$ and $r {\sqsubseteq_{\textrm{dMTS}}}q \text{ if and only if } r {\sqsubseteq_{\textrm{dMTS}}}p {\wedge}q$. \[thm:dmtsandisand\]

This key theorem states in Item (ii) that conjunction behaves as it should, i.e., ${\wedge}$ on dMTSs is the greatest lower bound wrt. ${\sqsubseteq_{\textrm{dMTS}}}$. Item (i) concerns the intuition that two specifications $p$ and $q$ are logically inconsistent if they do not have a common implementation; formally, $p {\wedge}q$ is undefined in this case. Alternatively, we could have added an explicit inconsistent element ${\textit{ff}}$ to our setting, so that $p {\wedge}q = {\textit{ff}}$. This element ${\textit{ff}}$ would be defined to be a refinement of every $p'$ and equivalent to any ${({p'},{q'})} \in {F}$ of some $P {\&}Q$. Additionally, ${\textit{ff}}{\wedge}p'$ and $p' {\wedge}{\textit{ff}}$ would be defined as ${\textit{ff}}$, for any $p'$.
The proof of the above theorem requires us to first introduce the following concept for formally reasoning about inconsistent states: A *dMTS-witness* $W$ of $P {\&}Q$ is a subset of $P \times Q$ such that the following conditions hold for all ${({p},{q})} \in W$:

(W1) $p {\stackrel{a}{\longrightarrow}}_P$ implies $q \stackrel{a}{\Longrightarrow}_Q$
(W2) $q {\stackrel{a}{\longrightarrow}}_Q$ implies $p \stackrel{a}{\Longrightarrow}_P$
(W3) ${({p},{q})} {\stackrel{a}{\longrightarrow}} R'$ implies $R' \cap W \not= \emptyset$

\[def:dmtswitness\] Conditions (W1)–(W3) correspond to the negations of the premises of Conditions (F1)–(F3) in Def. \[def:dmtsandop\]. This implies Part (i) of the following lemma, while Part (ii) is essential for proving Thm. \[thm:dmtsandisand\](i):

Let $P {\&}Q$ be a conjunctive product of dMTSs and $R$ be a dMTS.

1. For any dMTS-witness $W$ of $P {\&}Q$, we have ${F}\cap W = \emptyset$.
2. The set $\{ {({p},{q})} \in P \times Q \;|\; \exists r \in R.\, r {\sqsubseteq_{\textrm{dMTS}}}p$ and $r {\sqsubseteq_{\textrm{dMTS}}}q \}$ is a dMTS-witness of $P {\&}Q$.

\[lem:dmtswitness\] We are now able to prove Thm. \[thm:dmtsandisand\]. The following corollary of Thm. \[thm:dmtsandisand\] now easily follows: dMTS-refinement is compositional wrt.
conjunction, i.e., if $p {\sqsubseteq_{\textrm{dMTS}}}q$ and $p {\wedge}r$ is defined, then $q {\wedge}r$ is defined and $p {\wedge}r {\sqsubseteq_{\textrm{dMTS}}}q {\wedge}r$. \[cor:dmtsandopcomp\] Thus, we have succeeded in our ambition to define a syntactically consistent conjunction for MTS, more precisely for a weak MTS-variant with disjunctive must-transitions.

![Example illustrating Larsen’s MTS-conjunction; ${\stackrel{a}{\dashrightarrow}}$ drawn separately.[]{data-label="fig:larsen"}](larsen.png)

Larsen [@Lar89] also defines a conjunction operator on MTS, but almost always the result violates syntactic consistency. A simple example is shown in Fig. \[fig:larsen\], where $q$ refines $p$ in Larsen’s setting as well as in our dMTS-setting; in this figure, may-transitions are drawn explicitly, i.e., a must-transition is not necessarily also a may-transition. Since Larsen’s $p \land q$ is not syntactically consistent, this $p \land q$ and $q$ are, contrary to the first impression, equivalent. In our dMTS-setting, $P {\wedge}Q$ is isomorphic to $Q$, which will also hold for our MIA-setting below (with action $b$ read as output and where $a$ could be either an input or an output).

![Example showing that conjunction cannot be defined on MTS. (A similar example is given in [@BenCerKre2011] without proof.)[]{data-label="fig:mtsand"}](mtsand.png)

Indeed, conjunction cannot be defined on MTS in general, e.g., for the $P$ and $Q$ in Fig. \[fig:mtsand\](a). The states $p$ and $q$ have $r$ as well as $s$ as common implementations; thus, $r$ and $s$ must be implementations of $p {\wedge}q$. An MTS $P {\wedge}Q$ would need, in state $p {\wedge}q$, (i) an immediate $a$-must-transition (due to $q$) followed by (ii) a must-$b$ and no $c$, or a must-$c$ and no $b$ (due to $p$). In the first (second) case, $s$ ($r$) is not an implementation of $p {\wedge}q$, which is a contradiction. Using dMTS, the conjunction $P {\wedge}Q$ is as shown in Fig. \[fig:mtsand\](b).
The above shortcoming of MTS has been avoided by Larsen et al. in [@LarSteWei95] by limiting conjunction to so-called *independent* specifications that make inconsistencies obsolete; this restriction also excludes the above example. Recently, Bauer et al. [@BauJuhLarLegSrb2012] have defined conjunction for a version of MTS extended by partially ordered labels; when refining an MTS, also the labels can be refined, and this has various applications. However, the conjunction operator is only defined under some restriction, which corresponds to requiring determinism in the standard MTS-setting. Another MTS-inspired theory including a conjunction operator has been introduced by Raclet et al. [@RacBadBenCaiLegPas2011]. While their approach yields the desired $p {\wedge}q$ as in our dMTS-setting, it is language-based and thus deals with deterministic systems only. Disjunction on dMTS {#subsec:dmtsdisj} ------------------- We will see in Sec. \[subsec:iaembeddingdmts\] that input-transitions (output-transitions) in IA correspond to must-transitions (may-transitions) in dMTS. In this light, the following definition of disjunction corresponds closely to the one for IA. In particular, initial must-transitions are also combined, but this time the choice between disjuncts is not delayed. Let $P = (P, A, {\stackrel{}{\longrightarrow}}_P,$ ${\stackrel{}{\dashrightarrow}}_P)$ and $Q = (Q, A, {\stackrel{}{\longrightarrow}}_Q,$ ${\stackrel{}{\dashrightarrow}}_Q)$ be dMTSs with common alphabet. 
The disjunction $P {\vee}Q$ is defined as the tuple $(\{ p {\vee}q \;|\; p \in P,\, q \in Q \} \cup P \cup Q, A, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where ${\stackrel{}{\longrightarrow}}$ and ${\stackrel{}{\dashrightarrow}}$ are the least sets satisfying ${\stackrel{}{\longrightarrow}}_P \subseteq {\stackrel{}{\longrightarrow}}$, ${\stackrel{}{\dashrightarrow}}_P \subseteq {\stackrel{}{\dashrightarrow}}$, ${\stackrel{}{\longrightarrow}}_Q \subseteq {\stackrel{}{\longrightarrow}}$, ${\stackrel{}{\dashrightarrow}}_Q \subseteq {\stackrel{}{\dashrightarrow}}$ and the following operational rules:

(Must) $p {\vee}q {\stackrel{a}{\longrightarrow}} P' \cup Q'$ if $p {\stackrel{a}{\longrightarrow}}_P P'$ and $q {\stackrel{a}{\longrightarrow}}_Q Q'$
(May1) $p {\vee}q {\stackrel{\alpha}{\dashrightarrow}} p'$ if $p {\stackrel{\alpha}{\dashrightarrow}}_P p'$
(May2) $p {\vee}q {\stackrel{\alpha}{\dashrightarrow}} q'$ if $q {\stackrel{\alpha}{\dashrightarrow}}_Q q'$

\[def:dmtsorop\] This definition clearly yields well-defined dMTSs respecting syntactic consistency. It also gives us the desired least-upper-bound property:

Let $P$, $Q$, and $R$ be dMTSs with states $p$, $q$ and $r$, resp. Then, $p {\vee}q {\sqsubseteq_{\textrm{dMTS}}}r$ if and only if $p {\sqsubseteq_{\textrm{dMTS}}}r$ and $q {\sqsubseteq_{\textrm{dMTS}}}r$. \[thm:dmtsorisor\]

Analogously to the IA-setting, we obtain the following corollary to the above theorem: dMTS-refinement is compositional wrt. disjunction.
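The three rules only add transitions for the fresh state $p {\vee}q$; all component transitions are inherited unchanged. A small Python sketch of the root's transitions, assuming an illustrative dictionary encoding of dMTSs (all names are ours, not from the paper):

```python
def dmts_disjunction(p0, q0, must_p, may_p, must_q, may_q):
    """Transitions of the new state p0 ∨ q0 (rules (Must), (May1), (May2)).

    must_*: dict (state, action) -> list of target sets (disjunctive musts);
    may_*:  dict (state, action) -> set of target states.
    """
    root = ('or', p0, q0)
    must, may = {}, {}
    # (Must): each pair of matching a-musts combines into one disjunctive must
    for (p, a), p_target_sets in must_p.items():
        if p != p0:
            continue
        for P_prime in p_target_sets:
            for Q_prime in must_q.get((q0, a), []):
                must.setdefault((root, a), []).append(
                    frozenset(P_prime) | frozenset(Q_prime))
    # (May1): the root may do whatever the left disjunct may do ...
    for (s, a), targets in may_p.items():
        if s == p0:
            may.setdefault((root, a), set()).update(targets)
    # (May2): ... and likewise for the right disjunct
    for (s, a), targets in may_q.items():
        if s == q0:
            may.setdefault((root, a), set()).update(targets)
    return root, must, may
```

The sketch makes visible why a must-transition of the root only arises when *both* disjuncts have one: an implementation may satisfy either disjunct, so only requirements common to both survive.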
\[cor:dmtsoropcomp\]

Embedding of IA into dMTS {#subsec:iaembeddingdmts}
-------------------------

We can now adapt the embedding of IA into MTS from [@LarNymWas2007] to our setting: Let $P$ be an IA with $A = I \cup O$. Then, the embedding ${[{P}]_{\text{dMTS}}}$ of $P$ into (d)MTS is defined as the (d)MTS $(P\cup\{{u_{P}}\}, A, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where ${u_{P}} \notin P$ and:

- $p {\stackrel{\alpha}{\dashrightarrow}} p'$ if $p {\stackrel{\alpha}{\longrightarrow}}_P p'$ and $\alpha \in A \cup \{\tau\}$;
- $p {\stackrel{a}{\longrightarrow}} p'$ if $p {\stackrel{a}{\longrightarrow}}_P p'$ and $a \in I$;
- $p {\stackrel{a}{\dashrightarrow}} {u_{P}}$ if $p \,\not\!{\stackrel{a}{\longrightarrow}}_P$ and $a \in I$;
- ${u_{P}} {\stackrel{a}{\dashrightarrow}} {u_{P}}$ if $a \in A$.

\[def:iaembeddingdmts\] For the remainder of this section, we simply write ${[{p}]_{\text{}}}$ for $p \in {[{P}]_{\text{dMTS}}}$. Observe that ${[{P}]_{\text{dMTS}}}$ does not have truly disjunctive transitions; hence, it is an MTS. In [@LarNymWas2007], it is shown that this embedding respects refinement, i.e., $p {\sqsubseteq_{\textrm{IA}}}q$ if and only if ${[{p}]_{\text{}}} {\sqsubseteq_{\textrm{dMTS}}}{[{q}]_{\text{}}}$. Since conjunction (disjunction) on IA and dMTS is the greatest lower bound (least upper bound) wrt. ${\sqsubseteq_{\textrm{IA}}}$ and ${\sqsubseteq_{\textrm{dMTS}}}$ (up to equivalence), resp., we have by general order theory: For all IAs $P$ and $Q$ with $p \in P$ and $q \in Q$: 1. ${[{p {\wedge}q}]_{\text{}}} \,{\sqsubseteq_{\textrm{dMTS}}}\, {[{p}]_{\text{}}} {\wedge}{[{q}]_{\text{}}}$; 2.
${[{p {\vee}q}]_{\text{}}} \,{\sqsupseteq_{\textrm{dMTS}}}\, {[{p}]_{\text{}}} {\vee}{[{q}]_{\text{}}}$. \[prop:iaembeddingdmts\] ![Example refuting the reverse refinement in Prop. \[prop:iaembeddingdmts\](a). All non-labelled transitions depict $i$-may-transitions.[]{data-label="fig:conjiaembeddingdmts"}](conjiaembeddingdmts.png) ![Example refuting the reverse refinement in Prop. \[prop:iaembeddingdmts\](b) ($a \in A = \{i,j,k\}$).[]{data-label="fig:disjiaembeddingdmts"}](disjiaembeddingdmts.png) The reverse refinements do not hold due to the additional dMTSs that are not embeddings of IA. To see this for conjunction, consider the example in Fig. \[fig:conjiaembeddingdmts\], where $P$ and $Q$ are IAs. State $r$ in dMTS $R$ is a common implementation of state ${[{p}]_{\text{}}}$ and state ${[{q}]_{\text{}}}$, i.e., their conjunction is sufficiently large to cover $r$. However, $r$ does not refine ${[{p {\wedge}q}]_{\text{}}}$ since the initial $i$-must-transition of the latter cannot be matched by the former. Hence, ${[{p {\wedge}q}]_{\text{}}}$ and ${[{p}]_{\text{}}} {\wedge}{[{q}]_{\text{}}}$ cannot be equivalent. To see this for disjunction, consider $r$ and $s$ in Fig. \[fig:iaoropex\] on the right. Fig. \[fig:disjiaembeddingdmts\] shows all relevant dMTSs, and ${[{r {\vee}s}]_{\text{}}}$ does not refine ${[{r}]_{\text{}}} {\vee}{[{s}]_{\text{}}}$ since it does not have a must-transition after $i$. Modal Interface Automata {#sec:mia} ======================== An essential point of Larsen, Nyman and Wasowski’s paper [@LarNymWas2007] is to enrich IA with modalities to get a flexible specification framework where inputs and outputs can be prescribed, allowed or prohibited. To do so, they consider IOMTS, i.e., MTS where visible actions are partitioned into inputs and outputs, and define parallel composition in IA-style. ![Example demonstrating the compositionality flaw of IOMTS.[]{data-label="fig:iomtsflaw"}](iomtsflaw.png) Our example of Fig. 
\[fig:iomtsflaw\] shows that their approach has a serious flaw, namely observational modal refinement is not a precongruence for the parallel composition of [@LarNymWas2007]. In this example, the IOMTS $P$ has input alphabet $\{a\}$ and empty output alphabet, while $Q$ and $Q'$ have input alphabet $\{i\}$ and output alphabet $\{a\}$. Obviously, $q' {\sqsubseteq_{\textrm{dMTS}}}q$. When composing $P$ and $Q$ in parallel, $p|q$ would reach an error state after an $i$-must-transition in [@LarNymWas2007] since the potential output $a$ of $Q$ is not expected by $P$. In contrast, $p|q'$ has an $i$-must- and $i$-may-transition not allowed by $P|Q$, so that $p|q' \not{\sqsubseteq_{\textrm{dMTS}}}p|q$. This counterexample also holds for (strong) modal refinement as defined in [@LarNymWas2007] and is particularly severe since all systems are deterministic and all must-transitions concern inputs only. The problem is that $p|q$ forbids input $i$. In [@LarNymWas2007], precongruence of parallel composition is not mentioned. Instead, a theorem relates the parallel composition of two IOMTSs to a different composition on two refining implementations, where an implementation in [@LarNymWas2007] is an IOMTS in which may- and must-transitions coincide. This theorem is incorrect as is pointed out in [@RacBadBenCaiLegPas2011] and repaired in the deterministic setting of that paper; the repair is again not a precongruence result, but still compares the results of two different composition operators. However, a natural solution to the precongruence problem can be adopted from the IA-framework [@DeAHen2005] where inputs are always allowed implicitly. Consequently, if an input transition is specified, it will always be a must. 
In the remainder, we thus define and study a new specification framework, called *Modal Interface Automata* (MIA), that takes the dMTS-setting for an alphabet consisting of input and output actions, requires input-determinism, and demands that every input-may-transition is also an input-must-transition. The advantage over IA is that outputs can be prescribed via output-must-transitions, which precludes trivial implementations like *BlackHole* discussed in Sec. \[sec:ia\]. A *Modal Interface Automaton* (MIA) is a tuple $Q = (Q, I, O, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where $(Q, I \cup O, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$ is a dMTS with disjoint alphabets $I$ for inputs and $O$ for outputs and where for all $i \in I$: (a) $q {\stackrel{i}{\longrightarrow}} Q'$ and $q {\stackrel{i}{\longrightarrow}} Q''$ implies $Q' = Q''$; (b) $q {\stackrel{i}{\dashrightarrow}} q'$ implies $\exists Q'.\, q {\stackrel{i}{\longrightarrow}} Q'$ and $q' \in Q'$. \[def:mia\] In the conference version of this article, we have considered truly disjunctive must-transitions only for outputs, so as to satisfy input determinism; this suffices for developing MIA-conjunction. However, for disjunction we have seen that such transitions are also needed for inputs. The above definition of MIA therefore permits one disjunctive must-transition for each input. This allows some choice on performing an input but, surprisingly, it is input-deterministic enough to support compositionality for parallel composition (cf. Thm. \[thm:miaparopcomp\]). Let $P, Q$ be MIAs with common input and output alphabets. Relation ${\mathcal{R}} \subseteq P \times Q$ is an *(observational) MIA-refinement relation* if for all ${({p},{q})} \in {\mathcal{R}}$: 1. $q {\stackrel{a}{\longrightarrow}} Q'$ implies $\exists P'.\, p {\stackrel{a}{\longrightarrow}} P'$ and $\forall p' {\in} P'\,\exists q' {\in} Q'.\; {({p'},{q'})} \in {\mathcal{R}}$, 2. 
$p {\stackrel{\alpha}{\dashrightarrow}} p'$ with $\alpha \in O \cup {\{{\tau}\}}$ implies $\exists q'.\, q \stackrel{\hat{\alpha}}{\Longrightarrow} q'$ and ${({p'},{q'})} \in {\mathcal{R}}$.

We write $p {\sqsubseteq_{\textrm{MIA}}}q$ and say that $p$ *MIA-refines* $q$ if there exists an observational MIA-refinement relation ${\mathcal{R}}$ such that ${({p},{q})} \in {\mathcal{R}}$. Moreover, we also write $p {=_{\textrm{MIA}}}q$ in case $p {\sqsubseteq_{\textrm{MIA}}}q$ and $q {\sqsubseteq_{\textrm{MIA}}}p$ (which is an equivalence weaker than ‘bisimulation’). \[def:miasim\]

One can easily check that ${\sqsubseteq_{\textrm{MIA}}}$ is a preorder and the largest observational MIA-refinement relation. Its definition coincides with dMTS-refinement except that Cond. (ii) is restricted to outputs and the silent action $\tau$. Thus, inputs are always allowed implicitly and, in effect, treated just like in IA-refinement. Due to the output-must-transitions in the MIA-setting, MIA-refinement can model, e.g., STG-bisimilarity [@VogWol2002] for systems without internal actions; this is a kind of alternating simulation refinement used for digital circuits.

Conjunction on MIA {#subsec:miaconj}
------------------

Similar to conjunction on dMTS, we define conjunction on MIA by first constructing a conjunctive product and then eliminating all inconsistent states. Let $P = (P, I, O, {\stackrel{}{\longrightarrow}}_P,$ ${\stackrel{}{\dashrightarrow}}_P)$ and $Q = (Q, I, O, {\stackrel{}{\longrightarrow}}_Q, {\stackrel{}{\dashrightarrow}}_Q)$ be MIAs with common input and output alphabets and disjoint state sets $P$ and $Q$.
The conjunctive product $P {\&}Q {=_{\text{df}}}((P \times Q) \cup P \cup Q, I, O, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$ inherits the transitions of $P$ and $Q$ and has additional transitions as follows, where $i \in I$, $o \in O$ and $\alpha \in O \cup \{\tau\}$:

(OMust1) ${({p},{q})} {\stackrel{o}{\longrightarrow}} \{ {({p'},{q'})} \;|\; p' \in P',\, q \stackrel{o}{\Longrightarrow}_Q q' \}$ if $p {\stackrel{o}{\longrightarrow}}_P P'$ and $q \stackrel{o}{\Longrightarrow}_Q$
(OMust2) ${({p},{q})} {\stackrel{o}{\longrightarrow}} \{ {({p'},{q'})} \;|\; p \stackrel{o}{\Longrightarrow}_P p',\, q' \in Q' \}$ if $p \stackrel{o}{\Longrightarrow}_P$ and $q {\stackrel{o}{\longrightarrow}}_Q Q'$
(IMust1) ${({p},{q})} {\stackrel{i}{\longrightarrow}} P'$ if $p {\stackrel{i}{\longrightarrow}}_P P'$ and $q \,\not\!{\stackrel{i}{\longrightarrow}}_Q$
(IMust2) ${({p},{q})} {\stackrel{i}{\longrightarrow}} Q'$ if $p \,\not\!{\stackrel{i}{\longrightarrow}}_P$ and $q {\stackrel{i}{\longrightarrow}}_Q Q'$
(IMust3) ${({p},{q})} {\stackrel{i}{\longrightarrow}} P' \times Q'$ if $p {\stackrel{i}{\longrightarrow}}_P P'$ and $q {\stackrel{i}{\longrightarrow}}_Q Q'$
(May1) ${({p},{q})} {\stackrel{\tau}{\dashrightarrow}} {({p'},{q})}$ if $p \stackrel{\tau}{\Longrightarrow}_P p'$
(May2) ${({p},{q})} {\stackrel{\tau}{\dashrightarrow}} {({p},{q'})}$ if $q \stackrel{\tau}{\Longrightarrow}_Q q'$
(May3) ${({p},{q})} {\stackrel{\alpha}{\dashrightarrow}} {({p'},{q'})}$ if $p \stackrel{\alpha}{\Longrightarrow}_P p'$ and $q \stackrel{\alpha}{\Longrightarrow}_Q q'$
(IMay1) ${({p},{q})} {\stackrel{i}{\dashrightarrow}} p'$ if $p {\stackrel{i}{\dashrightarrow}}_P p'$ and $q \,\not\!{\stackrel{i}{\dashrightarrow}}_Q$
(IMay2) ${({p},{q})} {\stackrel{i}{\dashrightarrow}} q'$ if $p \,\not\!{\stackrel{i}{\dashrightarrow}}_P$ and $q {\stackrel{i}{\dashrightarrow}}_Q q'$
(IMay3) ${({p},{q})} {\stackrel{i}{\dashrightarrow}} {({p'},{q'})}$ if $p {\stackrel{i}{\dashrightarrow}}_P p'$ and $q {\stackrel{i}{\dashrightarrow}}_Q q'$
\[def:miaconjprod\] This product is defined analogously to IA-conjunction for inputs (plus the corresponding ‘may’ rules) and to the dMTS-product for outputs and $\tau$. Thus, it combines the effects shown in Fig. \[fig:iaandopex\] (where all outputs are treated as may) and Fig. \[fig:dmtsandopex\] (where all actions are outputs).

Given a conjunctive product $P {\&}Q$, the set ${F}\subseteq P \times Q$ of (logically) *inconsistent states* is defined as the least set satisfying the following rules:

(F1) $p {\stackrel{o}{\longrightarrow}}_P$, $q \not\stackrel{o}{\Longrightarrow}_Q$ and $o \in O$ implies ${({p},{q})} \in {F}$
(F2) $p \not\stackrel{o}{\Longrightarrow}_P$, $q {\stackrel{o}{\longrightarrow}}_Q$ and $o \in O$ implies ${({p},{q})} \in {F}$
(F3) ${({p},{q})} {\stackrel{a}{\longrightarrow}} R'$ and $R' \subseteq {F}$ implies ${({p},{q})} \in {F}$

The conjunction $P {\wedge}Q$ of MIAs $P, Q$ with common input and output alphabets is obtained by deleting all states ${({p},{q})} \in {F}$ from $P {\&}Q$, as for dMTS in Def. \[def:dmtsandop\]. We write $p {\wedge}q$ for state ${({p},{q})}$ of $P {\wedge}Q$; all such states are defined – and consistent – by construction.
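Well-definedness of this conjunction includes checking that the result again satisfies conditions (a) and (b) of the MIA definition. A Python sketch of that check, under an illustrative dictionary encoding of our own (syntactic consistency, i.e., that every must is underlaid by mays, is assumed to be checked separately):

```python
def is_mia(states, inputs, must, may):
    """Check MIA conditions (a) and (b) on top of an underlying dMTS.

    must: dict (state, action) -> list of target sets (disjunctive musts);
    may:  dict (state, action) -> set of target states.
    """
    for q in states:
        for i in inputs:
            target_sets = [frozenset(ts) for ts in must.get((q, i), [])]
            # (a): at most one (distinct) i-must-transition per state
            if len(set(target_sets)) > 1:
                return False
            # (b): every i-may-transition must sit inside the i-must's targets
            allowed = set().union(*target_sets) if target_sets else set()
            if not may.get((q, i), set()) <= allowed:
                return False
    return True
```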
\[def:miaandop\] The conjunction $P {\wedge}Q$ is a MIA and is thus well-defined. This can be seen by a similar argument as we have used above in the context of dMTS-conjunction, while input-determinism can be established by an argument similar to that in the IA-setting. Note that, in contrast to the dMTS-situation, Rules (F1) and (F2) only apply to outputs. Fig. \[fig:dmtsandopex\] is also an example of conjunction in the MIA-setting if all actions are read as outputs.

To reason about inconsistency, we again use a notion of witness. It may be defined analogously to the witness notion for dMTS, but replacing $a \in A$ in Def. \[def:dmtswitness\](W1) and (W2) by $o \in O$. We then obtain the analogue of Lemma \[lem:dmtswitness\], which is needed in the proof of the analogue of Thm. \[thm:dmtsandisand\]:

A *MIA-witness* $W$ of $P {\&}Q$ is a subset of $(P \times Q) \cup P \cup Q$ such that the following conditions hold for all ${({p},{q})} \in W$:

(W1) $p {\stackrel{o}{\longrightarrow}}_P$ with $o \in O$ implies $q \stackrel{o}{\Longrightarrow}_Q$
(W2) $q {\stackrel{o}{\longrightarrow}}_Q$ with $o \in O$ implies $p \stackrel{o}{\Longrightarrow}_P$
(W3) ${({p},{q})} {\stackrel{a}{\longrightarrow}} R'$ implies $R' \cap W \not= \emptyset$

\[def:miawitness\] Let $P {\&}Q$ be a conjunctive
product of MIAs. Then, for any MIA-witness $W$ of $P {\&}Q$, we have (i) ${F}\cap W = \emptyset$. Moreover, (ii) the set $W {=_{\text{df}}}\{ {({p},{q})} \in P \times Q \;|\; \exists\,\text{MIA}\,R$ and $r \in R.\, r {\sqsubseteq_{\textrm{MIA}}}p \text{ and } r {\sqsubseteq_{\textrm{MIA}}}q \} \cup P \cup Q$ is a MIA-witness of $P {\&}Q$. \[lem:miawitness\]

We can now state and prove the desired greatest-lower-bound theorem, from which compositionality of ${\sqsubseteq_{\textrm{MIA}}}$ wrt. ${\wedge}$ follows in analogy to the IA- and dMTS-settings: Let $P, Q$ be MIAs. We have *(i)* $(\exists\,\text{MIA}\,R$ and $r \in R.\, r {\sqsubseteq_{\textrm{MIA}}}p$ and $r {\sqsubseteq_{\textrm{MIA}}}q)$ if and only if $p {\wedge}q$ is defined. Further, in case $p {\wedge}q$ is defined and for any MIA $R$ and $r \in R$: *(ii)* $r {\sqsubseteq_{\textrm{MIA}}}p \text{ and } r {\sqsubseteq_{\textrm{MIA}}}q \text{ if and only if } r {\sqsubseteq_{\textrm{MIA}}}p {\wedge}q$. \[thm:miaandisand\]

In analogy to Corollary \[cor:dmtsandopcomp\], we obtain: MIA-refinement is compositional wrt. conjunction. \[cor:miaandopcomp\]

Disjunction on MIA {#subsec:miadisj}
------------------

The disjunction of two MIAs $P$ and $Q$ can be defined in the same way as for dMTS, except for the special treatment of inputs in the may-rules, which guarantees that $P {\vee}Q$ is a MIA and, especially, that Def. \[def:mia\](b) is satisfied: Let $P = (P, I, O, {\stackrel{}{\longrightarrow}}_P,$ ${\stackrel{}{\dashrightarrow}}_P)$ and $Q = (Q, I, O, {\stackrel{}{\longrightarrow}}_Q,$ ${\stackrel{}{\dashrightarrow}}_Q)$ be MIAs with common input and output alphabets and disjoint state sets $P$ and $Q$.
The disjunction $P {\vee}Q$ is defined by $(\{ p {\vee}q \;|\; p \in P,\, q \in Q \} \cup P \cup Q, I, O, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where ${\stackrel{}{\longrightarrow}}$ and ${\stackrel{}{\dashrightarrow}}$ are the least sets satisfying ${\stackrel{}{\longrightarrow}}_P \subseteq {\stackrel{}{\longrightarrow}}$, ${\stackrel{}{\dashrightarrow}}_P \subseteq {\stackrel{}{\dashrightarrow}}$, ${\stackrel{}{\longrightarrow}}_Q \subseteq {\stackrel{}{\longrightarrow}}$, ${\stackrel{}{\dashrightarrow}}_Q \subseteq {\stackrel{}{\dashrightarrow}}$ and the following operational rules:

(Must) $p {\vee}q {\stackrel{a}{\longrightarrow}} P' \cup Q'$ if $p {\stackrel{a}{\longrightarrow}}_P P'$ and $q {\stackrel{a}{\longrightarrow}}_Q Q'$
(May1) $p {\vee}q {\stackrel{\alpha}{\dashrightarrow}} p'$ if $p {\stackrel{\alpha}{\dashrightarrow}}_P p'$ and, in case $\alpha \in I$, also $q \!{\stackrel{\alpha}{\dashrightarrow}}_Q$
(May2) $p {\vee}q {\stackrel{\alpha}{\dashrightarrow}} q'$ if $q {\stackrel{\alpha}{\dashrightarrow}}_Q q'$ and, in case $\alpha \in I$, also $p \!{\stackrel{\alpha}{\dashrightarrow}}_P$

\[def:miaorop\] It is easy to see that this definition is well-defined, i.e., the resulting disjunctions are indeed MIAs, and we additionally have: Let $P$, $Q$ and $R$ be MIAs with states $p$, $q$ and $r$, resp. Then, $p {\vee}q {\sqsubseteq_{\textrm{MIA}}}r$ if and only if $p {\sqsubseteq_{\textrm{MIA}}}r$ and $q {\sqsubseteq_{\textrm{MIA}}}r$. \[thm:miaorisor\]

The theorem’s proof is as for dMTS (cf. Thm.
\[thm:dmtsorisor\]) but, in the (ii)-cases, only $\alpha \in O \cup \{\tau\}$ has to be considered. Analogously to dMTS, we obtain the following corollary to Thm. \[thm:miaorisor\]: MIA-refinement is compositional wrt. disjunction. \[cor:miaoropcomp\]

![MIA-disjunction is more intuitive than IA-disjunction.[]{data-label="fig:miadisjintuitive"}](miadisjintuitive.png)

To conclude this section, we argue that MIA-disjunction is more intuitive than IA-disjunction. The example in Fig. \[fig:miadisjintuitive\] shows MIAs $P$, $Q$, $P {\vee}Q$ as well as a MIA $R$, where state $r$ corresponds to the IA-disjunction of states $p$ and $q$ when we understand $P$ and $Q$ as IAs. As expected (cf. p. ), $p {\vee}q$ is a refinement of $r$, but not vice versa. MIA-disjunction can be considered more intuitive since the first transition in the disjunction decides which disjunct has to be satisfied afterward, in contrast to IA-disjunction.

![MIA-disjunction is an inclusive-or.[]{data-label="fig:miainclusiveor"}](miainclusiveor.png)

Moreover, Fig. \[fig:miainclusiveor\] shows that MIA-disjunction is an inclusive-or: an implementation of $p {\vee}q$ can have an $o1$-transition followed by $i$ and another $o1$-transition followed by $j$; interestingly, $r {\sqsubseteq_{\textrm{MIA}}}p {\vee}q$ satisfies ‘half’ of $p$ and ‘half’ of $q$. In general, for each action $a$ separately, a refinement of some disjunction has to satisfy at least all initial $a$-must-transitions of one of its disjuncts.

Parallel Composition on MIA {#subsec:miaparop}
---------------------------

In analogy to the IA-setting [@DeAHen2005], we provide a parallel operator on MIA. Here, error states are identified, and all states are removed from which reaching an error state is unavoidable in some implementation, as is done for IOMTS in [@LarNymWas2007]. MIAs $P_1$ and $P_2$ are *composable* if $A_1 \cap A_2 = (I_1 \cap O_2) \cup (O_1 \cap I_2)$, as in IA.
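The composability condition and the alphabets of the product defined next are simple set computations; a short illustrative Python sketch (names are ours). Note that synchronized actions disappear from both resulting alphabets, since they give rise to $\tau$ via Rule (May3):

```python
def composable(i1, o1, i2, o2):
    """Shared actions must be exactly the input/output matches (I1∩O2) ∪ (O1∩I2)."""
    return (i1 | o1) & (i2 | o2) == (i1 & o2) | (o1 & i2)


def parallel_alphabets(i1, o1, i2, o2):
    """Inputs and outputs of P1 ⊗ P2: synchronized actions drop out of both."""
    new_inputs = (i1 | i2) - (o1 | o2)
    new_outputs = (o1 | o2) - (i1 | i2)
    return new_inputs, new_outputs
```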
For such MIAs we define the *product* $P_1 {\otimes}P_2 = (P_1 \times P_2, I, O, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where $I = (I_1 \cup I_2) \setminus (O_1 \cup O_2)$ and $O = (O_1 \cup O_2) \setminus (I_1 \cup I_2)$ and where ${\stackrel{}{\longrightarrow}}$ and ${\stackrel{}{\dashrightarrow}}$ are defined as follows:

[(Must1)]{} ${({p_1},{p_2})} {\stackrel{a}{\longrightarrow}} P'_1 \times {\{{p_2}\}}$ if $p_1 {\stackrel{a}{\longrightarrow}} P'_1$ and $a \notin A_2$

[(Must2)]{} ${({p_1},{p_2})} {\stackrel{a}{\longrightarrow}} {\{{p_1}\}} \times P'_2$ if $p_2 {\stackrel{a}{\longrightarrow}} P'_2$ and $a \notin A_1$

[(May1)]{} ${({p_1},{p_2})} {\stackrel{\alpha}{\dashrightarrow}} {({p'_1},{p_2})}$ if $p_1 {\stackrel{\alpha}{\dashrightarrow}} p'_1$ and $\alpha \notin A_2$

[(May2)]{} ${({p_1},{p_2})} {\stackrel{\alpha}{\dashrightarrow}} {({p_1},{p'_2})}$ if $p_2 {\stackrel{\alpha}{\dashrightarrow}} p'_2$ and $\alpha \notin A_1$

[(May3)]{} ${({p_1},{p_2})} {\stackrel{\tau}{\dashrightarrow}} {({p'_1},{p'_2})}$ if $p_1 {\stackrel{a}{\dashrightarrow}} p'_1$ and $p_2 {\stackrel{a}{\dashrightarrow}} p'_2$ for some $a$.

\[def:miaparprod\] Recall that there are no $\tau$-must-transitions since they are irrelevant for refinement. Given a parallel product $P_1 {\otimes}P_2$, a state ${({p_1},{p_2})}$ is an *error state* if there is some $a \in A_1 \cap A_2$ such that (a) $a \in O_1$, $p_1 \!{\stackrel{a}{\dashrightarrow}}$ and $p_2 \,\not\!{\stackrel{a}{\longrightarrow}}$, or (b) $a \in O_2$, $p_2 \!{\stackrel{a}{\dashrightarrow}}$ and $p_1 \,\not\!{\stackrel{a}{\longrightarrow}}$.
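To make the product rules concrete, the following sketch computes the outgoing transitions of a single product state. This is an illustrative prototype only, not part of the formal development: it assumes a hypothetical encoding of a MIA as a dictionary with alphabets `'I'` and `'O'`, disjunctive must-transitions as `(state, action, frozenset_of_targets)` triples, may-transitions as `(state, action, target)` triples, and the internal action written `'tau'`, lying outside all alphabets.

```python
# Hypothetical MIA encoding (illustration only): alphabets 'I'/'O',
# must-transitions as (state, action, frozenset_of_targets),
# may-transitions as (state, action, target); 'tau' is outside all alphabets.

def product_transitions(p1, p2, P1, P2):
    """Outgoing transitions of the product state (p1, p2) per (Must1/2), (May1/2/3)."""
    A1, A2 = P1['I'] | P1['O'], P2['I'] | P2['O']
    s = (p1, p2)
    must, may = set(), set()
    for (q, a, T) in P1['must']:
        if q == p1 and a not in A2:          # (Must1): P1 moves alone
            must.add((s, a, frozenset((t, p2) for t in T)))
    for (q, a, T) in P2['must']:
        if q == p2 and a not in A1:          # (Must2): P2 moves alone
            must.add((s, a, frozenset((p1, t) for t in T)))
    for (q, a, t) in P1['may']:
        if q == p1 and a not in A2:          # (May1): P1 moves alone
            may.add((s, a, (t, p2)))
    for (q, a, t) in P2['may']:
        if q == p2 and a not in A1:          # (May2): P2 moves alone
            may.add((s, a, (p1, t)))
    for (q1, a, t1) in P1['may']:            # (May3): synchronisation yields tau
        if q1 == p1 and a in A1 & A2:
            for (q2, b, t2) in P2['may']:
                if q2 == p2 and b == a:
                    may.add((s, 'tau', (t1, t2)))
    return must, may
```

For instance, if $P_1$ may output a shared action $a$ that $P_2$ may receive, the product state only gets a $\tau$-may-transition; an unmatched must-transition of $P_2$ on $a$ does not survive into the product, since $a \in A_1$.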
Again we define the set $E \subseteq P_1 \times P_2$ of *incompatible* states as the least set such that ${({p_1},{p_2})} \in E$ if (i) ${({p_1},{p_2})}$ is an error state or (ii) ${({p_1},{p_2})} {\stackrel{\alpha}{\dashrightarrow}} {({p'_1},{p'_2})}$ for some $\alpha \in O \cup \{\tau\}$ and ${({p'_1},{p'_2})} \in E$. The *parallel composition* $P_1 {|}P_2$ of $P_1$ and $P_2$ is now obtained from $P_1 {\otimes}P_2$ by *pruning*, namely removing all states in $E$ and every transition that involves such states as its source, its target or one of its targets; all may-transitions underlying a removed must-transition are deleted, too. If ${({p_1},{p_2})} \in P_1 {|}P_2$, we write $p_1 {|}p_2$ and call $p_1$ and $p_2$ *compatible*. \[def:miaparop\] Parallel products and parallel compositions are well-defined MIAs. Syntactic consistency is preserved, as is input-determinism since input-transitions are directly inherited from one of the *composable* systems. In particular, Cond. (b) in Def. \[def:mia\] holds due to the additional clause regarding the deletion of may-transitions. In addition, targets of disjunctive must-transitions are never empty since all must-transitions that remain after pruning are taken from the product without modification. As an example of why pruning is needed, consider Fig. \[fig:iaparopex\] again and read the $\tau$-transitions as may-transitions and all other transitions as must-transitions. Further observe that pruning is different from removing inconsistent states in conjunction. For truly disjunctive transitions ${({p_1},{p_2})} {\stackrel{a}{\longrightarrow}} P'$ of the product $P_1 {\otimes}P_2$, the state ${({p_1},{p_2})}$ is already removed if $P' \cap E \not= \emptyset$, i.e., there exists some ${({p'_1},{p'_2})} \in P' \cap E$, and not only if $P' \subseteq E$.
This is clear for $a \in O$ since ${({p_1},{p_2})} {\stackrel{a}{\dashrightarrow}} {({p'_1},{p'_2})}$ by syntactic consistency and, therefore, ${({p_1},{p_2})}$ is deleted itself by Cond. (ii) above. Note that Cond. (ii) corresponds directly to the IA-case since output-transitions there correspond to may-transitions here (see Sec. \[subsec:iaembeddingdmts\]). For $a \in I$, reaching the error state can only be prevented if the environment does not provide $a$; intuitively, this is because $P'$ has w.l.o.g. the form $P'_1 \times \{p_2\}$ in the product of $P_1$ and $P_2$ (i.e., $p'_2 = p_2$). The implementor of $P_1$ might choose to implement $p_1 {\stackrel{a}{\longrightarrow}} p'_1$ such that, when $P_1$’s implementation is composed with $P_2$’s, the error state is reached. To express the requirement on the environment not to exhibit $a$, must-transition ${({p_1},{p_2})} {\stackrel{a}{\longrightarrow}} P'$ and all underlying may-transitions have to be deleted. Let $P_1$, $P_2$ and $Q_1$ be MIAs with $p_1 \in P_1$, $p_2 \in P_2$, $q_1 \in Q_1$ and $p_1 {\sqsubseteq_{\textrm{MIA}}}q_1$. Assume that $Q_1$ and $P_2$ are composable; then: 1. $P_1$ and $P_2$ are composable. 2. If $q_1$ and $p_2$ are compatible, then so are $p_1$, $p_2$ and $p_1 {|}p_2 {\sqsubseteq_{\textrm{MIA}}}q_1 {|}p_2$. \[thm:miaparopcomp\] ![Example illustrating the need for input-determinism for MIA.[]{data-label="fig:miainputdet"}](miainputdet.png) This precongruence property of MIA-refinement would not hold if we did away with input-determinism in MIA. To see this, consider the example of Fig. \[fig:miainputdet\] for which $p {\sqsubseteq_{\textrm{MIA}}}q$; however, $p {|}r {\sqsubseteq_{\textrm{MIA}}}q {|}r$ does not hold since $q$ and $r$ are compatible while $p$ and $r$ are not. An analogous reasoning applies to IA, although we do not know of a reference in the IA literature where this has been observed.
Embedding of IA into MIA {#subsec:embedding} ------------------------ To conclude, we provide an embedding of IA into MIA in the line of [@LarNymWas2007]: Let $P$ be an IA. The embedding ${[{P}]_{\text{MIA}}}$ of $P$ into MIA is defined as the MIA $(P, I, O, {\stackrel{}{\longrightarrow}}, {\stackrel{}{\dashrightarrow}})$, where (i) $p {\stackrel{i}{\longrightarrow}} p'$ if $p {\stackrel{i}{\longrightarrow}}_P p'$ and $i \in I$, and (ii) $p {\stackrel{\alpha}{\dashrightarrow}} p'$ if $p {\stackrel{\alpha}{\longrightarrow}}_P p'$ and $\alpha \in I \cup O \cup \{\tau\}$. \[def:iaembeddingmia\] In the remainder of this section we simply write ${[{p}]_{\text{}}}$ for $p \in {[{P}]_{\text{MIA}}}$. This embedding is much simpler than the one of [@LarNymWas2007] since MIA more closely resembles IA than IOMTS does. In particular, the following theorem is obvious: For IAs $P, Q$ with $p \in P$, $q \in Q$: $\,p {\sqsubseteq_{\textrm{IA}}}q$ if and only if ${[{p}]_{\text{}}} {\sqsubseteq_{\textrm{MIA}}}{[{q}]_{\text{}}}$. \[thm:iaembeddingmia\] Our embedding respects operators ${\wedge}$ and ${|}$, unlike the one in [@LarNymWas2007]: For IAs $P, Q$ with $p \in P$, $q \in Q$: 1. ${[{p}]_{\text{}}} {\wedge}{[{q}]_{\text{}}}$ ${=_{\textrm{MIA}}}$ ${[{p {\wedge}q}]_{\text{}}}$; 2. ${[{p}]_{\text{}}} \,{|}\, {[{q}]_{\text{}}}$ ${=_{\textrm{MIA}}}$ ${[{p {|}q}]_{\text{}}}$. \[thm:miaembedding\] We observe that the IA-embedding into MIA is ‘better’ wrt. conjunction than that into dMTS since refinement holds in both directions. The reason is that MIA-refinement is coarser (i.e., larger) than dMTS-refinement applied to MIAs (which are dMTSs after all): input may-transitions do not have to be matched in the former. Thus, there can be more lower bounds wrt. MIA-refinement and the greatest lower bound can be larger. For IAs $P, Q$ with $p \in P$, $q \in Q$, we have: ${[{p}]_{\text{}}} {\vee}{[{q}]_{\text{}}} \,{\sqsubseteq_{\textrm{MIA}}}\, {[{p {\vee}q}]_{\text{}}}$. 
\[prop:thm:miaembeddingdisj\] This result holds by general order theory due to Thm. \[thm:iaembeddingmia\]. The reverse refinement for disjunction is not valid as we have already seen in Fig. \[fig:miadisjintuitive\], and this difference repairs a shortcoming of IA-disjunction as discussed on p. . Conclusions and Future Work {#sec:conclusions} =========================== We introduced *Modal Interface Automata* (MIA), an interface theory that is more expressive than *Interface Automata* (IA) [@DeAHen2005]: it allows one to mandate that a specification’s refinement must implement some output, thus excluding trivial implementations, e.g., one that accepts all inputs but never emits any output. This was also the motivation behind *IOMTS* [@LarNymWas2007] that extends *Modal Transition Systems* (MTS) [@Lar89] by inputs and outputs; however, the IOMTS-parallel operator in the style of IA is not compositional. Apart from having disjunctive must-transitions, MIA is a subset of IOMTS, but it has a different refinement relation that is a precongruence for parallel composition. Most importantly and in contrast to IA and IOMTS, the MIA theory is equipped with a conjunction operator for reasoning about components that satisfy multiple interfaces simultaneously. Along the way, we also introduced conjunction on IA and a disjunctive extension of MTS – as well as disjunction on IA, MTS and MIA – and proved these operators to be the desired greatest lower bounds (resp., least upper bounds) and thus compositional. Compared to the language-based modal interface theory of [@RacBadBenCaiLegPas2011], our formalism supports nondeterministic specifications and allows limited nondeterminism (in the sense of deterministic *disjunctive* transitions) even for inputs. Hence, MIA establishes a theoretically clean and practical interface theory that fixes the shortcomings of related work. 
![In Logic LTS [@LueVog2010], disjunction is internal choice.[]{data-label="fig:intchoice"}](intchoice.png) From a technical perspective, our MIA-theory borrows from our earlier work on Logic LTS [@LueVog2010]. There, we started from a very different conjunction operator appropriate for a deadlock-sensitive CSP-like process theory, and then derived a ‘best’ suitable refinement relation. In [@LueVog2010], disjunction is simply internal choice $\sqcap$, as sketched in Fig. \[fig:intchoice\]. For MIA, $p \sqcap q$ is not suited at all since both $p$ and $q$ require that input $i$ is performed immediately. Future work shall follow both theoretical and practical directions. On the theoretical side, we firstly wish to study MIA’s expressiveness in comparison to other theories via thoroughness [@FecFruLueSch2009]. More substantially, however, we intend to enrich MIA with temporal-logic operators, in the spirit of truly mixing operational and temporal-logic styles of specification in the line of our *Logic LTS* in [@LueVog2011]. Important guidance for this will be the work of Feuillade and Pinchinat [@FeuPin2007], who have introduced a temporal logic for modal interfaces that is equally expressive to MTS. In contrast to [@LueVog2011], their setting is not mixed, does not consider nondeterminism, and does not include a refinement relation. Indeed, a unique feature of Logic LTS is that its refinement relation subsumes the standard temporal-logic satisfaction relation. On the practical side, we plan to study the algorithmic complexity implied by MIA-refinement, on the basis of existing literature for MTS. For example, Antonik et al. [@AntHutLarNymWas2010] discuss related decision problems such as the existence of a common implementation; Fischbein and Uchitel [@FisUch2008] generalize the conjunction of [@LarSteWei95] and study its algorithmic aspects; Beneš et al. 
[@BenCerKre2011] show that refinement problems for DMTS are not harder than in the case of MTS and also consider conjunction; Raclet et al. [@RacBadBenCaiLegPas2011] advocate deterministic automata for modal interface theories in order to reduce complexity. In addition, we wish to adapt existing tool support for interface theories to MIA, e.g., the *MIO Workbench* [@BauMaySchHen2010]. Acknowledgement {#acknowledgement .unnumbered} =============== We thank the anonymous reviewers for their constructive comments and for pointing out additional related work. Part of this research was supported by the DFG (German Research Foundation) under grant nos. LU 1748/3-1 and VO 615/12-1 (“Foundations of Heterogeneous Specifications Using State Machines and Temporal Logic”).
--- abstract: 'As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increase in the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data plays a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade (2001-2011) based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.'
author: - 'Charith Perera,  Arkady Zaslavsky,  Peter Christen, and Dimitrios Georgakopoulos,  [^1][^2][^3]' bibliography: - 'IEEEabrv.bib' - 'Bibliography.bib' title: | Context Aware Computing for\ The Internet of Things: A Survey --- Internet of things, context awareness, sensor networks, sensor data, context life cycle, context reasoning, context modelling, ubiquitous, pervasive, mobile, middleware. Introduction {#Introduction} ============ Context awareness, as a core feature of ubiquitous and pervasive computing systems, has existed and been employed since the early 1990s. The focus on context-aware computing evolved from desktop applications, web applications, mobile computing, pervasive/ubiquitous computing to the Internet of Things (IoT) over the last decade. However, context-aware computing became more popular with the introduction of the term ‘*ubiquitous computing*’ by Mark Weiser [@P506] in his ground-breaking paper *The Computer for the 21st Century* in 1991. Then the term ‘*context-aware*’ was first used by Schilit and Theimer [@P173] in 1994. Since then, research into context-awareness has been established as a well-known research area in computer science. Many researchers have proposed definitions and explanations of different aspects of context-aware computing, as we will discuss briefly in Section \[chapter2:CAF\]. The definitions for *‘context*’ and ‘*context-awareness*’ that are widely accepted by the research community today were proposed by Abowd et al. [@P104] in 1999. During the last two decades, researchers and engineers have developed a significant number of prototypes, systems, and solutions using context-aware computing techniques. Even though the focus varied depending on each project, one aspect remained fairly unchanged: that is the number of data sources (e.g. software and hardware sources).
For example, most of the proposed solutions collect data from a limited number of physical (hardware) and virtual (software) sensors. In these situations, collecting and analysing sensor data from all the sources is possible and feasible due to limited numbers. In contrast, IoT envisions an era where billions of sensors are connected to the Internet, which means it is not feasible to process all the data collected by those sensors. Therefore, context-awareness will play a critical role in deciding what data needs to be processed and much more. Due to advances in sensor technology, sensors are getting more powerful, cheaper and smaller in size, which has stimulated large scale deployments. As a result, today we have a large number of sensors already deployed and it is predicted that the numbers will grow rapidly over the next decade [@P029]. Ultimately, these sensors will generate *big data* [@ZMP003]. The data we collect may not have any value unless we analyse, interpret, and understand it. Context-aware computing has played an important role in tackling this challenge in previous paradigms, such as mobile and pervasive computing, which leads us to believe that it would continue to be successful in the IoT paradigm as well. Context-aware computing allows us to store context[^4] information linked to sensor data so the interpretation can be done easily and more meaningfully. In addition, understanding context makes it easier to perform machine to machine communication as it is a core element in the IoT vision. When large numbers of sensors are deployed, and start generating data, the traditional application based approach (i.e. connect sensors directly to applications individually and manually) becomes infeasible. In order to address this inefficiency, a significant number of middleware solutions have been introduced by researchers.
Each middleware solution focuses on different aspects in the IoT, such as device management, interoperability, platform portability, context-awareness, security and privacy, and many more. Even though some solutions address multiple aspects, an ideal middleware solution that addresses all the aspects required by the IoT is yet to be designed. In this survey, we consider identifying the context-aware computing related features and functionalities that are required by an ideal IoT middleware solution as a key task. There have been several surveys conducted in relation to this field. We briefly introduce these surveys in chronological order. Chen and Kotz [@P431] (2000) have surveyed context awareness, focusing on applications, what context they use, and how contextual information is leveraged. In 2004, Strang and Linnhoff-Popien [@P184] compared the most popular context modelling techniques in the field. Middleware solutions for sensor networks are surveyed by Molla and Ahamed [@P417] in 2006. Two separate surveys were conducted by Kjaer [@P035] and Baldauf et al. [@P402] in 2007 on context-aware systems and middleware solutions using different taxonomies. Both surveys compared a limited number of projects, with very little overlap between them. The authors of [@P185] (2009) reviewed popular context representation and reasoning techniques from a pervasive computing perspective. In 2010, Bettini et al. [@P216] also comprehensively surveyed context modelling and reasoning by focusing on techniques rather than projects. In the same year another survey was done by Saeed and Waheed [@P359] focusing on architectures in the context-aware middleware domain. Bandyopadhyay et al. [@P118] have conducted a survey on existing popular Internet of Things middleware solutions in 2011. The most recent survey was done by Bellavista et al. [@P291] (2013), which focuses on context distribution for mobile ubiquitous systems.
![image](./Figures/33-Evolution_of_Internet.pdf) Our survey differs from the previous literature surveys mentioned above in many ways. Most of the surveys evaluated a limited number of projects. In contrast, we selected a large number of projects (50) covering a decade, based on the unique criteria that will be explained at the end of this section. We took a much broader viewpoint compared to some of the previous surveys, as they have focused on specific elements such as modelling, reasoning, etc. Finally and most importantly, our taxonomy formation and organisation is completely different. Rather than building a theoretical taxonomy and then trying to classify existing research projects, prototypes and systems according to it, we use a practical approach. We built our taxonomy based on past research projects by identifying the features, models, techniques, functionalities and approaches they employed at higher levels (e.g. we do not consider implementation/code level differences between different solutions). We consolidated this information and analysed the capabilities of each solution or project. We believe this approach allows us to highlight the areas where researchers have mostly (priorities) and rarely (non-priorities) focused their attention and the reasons behind this. Further, we have also used a non-taxonomical project based evaluation, where we highlight how the different combinations of components are designed, developed and used in each project. This allows us to discuss their applicability from an IoT perspective. Our objectives in revisiting the literature are threefold: 1) to learn how context-aware computing techniques have helped to develop solutions in the past, 2) to consider how we can apply those techniques to solve problems in the future in different paradigms such as the IoT, and 3) to highlight open challenges and to discuss future research directions.
This paper is organised into sections as follows: Section \[chapter2:IoTP\] provides an introduction to the IoT. In this section, we briefly describe the history and evolution of the Internet. Then we explain what the IoT is, followed by a list of application domains and statistics that show the significance of the IoT. We also describe the relationship between sensor networks and the IoT. Comparisons of popular IoT middleware solutions are presented at the end of the section in order to highlight existing research gaps. In Section \[chapter2:CAF\], we present context awareness fundamentals such as context-aware related definitions, context types and categorisation schemes, features and characteristics, and context awareness management design principles. In Section \[chapter2:CDLC\], we conduct our main discussion based on the context life cycle where we identify four stages: acquisition, modelling, reasoning, and distribution. Section \[chapter2:PRE\] briefly discusses the highlights of each project, which we use for the comparison later. Finally, Section \[chapter2:LL\] discusses the lessons learned from the literature and Section \[chapter2:LLFRD\] identifies future research directions and challenges. Concluding remarks are presented in Section \[chapter2:Conclusions\]. For this literature review, we analyse, compare, and classify a subset of both small scale and large scale projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing based on our own taxonomy. We selected the existing solutions to be reviewed based on different criteria. Mainly, we selected projects that were conducted over the last decade (2001-2011). We also considered main focus, techniques used, popularity, comprehensiveness, information availability, and the year of publication, in order to make sure that our review provides a balanced view on context-aware computing research.
The Internet of Things Paradigm {#chapter2:IoTP} =============================== In this section, we briefly introduce the IoT paradigm. Our intention is not to survey the IoT, but to present some fundamental information (e.g. how the Internet evolved, what the IoT is, statistics related to the IoT, underlying technologies, characteristics, and research gaps in the IoT paradigm) that will help with understanding the historic movements and the direction into which technology is moving today. The IoT paradigm has its own concepts and characteristics. It also shares a significant number of concepts with other computer fields. The IoT bundles different technologies (e.g. sensor hardware/firmware, semantic, cloud, data modelling, storing, reasoning, processing, communication technologies) together to build its vision. We apply the existing technologies in different ways based on the characteristics and demands of the IoT. The IoT does not revolutionise our lives or the field of computing. It is another step in the evolution of the Internet we already have. Evolution of Internet {#chapter2:IoTP:Evolution of Internet} --------------------- Before we investigate the IoT in depth, it is worthwhile to look at the evolution of the Internet. In the late 1960s, communication between two computers was made possible through a computer network [@P260]. In the early 1980s the TCP/IP stack was introduced. Then, commercial use of the Internet started in the late 1980s. Later, the World Wide Web (WWW) became available in 1991, which made the Internet more popular and stimulated its rapid growth. The Web of Things (WoT) [@P575], which is based on the WWW, is a part of the IoT. Later, mobile devices connected to the Internet and formed the mobile-Internet [@P018]. With the emergence of social networking, users started to become connected together over the Internet. The next step is the IoT, where objects around us will be able to connect to each other (e.g.
machine to machine) and communicate via the Internet [@P006]. Figure \[Fig:Evolution\_of\_The\_Internet\] illustrates the five phases in the evolution of the Internet. What is the Internet of Things? {#chapter2:IoTP:What is Internet of Things?} ------------------------------- ![Definition of the Internet of Things: The Internet of Things allows people and things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service [@P019].[]{data-label="Fig:Definition_of_IoT"}](./Figures/34-IoT_Definition.pdf) During the past decade, the IoT has gained significant attention in academia as well as industry. The main reasons behind this interest are the capabilities that the IoT [@P007; @P003] will offer. It promises to create a world where all the objects (also called smart objects [@P041]) around us are connected to the Internet and communicate with each other with minimum human intervention [@P026]. The ultimate goal is to create ‘a better world for human beings’, where objects around us know what we like, what we want, and what we need and act accordingly without explicit instructions [@P040]. The term ‘Internet of Things’ was first coined by Kevin Ashton [@P065] in a presentation in 1998. He remarked: *“The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so”*. Then, the MIT Auto-ID centre presented their IoT vision in 2001 [@P361]. Later, IoT was formally introduced by the International Telecommunication Union (ITU) by the *ITU Internet report* in 2005 [@P020]. The IoT encompasses a significant number of technologies that drive its vision. In the document, *Vision and challenges for realising the Internet of Things*, by CERP-IoT [@P029], a comprehensive set of technologies was listed. IoT is a very broad vision and the research into the IoT is still in its infancy. Therefore, there is as yet no standard definition of the IoT.
The following definitions were provided by different researchers. Definition by [@P002]: *“Things have identities and virtual personalities operating in smart spaces using intelligent interfaces to connect and communicate within social, environment, and user contexts.”* Definition by [@P006]: *“The semantic origin of the expression is composed by two words and concepts: Internet and Thing, where Internet can be defined as the world-wide network of interconnected computer networks, based on a standard communication protocol, the Internet suite (TCP/IP), while Thing is an object not precisely identifiable. Therefore, semantically, Internet of Things means a world-wide network of interconnected objects uniquely addressable, based on standard communication protocols.”* Definition by [@P019]: *“The Internet of Things allows people and things[^5] to be connected Anytime, Anyplace, with Anything and Anyone, ideally using Any path/network and Any service.”* We accept the last definition provided by [@P019] for our research work, because we believe this definition encapsulates the broader vision of IoT. Figure \[Fig:Definition\_of\_IoT\] illustrates the definition more clearly. The broadness of IoT can be identified by evaluating the application domains presented in Section \[chapter2:IoTP:IoT Application Domains\]. IoT Application Domains {#chapter2:IoTP:IoT Application Domains} ----------------------- The IoT, interconnection and communication between everyday objects, enables many applications in many domains. The application domains can be mainly divided into three categories based on their focus [@P003; @P029]: industry, environment, and society. The magnitude of the applications can be seen in the statistics presented in Section \[chapter2:IoTP:IoT Related Statistics\]. Supply chain management [@P017], transportation and logistics [@P005], aerospace, aviation, and automotive are some of the industry focused applications of IoT.
Telecommunication, medical technology [@P362], healthcare, smart building, home [@P363] and office, media, entertainment, and ticketing are some of the society focused applications of IoT. Agriculture and breeding [@P244; @P013], recycling, disaster alerting, and environmental monitoring are some of the environment focused applications. Asin and Gascon [@P416] listed 54 application domains under twelve categories: smart cities, smart environment, smart water, smart metering, security and emergencies, retail, logistics, industrial control, smart agriculture, smart animal farming, domestic and home automation, and eHealth. IoT Related Statistics {#chapter2:IoTP:IoT Related Statistics} ---------------------- The vision of the IoT is heavily energised by statistics and predictions. We present the statistics to justify our focus on the IoT and to show the magnitude of the challenges. It is estimated that there are about 1.5 billion Internet-enabled PCs and over 1 billion Internet-enabled mobile phones today. These two categories will be joined with Internet-enabled devices (smart objects [@P041]) in the future. By 2020, there will be 50 to 100 billion devices connected to the Internet [@P029]. According to BCC Research [@P255], the global market for sensors was around \$56.3 billion in 2010. In 2011, it was around \$62.8 billion. The global market for sensors is expected to increase to \$91.5 billion by 2016, at a compound annual growth rate of 7.8%. The Essential Component of IoT: Sensor Networks {#chapter2:IoT:The Backbone of IoT: Sensor Networks} ----------------------------------------------- We provide a brief introduction to sensor networks in this section as it is the most essential component of the IoT. A sensor network comprises one or more sensor nodes, which communicate between themselves using wired and wireless technologies. In sensor networks, sensors can be homogeneous or heterogeneous.
Multiple sensor networks can be connected together through different technologies and protocols. One such approach is through the Internet. The components and the layered structure of a typical sensor network are discussed in Section \[chapter2:IoT:Layers in Sensor Networks\]. We discuss how sensor networks and the IoT work together in Section \[chapter2:IoT:Relationship Between Sensor Networks and IoT\]. However, there are other technologies that can complement the sensing and communication infrastructure in the IoT paradigm, such as traditional ad-hoc networks. These are clearly a different technology from sensor networks and have many weaknesses. The differences are comprehensively discussed in [@P009]. There are three main architectures in sensor networks: flat architecture (data transfers from static sensor nodes to the sink node in a multi-hop fashion), two-layer architecture (more static and mobile sink nodes are deployed to collect data from sensor nodes), and three-layer architecture (multiple sensor networks are connected together over the Internet). Therefore, the IoT follows a three-layer architecture. Most of the sensors deployed today are wireless. There are several major wireless technologies used to build wireless sensor networks: wireless personal area network (WPAN) (e.g. Bluetooth), wireless local area network (WLAN) (e.g. Wi-Fi), wireless metropolitan area network (WMAN) (e.g. WiMAX), wireless wide area network (WWAN) (e.g. 2G and 3G networks), and satellite network (e.g. GPS). Sensor networks also use two types of protocols for communication: non-IP based (e.g. Zigbee and Sensor-Net) and IP-based protocols (NanoStack, PhyNet, and IPv6). The sensor network is not a concept that emerged with the IoT. The concept of a sensor network and related research existed a long time before the IoT was introduced.
However, sensor networks were used in limited domains to achieve specific purposes, such as environment monitoring [@P193], agriculture [@P244], medical care [@P158], event detection [@P113], and structural health monitoring [@P067]. Further, there are three categories of sensor networks that comprise the IoT [@P266]: body sensor networks (BSN), object sensor networks (OSN), and environment sensor networks (ESN). Molla and Ahamed [@P417] identified ten challenges that need to be considered when developing sensor network middleware solutions: abstraction support, data fusion, resource constraints, dynamic topology, application knowledge, programming paradigm, adaptability, scalability, security, and QoS support. A comparison of different sensor network middleware solutions is also provided based on the above parameters. Several selected projects are also briefly discussed in order to discover the approaches they take to address various challenges associated with sensor networks. Some of the major sensor network middleware approaches are IrisNet, JWebDust, Hourglass, HiFi, Cougar, Impala, SINA, Mate, TinyDB, Smart Object, Agilla, TinyCubus, TinyLime, EnviroTrack, Mires, Hood, and Smart Messages. A survey on web-based wireless sensor architectures and applications is presented in [@P475]. Layers in Sensor Networks {#chapter2:IoT:Layers in Sensor Networks} ------------------------- We have presented a typical structure of a sensor network in Figure \[Fig:Layered Structure on a Sensor Network\]. It comprises the most common components in a sensor network. As we have shown with the orange coloured arrows, data flows from right to left. Data is generated by the low-end sensor nodes and high-end sensor nodes. Then, data is collected by mobile and static sink nodes. The sink nodes send the data to low-end computational devices. These devices perform a certain amount of processing on the sensor data. 
Then, the data is sent to high-end computational devices to be processed further. Finally, data reaches the cloud where it will be shared, stored, and processed significantly. ![Layered structure of a sensor network: these layers are identified based on the capabilities possessed by the devices. In the IoT, this layered architecture may have additional sub-layers, as it is expected to comprise a large variety of sensing capabilities.[]{data-label="Fig:Layered Structure on a Sensor Network"}](./Figures/37-Sensor_Networks.pdf) Based on the capabilities of the devices involved in a sensor network, we have identified six layers. Information can be processed in any layer. Capability means the processing, memory, communication, and energy capacity. Capabilities increase from layer one to layer six. Based on our identification of layers, it is evident that an ideal system should understand the capability differences and perform data management accordingly. It is all about efficiency and effectiveness. For example, performing processing in the first few layers could reduce data communication. However, devices in the first few layers do not have a sufficient amount of energy and processing power to do comprehensive data processing [@P318]. It is therefore important to find more efficient and effective ways of managing data, such as collecting, modelling, reasoning, and distributing it. Relationship Between Sensor Networks and IoT {#chapter2:IoT:Relationship Between Sensor Networks and IoT} -------------------------------------------- In earlier sections we introduced both IoT and sensor network concepts. In this section we explain the relationship between the two concepts. Previously, we argued that sensor networks are the most essential components of the IoT. Figure \[Fig:Relationship Between Sensor Networks and IoT\] illustrates the big picture. The IoT comprises sensors and actuators. The data is collected using sensors. Then, it is processed and decisions are made. 
Finally, actuators perform the decided actions. This process is further discussed in Section \[chapter2:CDLC\]. Further, the integration between wireless sensor networks and the IoT is comprehensively discussed in [@P351]. The difference between sensor networks (SN) and the IoT is largely unexplored and blurred. We can elaborate some of the characteristics of both SN and the IoT to identify the differences. SN comprises the sensor hardware (sensors and actuators), firmware and a thin layer of software. The IoT comprises everything that SN comprises, and further comprises a thick layer of software such as middleware systems, frameworks, APIs, and many other software components. The software layer is installed across computational devices (both low and high-end) and the cloud. From their origin, SNs were designed, developed, and used for specific application purposes, for example, detecting bush fire [@P266]. In the early days, sensor networks were largely used for monitoring purposes and not for actuation [@P277]. In contrast, the IoT is not focused on specific applications. The IoT can be explained as a general purpose sensor network [@P285]. Therefore, the IoT should support many kinds of applications. During the stage of deploying sensors, the IoT would not be targeted at collecting specific types of sensor data; rather, it would deploy sensors where they can be used for various application domains. For example, a company may deploy sensors, such as pressure sensors, on a newly built bridge to track its structural health. However, these sensors may be reused and connected with many other sensors in order to track traffic at a later stage. Therefore, middleware solutions, frameworks, and APIs are designed to provide generic services and functionalities such as intelligence, semantic interoperability, and context-awareness, which are required to perform communication between sensors and actuators effectively. Sensor networks can exist without the IoT. 
However, the IoT cannot exist without SN, because SN provides the majority of the hardware infrastructure support (e.g. sensing and communicating), through providing access to sensors and actuators. There are several other technologies that can provide access to sensor hardware, such as wireless ad-hoc networks. However, they are not scalable and cannot accommodate the needs of the IoT individually [@P009], though they can complement the IoT infrastructure. As is clearly depicted in Figure \[Fig:Relationship Between Sensor Networks and IoT\], SN are a part of the IoT. However, the IoT is not a part of SN. ![Relationship between sensor networks and IoT.[]{data-label="Fig:Relationship Between Sensor Networks and IoT"}](./Figures/40-Relationship_Between_IoT_and_SN2.pdf) Characteristics of the IoT {#chapter2:IoT:Characteristics of IoT} -------------------------- In Section \[chapter2:IoT:Relationship Between Sensor Networks and IoT\], we highlighted the differences between sensor networks and the IoT. Further, we briefly explore the characteristics of the IoT from a research perspective. Based on previous research efforts in the IoT [@P029], we identify seven characteristics: *intelligence*, *architecture*, *complex system*, *size considerations*, *time considerations*, *space considerations*, and *everything-as-a-service*. These characteristics need to be considered when developing IoT solutions through all phases: design, development, implementation, and evaluation. **Intelligence:** This means the application of knowledge. First, knowledge needs to be generated by collecting data and reasoning over it. Transforming the collected raw data into knowledge (high-level information) can be done by collecting, modelling, and reasoning the context. Context can be used to fuse sensor data together to infer new knowledge. Once we have knowledge, it can be applied towards more intelligent interaction and communication. 
**Architecture:** The IoT should be facilitated by a hybrid architecture which comprises many different architectures. Primarily, there would be two architectures: event driven [@P038] and time driven. Some sensors produce data when an event occurs (e.g. a door sensor); the rest produce data continuously, based on specified time frames (e.g. a temperature sensor). Mostly, the IoT and SN are event driven [@P275]. Event-Condition-Action (ECA) rules are commonly used in such systems. **Complex system:** The IoT comprises a large number of objects (sensors and actuators) that interact autonomously. New objects will start communicating and existing ones will disappear. Currently, there are millions of sensors deployed around the world [@P069]. Interactions may differ significantly depending on the objects' capabilities. Some objects may have very few capabilities, and as such store very limited information and do no processing at all. In contrast, some objects may have larger memory, processing, and reasoning capabilities, which make them more intelligent. **Size considerations:** It is predicted that there will be 50-100 billion devices connected to the Internet by 2020 [@P029]. The IoT needs to facilitate the interaction among these objects. The numbers will grow continuously and will never decrease. Similar to the number of objects, the number of interactions may also increase significantly. **Time considerations:** The IoT will have to handle billions of parallel and simultaneous events, due to the massive number of interactions. Real-time data processing is essential. **Space considerations:** The precise geographic location of an object will be critical [@P083], as location plays a significant role in context-aware computing. When the number of objects gets larger, tracking becomes a key requirement. Interactions are highly dependent on the objects' locations, their surroundings, and the presence of other entities (e.g. objects and people). 
**Everything-as-a-service:** Due to the popularity of cloud computing [@P498], consuming resources as a service [@P502], such as Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and Software-as-a-Service (SaaS), has become mainstream. The everything-as-a-service [@P533] model is highly efficient, scalable, and easy to use. The IoT demands significant amounts of infrastructure to be put in place in order to make its vision a reality, and it would follow a community or crowd-based approach. Therefore, sharing will be essential, and an everything-as-a-service model, most notably sensing-as-a-service [@ZMP003], would suit this need. Middleware Support for IoT {#chapter2:IoT:Middleware Support for IoT} -------------------------- As we mentioned at the beginning, the IoT needs to be supported by middleware solutions. *“Middleware is a software layer that stands between the networked operating system and the application and provides well known reusable solutions to frequently encountered problems like heterogeneity, interoperability, security, dependability [@P064].”* The functionalities required by IoT middleware solutions are explained in detail in [@P029; @P018; @P006; @P019; @P020]. In addition, challenges in developing middleware solutions for the IoT are discussed in [@P028]. We present the summary of a survey conducted by Bandyopadhyay et al. [@P118]. They selected the leading middleware solutions and analysed them based on the functionalities each one offers: *device management*, *interoperation*, *platform portability*, *context-awareness*, and *security and privacy*. Table \[Tbl:IoT Middleware Comparison\] shows the survey results. By the time we were preparing this survey, some of the middleware solutions listed (i.e. GSN and ASPIRE) were in the process of being extended towards next generation solutions (i.e. the EU FP7 project OpenIoT (2012-2014) [@P377]) by combining each other’s strengths. 
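The Event-Condition-Action rules mentioned under the *architecture* characteristic above can be sketched in a few lines. The following Python sketch is illustrative only: the rule engine, sensor names, and thresholds are our own assumptions, not taken from any of the surveyed middleware solutions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    """Event-Condition-Action rule: when `event` fires, run `action` if `condition` holds."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

class RuleEngine:
    def __init__(self) -> None:
        self.rules: list[EcaRule] = []

    def register(self, rule: EcaRule) -> None:
        self.rules.append(rule)

    def fire(self, event: str, payload: dict) -> list[str]:
        """Dispatch an event; return the names of the actions that were triggered."""
        triggered = []
        for rule in self.rules:
            if rule.event == event and rule.condition(payload):
                rule.action(payload)
                triggered.append(rule.action.__name__)
        return triggered

# Illustrative rule: a door sensor (discrete, event-driven) fires after office hours.
def raise_alarm(payload: dict) -> None:
    print(f"Alarm: door opened at {payload['hour']}:00")

engine = RuleEngine()
engine.register(EcaRule(
    event="door_open",
    condition=lambda p: p["hour"] >= 22 or p["hour"] < 6,  # condition: outside office hours
    action=raise_alarm,
))

engine.fire("door_open", {"hour": 23})  # condition holds: raise_alarm runs
engine.fire("door_open", {"hour": 14})  # condition fails: nothing triggered
```

Time-driven sensors fit the same engine by firing a periodic "tick" event carrying the latest reading.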
Middleware & DM & I &PP &CA &SP\ Hydra [@P105] &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\checkmark$\ ISMB [@P375] &$\checkmark$ &$\times$ &$\checkmark$ &$\times$ &$\times$\ ASPIRE [@P366] &$\checkmark$ &$\times$ &$\checkmark$ &$\times$ &$\times$\ UBIWARE [@P146] &$\checkmark$ &$\times$ &$\checkmark$ &$\checkmark$ &$\times$\ UBISOAP [@P367] &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\times$ &$\times$\ UBIROAD [@P119] &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\checkmark$\ GSN [@P050] &$\checkmark$ &$\times$ &$\checkmark$ &$\times$ &$\checkmark$\ SMEPP [@P371] &$\checkmark$ &$\times$ &$\checkmark$ &$\checkmark$ &$\checkmark$\ SOCRADES [@P373] &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\times$ &$\checkmark$\ SIRENA [@P368] &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\times$ &$\checkmark$\ WHEREX [@P370] &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\times$ &$\times$\ (DM = device management, I = interoperation, PP = platform portability, CA = context-awareness, SP = security and privacy.) \[Tbl:IoT Middleware Comparison\] Research Gaps {#chapter2:IoT:Research Gaps to Improve} ------------- According to Table \[Tbl:IoT Middleware Comparison\], it can be seen that the majority of the IoT middleware solutions do not provide context-awareness functionality. In contrast, almost all the solutions are highly focused on device management, which involves connecting sensors to the IoT middleware. In the early days, context-awareness was strongly bound to pervasive and ubiquitous computing. Even though there were some middleware solutions that provided a degree of context-aware functionality, they did not satisfy the requirements that the IoT demands. We discuss the issues and drawbacks of existing solutions, in detail, in Section \[chapter2:PRE\]. We discuss some of the research directions in Section \[chapter2:LLFRD\]. In this section, we introduced the IoT paradigm and highlighted the importance of context-awareness for the IoT. 
We also learnt that context-awareness has not been addressed in existing IoT-focused solutions, which motivates us to survey the solutions in other paradigms to evaluate the applicability of context-aware computing techniques towards the IoT. In the next section we discuss context-awareness fundamentals that help us understand the in-depth discussions in the later sections. Context Awareness Fundamentals {#chapter2:CAF} ============================== This section discusses definitions of context and context awareness, context-aware features, types of context and categorisation schemes, different levels and characteristics of context-awareness, and finally, context management design principles in the IoT paradigm. Context-awareness Related Definitions {#chapter2:CAF:Context-awareness Related Definitions} ------------------------------------- ### Definition of Context {#chapter2:CAF:CARD:Definition of Context} The term context has been defined by many researchers. Dey et al. [@P143] evaluated and highlighted the weaknesses of these definitions. Dey claimed that the definition provided by Schilit and Theimer [@P173] was based on examples and cannot be used to identify new context. Further, Dey claimed that the definitions provided by Brown [@P175], Franklin and Flachsbart [@P178], Rodden et al. [@P181], Hull et al. [@P179], and Ward et al. [@P183] used synonyms to refer to context, such as environment and situation. Therefore, these definitions also cannot be used to identify new context. Abowd and Mynatt [@P115] identified the five W’s (Who, What, Where, When, Why) as the minimum information that is necessary to understand context. Schilit et al. [@P116] and Pascoe [@P180] have also defined the term context. Dey claimed that these definitions were too specific and cannot be used to identify context in a broader sense, and provided a definition for context as follows: *“Context is any information that can be used to characterise the situation of an entity. 
An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves [@P104].”* We accept the definition of context provided by Abowd et al. [@P104] to be used in this research work, because this definition can be used to identify context from data in general. If we consider a data element, by using this definition, we can easily identify whether the data element is context or not. A number of dictionaries have also defined and explained the word context. Synonyms [@P437]: *“Circumstance, situation, phase, position, posture, attitude, place, point; terms; regime; footing, standing, status, occasion, surroundings, environment, location, dependence.”* Definition by FOLDOC [@P438]: *“That which surrounds, and gives meaning to, something else.”* Definition by WordNet [@P439]: *“Discourse that surrounds a language unit and helps to determine its interpretation”* Definition by Longman [@P440]: *“The situation, events, or information that are related to something and that help you to understand it”* In addition, Sanchez et al. [@P344] explained the distinction between raw data and context information as follows: **Raw (sensor) data:** Unprocessed data retrieved directly from the data source, such as sensors. **Context information:** Generated by processing raw sensor data; it is additionally checked for consistency and metadata is added. For example, the sensor readings produced by GPS sensors can be considered raw sensor data. Once we process the GPS sensor readings so that they represent a geographical location, we call the result context information. Therefore, in general, the raw values produced by sensors can be considered data. If this data can be used to generate context information, we identify the data as context. Therefore, what we capture from sensors is mostly data, not context information. 
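The raw data versus context information distinction above can be illustrated in code. A minimal Python sketch, assuming a hypothetical lookup table in place of a real reverse-geocoding service (the place names and coordinates are illustrative):

```python
# Raw sensor data: unprocessed readings straight from a GPS sensor.
raw_gps = {"lat": -35.2809, "lon": 149.1300}

# A hypothetical lookup standing in for a reverse-geocoding service.
KNOWN_PLACES = {
    (-35.28, 149.13): "Canberra city centre",
}

def to_context(reading: dict, precision: int = 2) -> dict:
    """Turn raw GPS values into context information: a named location plus metadata."""
    key = (round(reading["lat"], precision), round(reading["lon"], precision))
    place = KNOWN_PLACES.get(key, "unknown location")
    return {
        "location_name": place,  # the context information itself
        "source": "gps",         # metadata added during processing
        "raw": reading,          # the raw data it was derived from
    }

context = to_context(raw_gps)
print(context["location_name"])  # prints "Canberra city centre"
```

The raw reading on its own is just data; only after processing (and attaching metadata such as its source) does it become context information in Sanchez et al.'s sense.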
Ahn and Kim [@P278] define context (also called compound events) as a set of interrelated events with logical and timing relations among them. They also define an event as an occurrence that triggers a condition in a target area. There are two categories of events: discrete events and continuous events. If the sampling rate is *p*: **Discrete events:** Events occurring at times *t* and *t* + *p* are considered two separate event instances (e.g. a door opening, lights switching on, etc.). **Continuous events:** An event instance lasts for at least time *p*, so occurrences at times *t* and *t* + *p* cannot be considered two separate events (e.g. raining, having a shower, driving a car, etc.). ### Definition of Context-awareness {#chapter2:CAF:CARD:Definition of Context-awareness} The term context awareness, also called sentient, was first introduced by Schilit and Theimer [@P173] in 1994. Later, it was defined by Ryan et al. [@P182]. In both cases, the focus was on computer applications and systems. As stated by Abowd et al. [@P104], those definitions are too specific and cannot be used to identify whether a given system is a context-aware system or not. Therefore, Dey defined the term context-awareness as follows: *“A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user’s task. [@P104]”* We accept the above definition of context-awareness to be used in our research work, because we can use this definition to distinguish context-aware systems from the rest. If we consider a system, by using this definition we can easily identify whether this system is a context-aware system or not. Context-awareness frameworks should typically support acquisition, representation, delivery, and reaction [@P143]. In addition, there are three main approaches that we can follow to build context-aware applications [@P339]. 
**No application-level context model:** Applications perform all the actions, such as context acquisition, pre-processing, storing, and reasoning, within the application boundaries. **Implicit context model:** Applications use libraries, frameworks, and toolkits to perform context acquisition, pre-processing, storing, and reasoning tasks. These provide a standard design to follow that makes it easier to build applications quickly. However, the context is still hard-bound to the application. **Explicit context model:** Applications use a context management infrastructure or middleware solution. Therefore, actions such as context acquisition, pre-processing, storing, and reasoning lie outside the application boundaries. Context management and the application are clearly separated and can be developed and extended independently. ### Definition of Context Model and Context Attribute {#chapter2:CAF:CARD:Definition_of_Context_Model_and_Context_Attribute} We adopt the following interpretations of context model and context attribute provided by Henricksen [@P389], based on Abowd et al. [@P104], in our research work. *“A context model identifies a concrete subset of the context that is realistically attainable from sensors, applications and users and able to be exploited in the execution of the task. The context model that is employed by a given context-aware application is usually explicitly specified by the application developer, but may evolve over time [@P389].”* *“A context attribute is an element of the context model describing the context. A context attribute has an identifier, a type and a value, and optionally a collection of properties describing specific characteristics [@P389].”* ### Definition of Quality of Context {#chapter2:CAF:CARD:Definition of Quality of Context} There are a number of definitions and parameters that have been proposed in the literature regarding quality of context (QoC). A survey on QoC is presented in [@P291]. 
QoC is defined using a set of parameters that expresses the quality requirements and properties of the context data. After evaluating a number of different parameter proposals in the literature, [@P291] defined QoC based on three parameters: context data validity, context data precision, and context data up-to-dateness. QoC is used to resolve context data conflicts. Further, they claim that QoC depends on the quality of the physical sensor, the quality of the context data, and the quality of the delivery process. Context-aware Features {#chapter2:CAF:Context-aware Features} ---------------------- After analysing and comparing the two previous efforts conducted by Schilit et al. [@P116] and Pascoe [@P180], three features that a context-aware application can support were identified: presentation, execution, and tagging. Even though the IoT vision was not known at the time these features were identified, they are highly applicable to the IoT paradigm as well. We elaborate these features from an IoT perspective. **Presentation:** Context can be used to decide what information and services need to be presented to the user. Let us consider a smart [@P007] environment scenario. When a user enters a supermarket and takes their smart phone out, what they want to see is their shopping list. Context-aware mobile applications need to connect to kitchen appliances such as a smart refrigerator [@P352] in the home to retrieve the shopping list and present it to the user. This illustrates the idea of presenting information based on context such as location, time, etc. By definition, the IoT promises to provide any service anytime, anyplace, with anything and anyone, ideally using any path/network. **Execution:** Automatic execution of services is also a critical feature in the IoT paradigm. Let us consider a smart home [@P007] environment. 
When a user starts driving home from their office, the IoT application employed in the house should switch on the air-conditioning system and the coffee machine so that they are ready to use by the time the user steps into the house. These actions need to be taken automatically based on the context. Machine-to-machine communication is a significant part of the IoT. **Tagging:** In the IoT paradigm, there will be a large number of sensors attached to everyday objects. These objects will produce large volumes of sensor data that has to be collected, analysed, fused and interpreted [@P109]. Sensor data produced by a single sensor will not provide the necessary information that can be used to fully understand the situation. Therefore, sensor data collected through multiple sensors needs to be fused together. In order to accomplish the sensor data fusion task, context needs to be collected. Context needs to be tagged together with the sensor data to be processed and understood later. Context annotation plays a significant role in context-aware computing research. We also refer to this *tagging* operation as *annotation*. Context Types and Categorisation Schemes {#chapter2:CAF:context Types} ---------------------------------------- Different researchers have identified context types differently based on different perspectives. Abowd et al. [@P104] introduced one of the leading mechanisms of defining context types. They identified location, identity, time, and activity as the primary context types. Further, they defined secondary context as the context that can be found using primary context. For example, given primary context such as a person’s identity, we can acquire many pieces of related information such as phone numbers, addresses, email addresses, etc. However, using this definition we are unable to identify the type of a given context. Let us consider two GPS sensors located in two different locations. We can retrieve their readings to identify the position of each sensor. 
However, we can only find the distance between the two sensors by performing calculations based on the raw values generated by the two sensors. The question is, ‘what is the category that *distance* belongs to? Is it primary or secondary?’ The *distance* is not just a value that we sensed. We computed the *distance* by fusing two pieces of context. The above definition does not represent this accurately. Thus, we define a context categorisation scheme (i.e. primary and secondary) that can be used to classify a given data value (e.g. a single data item such as the current time) of context in terms of an operational perspective (i.e. how the data was acquired). However, the same data value can be considered as primary context in one scenario and secondary context in another. For example, if we collect the blood pressure level of a patient directly from a sensor attached to the patient, it could be identified as primary context. However, if we derive the same information from a patient’s health record by connecting to the hospital database, we call it secondary context. Therefore, the same information can be acquired using different techniques. It is important to understand that the quality, validity, accuracy, cost and effort of acquisition, etc. may vary significantly based on the techniques used. This would be more challenging in the IoT paradigm, because there would be a large number of data sources that can be used to retrieve the same data value. To decide which source and technique to use would be a difficult task. We will revisit this challenge in Section VI. In addition, a similar type of context information can be classified as both primary and secondary. For example, location can be raw GPS data values or the name of the location (e.g. city, road, restaurant). Therefore, identifying a location as primary context without examining how the data has been collected is fairly inaccurate. 
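The *distance* example above can be made concrete: each GPS reading is primary context (sensed directly), while the distance obtained by fusing the two readings is secondary context (computed). A minimal Python sketch using the haversine great-circle formula; the coordinates themselves are illustrative:

```python
import math

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))  # 6371 km: mean Earth radius

# Primary context: positions read directly from two GPS sensors (illustrative values).
sensor_a = (-35.2809, 149.1300)
sensor_b = (-33.8688, 151.2093)

# Secondary context: computed by fusing the two primary values.
distance = haversine_km(sensor_a, sensor_b)
print(f"{distance:.0f} km")  # roughly 247 km
```

The same value could also arrive as secondary context from a retrieval operation (e.g. a web service call), which is exactly why the operational category depends on how the value was acquired, not on what it describes.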
Figure \[Fig:Context Types and Categories of Context\] depicts how the context can be identified using our context type definitions. [&gt;l@ p[0.35cm]{} p[0.40cm]{}p[0.40cm]{}p[0.40cm]{}p[0.40cm]{}p[0.40cm]{} c p[0.40cm]{}p[0.40cm]{}p[0.40cm]{}p[0.40cm]{}p[0.40cm]{} c p[0.38cm]{}p[0.38cm]{} c]{} Context Types & & & & & & & & & & & & & & & & \ User & & $\checkmark$ & & & $\checkmark$ & & & & & & & $\checkmark$ & & & $\checkmark$ & $\checkmark$\ Computing (System) & & $\checkmark$ & & & $\checkmark$ & & & & & & $\checkmark$ & $\checkmark$ & & & & $\checkmark$\ Physical (Environment) & & $\checkmark$ & $\checkmark$ & & $\checkmark$ & & & & & & $\checkmark$ & $\checkmark$ & & & $\checkmark$ &\ Historical & & & & & & & & & & & $\checkmark$ & & & & &\ Social & & & & & & & & & & & & $\checkmark$ & & & &\ Networking & & & & & & & & & & & & & & & $\checkmark$ &\ Things & & & & & & & & & & & & & & & & $\checkmark$\ Sensor & & & & & & & & & & & $\checkmark$ & & & & &\ Who (Identity) & $\checkmark$ & & $\checkmark$ & $\checkmark$ & & & & & & & & & & & &\ Where (Location) & $\checkmark$ & & $\checkmark$ & $\checkmark$ & & & & & & & & & & & &\ When (Time) & & & $\checkmark$ & $\checkmark$ & $\checkmark$ & & & & & & $\checkmark$ & $\checkmark$ & & & &\ What (Activity) & $\checkmark$ & & & $\checkmark$ & & & & & & & & & & & &\ Why & & & & $\checkmark$ & & & & & & & & & & & &\ Sensed & & & & & & $\checkmark$ & & & $\checkmark$ & & & & & & &\ Static & & & & & & $\checkmark$ & & & & & & & & & &\ Profiled & & & & & & $\checkmark$ & & & $\checkmark$ & & & & & & &\ Derived & & & & & & $\checkmark$ & & & $\checkmark$ & & & & & & &\ Operational & & & & & & & & $\checkmark$ & & & & & & & &\ Conceptual & & & & & & & & $\checkmark$ & & & & & & & &\ Objective & & & & & & & & & & & & & $\checkmark$ & & &\ Cognitive & & & & & & & & & & & & & $\checkmark$ & & &\ External (Physical) & & & & & & & $\checkmark$ & & & & & & & & &\ Internal (Logical) & & & & & & & $\checkmark$ & & & & & & & & &\ 
Low-level (Observable) & & & & & & & & & & $\checkmark$ & & & & $\checkmark$ & &\ High-level (Non-Observable) & & & & & & & & & & $\checkmark$ & & & & $\checkmark$ & &\ \[Tbl:Different\_Context\_Categorization\_Schemes\] **Primary context:** Any information retrieved without using existing context and without performing any kind of sensor data fusion operations (e.g. GPS sensor readings as location information). **Secondary context:** Any information that can be computed using primary context. The secondary context can be computed by using sensor data fusion operations or data retrieval operations such as web service calls (e.g. identify the distance between two sensors by applying sensor data fusion operations on two raw GPS sensor values). Further, retrieved context such as phone numbers, addresses, email addresses, birthdays, list of friends from a contact information provider based on a personal identity as the primary context can also be identified as secondary context. ![Context categorisation in two different perspectives: conceptual and operational. 
It shows why both operational and conceptual categorisation schemes are important in IoT paradigm as the capture different perspectives.[]{data-label="Fig:Context Types and Categories of Context"}](./Figures/44-Context_Types.pdf) [&gt;m[4cm]{}@ p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} p[0.07cm]{} c]{} & User & Computing (System) & Physical (Environment) & Historical & Social & Networking & Things & Sensor & Who (Identity) & Where (Location) & When (Time) & What (Activity) & Why & Sensed & Static & Profiled & Derived & Operational & Conceptual & Objective & Cognitive & External (Physical) & Internal (Logical) & Low-level (Observable) & High-level (Non-Observable) \ User &&&&&&&&&&&&&&&&&&&&&&&&&\ Computing (System) & 3&&&&&&&&&&&&&&&&&&&&&&&&\ Physical (Environment) & 3& 3&&&&&&&&&&&&&&&&&&&&&&&\ Historical & 3& 2& 2&&&&&&&&&&&&&&&&&&&&&&\ Social & 3& 2& 2&2&&&&&&&&&&&&&&&&&&&&&\ Networking & 3& 2& 3&2& 2&&&&&&&&&&&&&&&&&&&&\ Things & 3& 2& 2&2& 2& 2&&&&&&&&&&&&&&&&&&&\ Sensor & 3& 2& 1&2& 2& 2& 2& &&&&&&&&&&&&&&&&&\ Who (Identity) & 2& 2& 2&2& 2& 2& 2& 2&&&&&&&&&&&&&&&&&\ Where (Location) & 3& 3& 2&2& 2& 2& 2& 3& 3&&&&&&&&&&&&&&&&\ When (Time) & 3& 3& 3&2& 3& 3& 3& 3& 3& 3&&&&&&&&&&&&&&&\ What (Activity) & 3& 2& 2&2& 2& 2& 2& 2& 3& 3& 3&&&&&&&&&&&&&&\ Why & 3& 3& 3&2& 3& 3& 3& 3& 3& 3& 3& 3&&&&&&&&&&&&&\ Sensed & 1& 1& 1&2& 1& 1& 1& 1& 1& 1& 1& 1& 1&&&&&&&&&&&&\ Static & 2& 3& 3&2& 3& 3& 3& 3& 3& 3& 3& 3& 3& 3&&&&&&&&&&&\ Profiled & 2& 2& 2&2& 2& 2& 2& 2& 2& 2& 2& 2& 2& 3& 3&&&&&&&&&&\ Derived & 2& 2& 2&2& 2& 2& 2& 2& 2& 2& 2& 2& 2& 3& 3& 3&&&&&&&&&\ Operational & 3& 3& 3&2& 3& 3& 3& 3& 3& 3& 3& 3& 3& 2& 2& 2& 2&&&&&&&&\ Conceptual & 1& 1& 1&2& 1& 1& 1& 1& 1& 1& 1& 1& 1& 2& 2& 2& 2& 2&&&&&&&\ Objective & 2& 2& 2&2& 2& 2& 2& 2& 1& 1& 
1& 1& 1& 2& 2& 2& 2& 3& 2&&&&&&\ Cognitive & 1& 3& 3&2& 3& 3& 3& 3& 3& 3& 3& 3& 1& 3& 2& 1& 1& 3& 2& 3&&&&&\ External (Physical) & 2& 2& 2&2& 2& 2& 2& 2& 2& 2& 2& 2& 2& 1& 2& 3& 3& 2& 2& 2& 3&&&&\ Internal (Logical) & 2& 2& 2&2& 2& 2& 2& 2& 2& 2& 2& 2& 2& 3& 2& 1& 1& 2& 2& 2& 1& 3&&&\ Low-level (Observable) & 2& 2& 2&2& 2& 2& 2& 2& 2& 2& 2& 2& 3& 1& 2& 3& 3& 2& 2& 2& 3& 1& 3&&\ High-level (Non-Observable) & 2& 2& 2&2& 2& 2& 2& 2& 2& 2& 2& 2& 2& 3& 2& 1& 1& 2& 2& 2& 1& & 1& 3&\ \[Tbl:Relationship\_Between\_Different\_Context\_Categories\] We acknowledge location, identity, time, and activity as important context information. The IoT paradigm needs to consider more comprehensive categorisation schemes in a hierarchical manner, such as major categories, sub-categories, and so on. Operational categorisation schemes allow us to understand the issues and challenges in data acquisition techniques, as well as quality and cost factors related to context. In contrast, conceptual categorisation allows an understanding of the conceptual relationships between context. We have to integrate both perspectives in order to model context precisely. We compare different context categorisation schemes in Table \[Tbl:Comparison of Context Categorization Schemes\]. In addition to the two categorisation schemes we discussed earlier, there are several other schemes introduced by different researchers focusing on different perspectives. Further, we highlight relationships between different context categories (also called context types) in different perspectives in Table \[Tbl:Different\_Context\_Categorization\_Schemes\] and in Table \[Tbl:Relationship\_Between\_Different\_Context\_Categories\]. These context categories are not completely different from each other. Each category shares common characteristics with the others. The similarities and differences among categories are clearly presented in Table \[Tbl:Relationship\_Between\_Different\_Context\_Categories\].
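As a minimal sketch of this overlap, a single piece of context can be tagged under several categorisation schemes at once, answering both the conceptual question (what does it mean?) and the operational one (how was it acquired?). The `ContextItem` class and the scheme labels below are illustrative only, not part of any middleware's API; the category names follow the tables above.

```python
from dataclasses import dataclass, field

# Hypothetical container for one piece of context, tagged under several
# categorisation schemes simultaneously (scheme name -> category).
@dataclass
class ContextItem:
    name: str
    value: object
    categories: dict = field(default_factory=dict)

# A raw GPS reading: conceptually "where", operationally "sensed/low-level".
location = ContextItem(
    name="location",
    value=(-35.28, 149.13),
    categories={
        "conceptual (who/what/where/when)": "Where (Location)",
        "operational (Henricksen)": "Sensed",
        "operational (observability)": "Low-level (Observable)",
    },
)

# The same item answers conceptual and operational questions at once.
print(location.categories["operational (Henricksen)"])  # Sensed
```

The point of keeping both tag sets on one item is exactly the integration argued for above: neither scheme alone describes the context precisely.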
Further, we have listed and briefly explained three major context categorisation schemes and their categories proposed by previous researchers. In Table \[Tbl:Different\_Context\_Categorization\_Schemes\], we present each categorisation effort in chronological order from left to right. Schilit et al. [@P116] (1994): They categorised context into three categories using a conceptual categorisation technique based on three common questions that can be used to determine the context. 1. Where you are: This includes all location-related information such as GPS coordinates, common names (e.g. coffee shop, university, police), specific names (e.g. Canberra city police), specific addresses, user preferences (e.g. user’s favourite coffee shop). 2. Who you are with: The information about the people present around the user. 3. What resources are nearby: This includes information about resources available in the area where the user is located, such as machinery, smart objects, and utilities. [ |p[0.2cm]{}| p[2.9cm]{} p[6.7cm]{} p[6cm]{}| ]{} & Categorisation Schemes & Pros & Cons \ & Where, when, who, what, objective & Provide a broader guide that helps to identify the related context Less comprehensive & Do not provide information about operational aspects such as cost, time, complexity, techniques, and effort of data acquisition Do not provide information about frequency of update required \ & User, computing, physical, environmental, time, social, networking, things, sensors contexts & More clear and structured method to organise context More extensible and flexible More comprehensive & Do not provide information about operational aspects such as cost, time, complexity, techniques, and effort of data acquisition Do not provide information about frequency of update required \ & Why, cognitive & Allow modelling of the mental reasoning behind context & Do not provide information about core context, relationships between context or operational aspects such as cost, time, complexity,
techniques, and effort of data acquisition \ & Sensed, static, profiled, derived & Provide information about programming and coding level Provide information about context source and computational complexity Allow tracking of information such as frequency of update required, validation, quality, etc. Provide information about cost and effort of data acquisition & Weak in representing the relationship among context Difficult to classify context information due to ambiguity. Same piece of data can belong to different categories depending on the situation (e.g. location can be derived as well as sensed) \ & External (physical), internal (logical), low-level (observable), high-level (non-observable) & Provide information about context sources and the process of accessing data (e.g. whether more reasoning is required or not) Provide information about cost and effort of data acquisition Provide information about computational complexity & Weak in representing the relationship among context Difficult to classify context information due to ambiguity. Same piece of data can belong to different categories depending on the situation (e.g. temperature can come from a physical or a virtual sensor) \ \[Tbl:Comparison of Context Categorization Schemes\] Henricksen [@P389] (2003): Categorised context into four categories based on an operational categorisation technique. 1. Sensed: Sensor data directly sensed from the sensors, such as temperature measured by a temperature sensor. Values change over time with a high frequency. 2. Static: Static information which will not change over time, such as manufacturer of the sensor, capabilities of the sensor, range of the sensor measurements. 3. Profiled: Information that changes over time with a low frequency, such as once per month (e.g. location of sensor, sensor ID). 4. Derived: The information computed using primary context, such as the distance between two sensors calculated using two GPS sensors. Van Bunningen et al.
[@P304] (2005): Instead of categorising context, they classified the context categorisation schemes into two broader categories: operational and conceptual. 1. Operational categorisation: Categorises context based on how it was acquired, modelled, and treated. 2. Conceptual categorisation: Categorises context based on the meaning and conceptual relationships between the context. Based on the evaluation of context categorisation, it is evident that no single categorisation scheme can accommodate all the demands in the IoT paradigm. We present a comparison between conceptual and operational categorisation schemes in Table \[Tbl:Comparison of Context Categorization Schemes\]. To build an ideal context-aware middleware solution for the IoT, different categorisation schemes need to be combined in order to complement their strengths and mitigate their weaknesses. Levels of Context Awareness and characteristics {#chapter2:CAF:Levels of Context Awareness} ----------------------------------------------- Context awareness can be identified at three levels based on user interaction [@P430]. **Personalisation**: It allows the users to set their preferences, likes, and expectations to the system manually. For example, users may set the preferred temperature in a smart home environment where the heating system of the home can maintain the specified temperature across all rooms. **Passive context-awareness**: The system constantly monitors the environment and offers the appropriate options to the users so they can take actions. For example, when a user enters a supermarket, the mobile phone alerts the user with a list of discounted products to be considered. **Active context-awareness**: The system continuously monitors the situation and acts autonomously.
For example, if the smoke detectors and temperature sensors detect a fire in a room in a smart home environment, the system will automatically notify the fire brigade as well as the owner of the house via appropriate methods such as phone calls. In addition, Van Bunningen et al. [@P304] have comprehensively identified and discussed eight characteristics of context: context 1) is sensed through sensors or sensor networks, 2) is sensed by small and constrained devices, 3) originates from distributed sources, 4) is continuously changing, 5) comes from mobile objects, 6) has a temporal character, 7) has a spatial character, 8) is imperfect and uncertain. Context Awareness Management Design Principles {#chapter2:CAF:Context Awareness Management Design Principles} ---------------------------------------------- Martin et al. [@P294] have identified and comprehensively discussed six design principles related to context-aware management frameworks (middleware). Further, Ramparany et al. [@P340] and Bernardos et al. [@P302] have also identified several design requirements. We summarise the findings below with brief explanations. This list is not intended to be exhaustive. Only the most important design aspects are considered. **Architecture layers and components**: The functionalities need to be divided into layers and components in a meaningful manner. Each component should perform a small, well-defined part of the task and should be able to operate largely independently. **Scalability and extensibility**: Components should be able to be added or removed dynamically. For example, new functionalities (i.e. components) should be able to be added without altering the existing components (e.g. Open Services Gateway initiative). Components need to be developed according to standards shared across solutions, which improves scalability and extensibility (e.g. plug-in architectures).
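The plug-in style extensibility described above can be sketched in a few lines: components register with a framework at runtime and can be removed without altering one another. The `Framework` class and component names are invented for illustration, not taken from any specific middleware.

```python
# Minimal sketch of a plug-in architecture: each component performs a
# small, independent part of the task, and components can be added or
# removed dynamically without touching the others.
class Framework:
    def __init__(self):
        self._components = {}

    def register(self, name, component):
        self._components[name] = component

    def unregister(self, name):
        self._components.pop(name, None)

    def handle(self, event):
        # Every registered component reacts independently to the event.
        return [component(event) for component in self._components.values()]

fw = Framework()
fw.register("logger", lambda e: f"log:{e}")
fw.register("alerter", lambda e: f"alert:{e}")
print(fw.handle("fire"))   # both components react
fw.unregister("alerter")   # removed at runtime; the logger is unaffected
print(fw.handle("fire"))
```

The design point is that `Framework.handle` never changes when components are added, which is the extensibility property the principle asks for.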
**Application programming interface (API)**: All the functionalities should be available to be accessed via a comprehensive, easy-to-learn, and easy-to-use API. This allows the incorporation of different solutions very easily. Further, APIs can be used to bind context management frameworks to applications. Interoperability among different IoT solutions heavily depends on APIs and their usability. **Debugging mechanisms and tools**: Debugging is a critical task in any software development process. In the IoT paradigm, debugging would be difficult due to the exponential number of possible alternative interactions. In order to win the trust of the consumers, the IoT should prove its trustworthiness. Debugging mechanisms built into the framework will help to meet this challenge. For example, the justifications behind the results produced by the reasoners should be available to be evaluated to find possible inaccuracies so further development can be carried out. Some initial work in this area is presented in the Intelligibility Toolkit [@P384]. **Automatic context life cycle management**: Context-aware frameworks should be able to understand the available context sources (i.e. physical and virtual sensors) and their data structures, and automatically build internal data models to facilitate them. Further, raw context needs to be retrieved and transformed into appropriate context representation models correctly with minimum human intervention. **Context model independence**: Context needs to be modelled and stored separately from context-aware framework related code and data structures, which allows both parts to be altered independently. **Extended, rich, and comprehensive modelling**: Context models should be able to be extended easily. The IoT will need to deal with an enormous number of devices, and will be required to handle vast amounts of domain-specific context. It also needs to support complex relationships, constraints, etc.
In an ideal context-aware framework for the IoT, multiple different context representation models should be incorporated together to improve their efficiency and effectiveness. **Multi-model reasoning**: No single reasoning model can accommodate the demands of the IoT. We will discuss reasoning in Section \[chapter2:CAF:Context Reasoning Decision Models\]. Each reasoning model has its own strengths and weaknesses. An ideal framework should incorporate multiple reasoning models together to complement each other’s strengths and mitigate their weaknesses. **Mobility support**: In the IoT, most devices would be mobile, each with a different set of hardware and software capabilities. Therefore, context-aware frameworks should be developed in multiple flavours (i.e. versions), which can run on different hardware and software configurations (e.g. more capabilities for server-level software and fewer capabilities for mobile phones). **Share information (real-time and historic)**: In the IoT, there is no single point of control. The architecture would be distributed. Therefore, context sharing should happen at different levels: framework-to-framework and framework-to-application. Context model independence, discussed earlier, is crucial for sharing. **Resource optimisation**: Due to the scale (e.g. 50 billion devices), a small improvement in data structures or processing can make a huge impact on storage and energy consumption. This stays true for any type of resource used in the IoT. **Monitoring and event detection**: Events play a significant role in the IoT and are complemented by monitoring. Detecting an event triggers an action autonomously in the IoT paradigm. This is how the IoT will help humans carry out their day-to-day work easily and efficiently. Detecting events in real time is a major challenge for context-aware frameworks in the IoT paradigm.
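The monitoring and event-detection principle can be sketched as a subscription loop: the framework evaluates each incoming reading against subscribed conditions and fires the corresponding actions autonomously. The `EventMonitor` class, the fire condition, and the thresholds below are all invented for illustration.

```python
# Minimal sketch of autonomous event detection: consumers subscribe a
# (condition, action) pair; the monitor evaluates every reading and
# triggers actions for the conditions that hold.
class EventMonitor:
    def __init__(self):
        self._subscriptions = []  # list of (condition, action) pairs

    def subscribe(self, condition, action):
        self._subscriptions.append((condition, action))

    def feed(self, reading):
        # Evaluate one incoming context reading against all subscriptions.
        return [action(reading)
                for condition, action in self._subscriptions
                if condition(reading)]

monitor = EventMonitor()
# Hypothetical rule: high temperature plus smoke suggests a fire.
monitor.subscribe(
    lambda r: r["temp"] > 60 and r["smoke"],
    lambda r: f"notify fire brigade (temp={r['temp']})",
)

assert monitor.feed({"temp": 22, "smoke": False}) == []  # no event fires
print(monitor.feed({"temp": 75, "smoke": True}))         # action triggered
```

Doing this evaluation in real time over many heterogeneous streams is precisely the challenge noted above.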
Context Life Cycle {#chapter2:CDLC} ================== A data life cycle shows how data moves from phase to phase in software systems (e.g. application, middleware). Specifically, it explains where the data is generated and where the data is consumed. In this section we consider the movement of context in context-aware systems. Context-awareness is no longer limited to desktop, web, or mobile applications. It has already become a service: Context-as-a-Service (CXaaS) [@P024]. In other words, context management has become an essential functionality in software systems. This trend will grow in the IoT paradigm. There are web-based context management services (WCXMS) that provide context information management throughout the context’s life cycle. Hynes et al. [@P024] have classified data life cycles into two categories: Enterprise Lifecycle Approaches (ELA) and Context Lifecycle Approaches (CLA). ELA are not focused on context. However, these life cycles are robust and well-established, based on industry standard strategies for data management in general. In contrast, CLA are specialised in context management. However, they are not as well tested or standardised as ELA. We have selected ten popular data life cycles to analyse in this survey. In the following list, 1-5 belong to the ELA category and 6-10 to the CLA category. Three dots (...) denote reconnecting to the first phase, completing the cycle. The right arrow ($\rightarrow$) denotes data transfer from one phase to another. 1. *Information Lifecycle Management (ILM)* [@P516]: creation and receipt $\rightarrow$ distribution $\rightarrow$ use $\rightarrow$ maintenance $\rightarrow$ disposition $\rightarrow$ ... 2. *Enterprise Content Management (ECM)* [@P517]: capture $\rightarrow$ manage $\rightarrow$ store $\rightarrow$ preserve $\rightarrow$ deliver $\rightarrow$ ... 3.
*Hayden’s Data Lifecycle* [@P515]: collection $\rightarrow$ relevance $\rightarrow$ classification $\rightarrow$ handling and storage $\rightarrow$ transmission and transportation $\rightarrow$ manipulate, conversion and alteration $\rightarrow$ release $\rightarrow$ backup $\rightarrow$ retention destruction $\rightarrow$ ... 4. *Intelligence Cycle* [@P170]: collection $\rightarrow$ processing $\rightarrow$ analysis$\rightarrow$ publication $\rightarrow$ feedback $\rightarrow$ ... 5. *Boyd Control Loop* (also called OODA loop) [@P171]: observe $\rightarrow$ orient $\rightarrow$ decide $\rightarrow$ act $\rightarrow$ ... 6. *Chantzara and Anagnostou Lifecycle* [@P114]: sense (context provider) $\rightarrow$ process (context broker) $\rightarrow$ disseminate (context broker) $\rightarrow$ use (service provider) $\rightarrow$ ... 7. *Ferscha et al. Lifecycle* [@P518]: sensing $\rightarrow$ transformation $\rightarrow$ representation $\rightarrow$ rule base $\rightarrow$ actuation $\rightarrow$ ... 8. *MOSQUITO* [@P519]: context information discovery $\rightarrow$ context information acquisition $\rightarrow$ context information reasoning $\rightarrow$ ... 9. *WCXMS Lifecycle* [@P024]: (context sensing $\rightarrow$ context transmission $\rightarrow$ context acquisition $\rightarrow$ ... ) $\rightarrow$ context classification $\rightarrow$ context handling $\rightarrow$ (context dissemination $\rightarrow$ context usage $\rightarrow$ context deletion $\rightarrow$ context request $\rightarrow$... ) $\rightarrow$ context maintenance $\rightarrow$ context disposition $\rightarrow$... 10. *Baldauf et al.* [@P402]: sensors $\rightarrow$ raw data retrieval $\rightarrow$ reprocessing $\rightarrow$ storage $\rightarrow$ application. In addition to the life cycles, Bernardos et al. [@P302] identified three phases in a typical context management system: context acquisition, information processing, and reasoning and decision. 
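The phase-to-phase movement shared by the life cycles above can be sketched as a chain of small functions. The phase names follow the Chantzara and Anagnostou life cycle (sense $\rightarrow$ process $\rightarrow$ disseminate $\rightarrow$ use); the sensor name, data values, and function bodies are invented for illustration.

```python
# Minimal sketch of context moving through life-cycle phases.
def sense():
    # Context provider: a raw reading from a hypothetical GPS sensor.
    return {"sensor": "gps-1", "raw": (-35.28, 149.13)}

def process(ctx):
    # Context broker: derive a higher-level representation from raw data.
    lat, lon = ctx["raw"]
    return {**ctx, "location": f"{lat:.2f},{lon:.2f}"}

def disseminate(ctx):
    # Context broker: deliver the processed context to each consumer.
    return [ctx]

def use(contexts):
    # Service provider: consume the disseminated context.
    return [c["location"] for c in contexts]

result = use(disseminate(process(sense())))
print(result)  # ['-35.28,149.13']
```

Each function only hands its output to the next phase, which is the essential property all ten life cycles share regardless of how many phases they name.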
After reviewing the above life cycles, we derived an appropriate context life cycle (i.e. one with the minimum number of phases that still includes all essential steps) as depicted in Figure \[Fig:Context\_Data\_Life\_Cycle\]. ![This is the simplest form of a context life cycle. These four steps are essential in context management systems and middleware solutions. All other functions that systems may offer are value-added services.[]{data-label="Fig:Context_Data_Life_Cycle"}](./Figures/09-Context_Data_Life_Cycle.pdf) This context life cycle consists of four phases. First, context needs to be acquired from various sources. The sources could be physical sensors or virtual sensors (context acquisition). Second, the collected data needs to be modelled and represented in a meaningful manner (context modelling). Third, modelled data needs to be processed to derive high-level context information from low-level raw sensor data (context reasoning). Finally, both high-level and low-level context needs to be distributed to the consumers who are interested in context (context dissemination). The following discussion is based on these four phases. [ p[1.5cm]{} p[7.6cm]{} p[7.4cm]{} ]{} Criteria & Push & Pull \ Pros & Sensor hardware makes the major decisions on sensing and communication Can be both instant or interval sensing and communication & Software of the sensor data consumer makes the major decisions on sensing and communication Decisions on when to collect data are based on reasoning over a significant amount of data at the software level Can be both instant or interval sensing and communication \ Cons & Decisions on when to send data are based on reasoning over a limited amount of data Sensors need to be reprogrammed when requirements change & More communication bandwidth is required, as the software level has to send data requests to the sensors all the time \ Applicability & Can be used when sensors know when to send the data and have enough processing power and knowledge to reason locally.
(e.g. event detection where one or a small number of sensors can reason about and evaluate the conditions on their own, without complex software-level data processing and reasoning.) & Can be used when sensors do not have knowledge of when to send the data to the consumer. (e.g. event detection where a large amount of data needs to be collected, processed, and reasoned over in order to recognise the event.)\ \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Responsibility\] [ p[1.5cm]{} p[7.5cm]{} p[7.5cm]{} ]{} Criteria & Instant & Interval \ Pros & Saves energy since no redundant network communications are involved More accurate data can be gathered, as the network transmission is triggered as soon as the conditions are met & Either sensors can be configured to sense and communicate with data consumers at a predefined frequency or the sensor data consumers can retrieve data explicitly from the sensors at a predefined frequency Sensors do not need to be intelligent/knowledgeable or have significant processing and reasoning capabilities Allows understanding of trends or behaviour by collecting sensor data over time \ Cons & More knowledge is required to identify the conditions and the satisfaction of the conditions Hardware level (i.e. sensor) or software level should know exactly what to look for Difficult to detect events which require different types of data from a number of different sensors Comparatively consumes more energy for data processing & May waste energy due to redundant data communication Less accurate, as the sensor readings can change over the interval between two data communications Reasoning needs to be done at the software level by the data consumer, which may miss some occurrences of events due to the above inaccuracy \ Applicability & Can be used to detect frost events or heat events in the agricultural domain. In the smart home domain, this method can be used to detect someone entering a room via door sensors.
Ideally, applicable for situations where the expected outcome is well known by either the hardware level (i.e. sensors) or the software level & Can be used to collect data from temperature sensors for controlling air conditioning or measuring air pollution, where actions are monitoring-oriented rather than event-oriented. Ideally, applicable for situations where the expected outcome is not known by either the hardware level (i.e. sensors) or the software level \ \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Frequency\] [ p[1.5cm]{} p[5cm]{} p[5cm]{} p[5cm]{} ]{} Criteria & Direct Sensor Access & Through Middleware & Through Context Server \ Pros & Efficient as it allows direct communication with the sensors More control over sensor configuration and the data retrieval process & Easy to manage and retrieve context as most of the management tasks are facilitated by the middleware. Can retrieve data faster with less effort and technical knowledge & Fewer resources required Can retrieve data faster with less effort and technical knowledge \ Cons & Significant technical knowledge is required, including hardware-level embedded device programming and configuring Significant amount of time, effort, and cost involved Updating is very difficult due to the tight binding between sensor hardware and consumer application & Requires more resources (e.g. processing, memory, storage) as middleware solutions need to be employed Less control over sensor configuration Moderately efficient as data needs to be retrieved through middleware & No control over sensor configuration Less efficient as the context needs to be pulled from the server over the network \ Applicability & Can be used for small-scale scientific experiments. Can also be used for situations where a limited number of sensors is involved & IoT applications will use this method in most cases.
Can be used in situations where a large number of heterogeneous sensors is involved & Can be used in situations where a significant amount of context is required but only limited resources are available (i.e. context middleware solutions cannot be employed due to resource limitations), which only allow running the consumer application\ \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Source\] [ p[1.5cm]{} p[5cm]{} p[5cm]{} p[5cm]{} ]{} Criteria & Physical Sensors & Virtual Sensors & Logical Sensors \ Pros & Error detection is possible and relatively easy Missing value identification is also relatively easy Have access to low-level sensor configuration, and can therefore be more efficient & Provide moderately meaningful data Provide high-level context information Provided data are less processed Do not need to deal with hardware-level tasks & Provide highly meaningful data Provide high-level context information Usually more accurate Do not need to deal with hardware-level tasks \ Cons & Hardware deployment and maintenance is costly Have to deal with sensor- and hardware-level programming, design, development, test, debug Provide less meaningful and low-level raw sensor data & Difficult to find errors in data Filling missing values is not easy as they are mostly non-numerical and unpredictable & Difficult to find errors in data Filling missing values is not easy as they are mostly non-numerical Do not have control over the data production process License fees and other restrictions may apply \ Applicability & Can be used to collect physically observable phenomena such as light, temperature, humidity, gas, etc. & Can be used to collect information that cannot be measured physically, such as calendar details, email, chat, maps, contact details, social networking related data, user preferences, user behaviour, etc.
& Can be used to collect information that is costly or impossible to collect directly through a single physical sensor, where advanced processing and fusion of data from multiple sensors are required (e.g. weather information, activity recognition, location recognition, etc.). \ \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Sensor\_Types\] Context Acquisition {#chapter2:CDLC:Context Acquisition} ------------------- In this section we discuss five factors that need to be considered when developing context-aware middleware solutions in the IoT paradigm. The techniques used to acquire context vary based on responsibility, frequency, context source, sensor type, and acquisition process. ### Based on Responsibility Context (e.g. sensor data) acquisition can be primarily accomplished using two methods [@P334]: push and pull. A comparison is presented in Table \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Responsibility\]. Pull: The software component which is responsible for acquiring sensor data from sensors makes a request (e.g. a query) to the sensor hardware periodically (i.e. after certain intervals) or instantly to acquire data. Push: The physical or virtual sensor pushes data to the software component which is responsible for acquiring sensor data, periodically or instantly. Periodical or instant pushing can be employed to facilitate a publish and subscribe model. ### Based on Frequency Further, in the IoT paradigm, context can be generated based on two different event types: instant events and interval events (Table \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Frequency\]). Instant (also known as threshold violation): These events occur instantly. They do not span a period of time. Opening a door, switching on a light, or an animal entering an experimental crop field are instant events. In order to detect this type of event, sensor data needs to be acquired when the event occurs.
Both push and pull methods can be employed. Interval (also known as periodic): These events span a certain period of time. Rain, an animal eating a plant, or winter are interval events. In order to detect this type of event, sensor data needs to be acquired periodically (e.g. sense and send data to the software every 20 seconds). Both push and pull methods can be employed. ### Based on Source In addition, context acquisition methods can be categorised into three categories [@P419] based on where the context comes from. A comparison is presented in Table \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Source\]. Acquire directly from sensor hardware: In this method, context is directly acquired from the sensor by communicating with the sensor hardware and related APIs. Software drivers and libraries need to be installed locally. This method is typically used to retrieve data from sensors attached locally. Most devices and sensors today require some amount of driver support and can be connected via USB, COM, or serial ports. However, wireless technologies are becoming popular in the sensor community, which allows data transmission without driver installations. In the IoT paradigm, most objects will communicate with each other via wireless means. Acquire through a middleware infrastructure: In this method, sensor (context) data is acquired by middleware solutions such as GSN. The applications can retrieve sensor data from the middleware and not from the sensor hardware directly. For example, some GSN instances will directly access sensor hardware and the rest will communicate with other GSN instances to retrieve data. Acquire from context servers: In this method, context is acquired from several other context storages (e.g. databases, RSS (Really Simple Syndication) feeds, web services) via different mechanisms such as web service calls.
This mechanism is useful when the hosting device of the context-aware application has limited computing resources. Resource-rich context servers can be used to acquire and process context. ### Based on Sensor Types {#chapter2:CDLC:CA:Based_on_Sensor_Types} There are different types of sensors that can be employed to acquire context. In general usage, the term ‘sensor’ is used to refer to tangible sensor hardware devices. However, among the technical community, the term refers to any data source that provides relevant context. Therefore, sensors can be divided into three categories [@P543]: physical, virtual, and logical. A comparison is presented in Table \[Tbl:Comparison\_of\_Context\_Acquisition\_Methods\_based\_on\_Sensor\_Types\]. Physical sensors: These are the most commonly used type of sensors and they are tangible. These sensors generate sensor data by themselves. Most of the devices we use today are equipped with a variety of sensors (e.g. temperature, humidity, microphone, touch). A discussion on commonly used sensor data types and sensors is presented in [@P544]. The data retrieved from physical sensors is called low-level context. It is less meaningful, trivial, and vulnerable to small changes. IoT solutions need to understand the physical world using imperfect, conflicting, and imprecise data. Virtual sensors: These sensors do not necessarily generate sensor data by themselves. Virtual sensors retrieve data from many sources and publish it as sensor data (e.g. calendar, contact number directory, Twitter statuses, email and chat applications). These sensors do not have a physical presence. They commonly use web services technology to send and receive data. Logical sensors (also called software sensors): They combine physical sensors and virtual sensors in order to produce more meaningful information. A web service dedicated to providing weather information can be called a logical sensor.
Weather stations use thousands of physical sensors to collect weather information. They also collect information from virtual sensors such as maps, calendars, and historic data. Finally, weather information is produced by combining both physical and virtual sensors. In addition, the Android mobile operating system consists of a number of software sensors such as gravity, linear accelerometer, rotation vector, and orientation sensors. ### Based on Acquisition Process There are three ways to acquire context: sense, derive, and manually provided. Sense: The data is sensed through sensors, including sensed data stored in databases (e.g. retrieve temperature from a sensor, retrieve appointment details from a calendar). Derive: The information is generated by performing computational operations on sensor data. These operations could be as simple as web service calls or as complex as mathematical functions run over sensed data (e.g. calculate the distance between two sensors using GPS coordinates). The necessary data should be available to apply any numerical or logical reasoning technique. Manually provided: Users provide context information manually via predefined settings options such as preferences (e.g. that the user does not want to receive event notifications between 10pm and 6am). This method can be used to retrieve any type of information. Context Modelling {#chapter2:CAF:Context Modelling} ----------------- We discuss the basic definition of context modelling in Section \[chapter2:CAF:CARD:Definition\_of\_Context\_Model\_and\_Context\_Attribute\]. Context modelling is also widely referred to as context representation. There are several popular context modelling techniques in context-aware computing. Before we present the discussion on context modelling techniques, let’s briefly introduce context modelling fundamentals. Context models can be static or dynamic. Static models have a predefined set of context information that will be collected and stored [@P271].
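The static/dynamic distinction can be sketched as follows: a static model fixes the attribute set in advance, while a dynamic model can absorb newly discovered context attributes at runtime. The classes and field names below are illustrative only.

```python
from dataclasses import dataclass

# Static model: the set of context attributes is predefined and fixed.
@dataclass
class StaticContext:
    location: str
    temperature: float

# Dynamic model: attributes can be added while the system is running,
# e.g. when a new context source appears. A dict stands in for any
# extensible representation.
class DynamicContext(dict):
    pass

static = StaticContext(location="field-3", temperature=21.5)

dynamic = DynamicContext(location="field-3")
dynamic["soil_moisture"] = 0.31  # attribute from a newly discovered sensor
print(sorted(dynamic))
```

The trade-off mirrors the discussion above: the static model is easier to validate and query, while the dynamic one copes with the open-ended set of sources the IoT brings.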
The requirements that need to be taken into consideration when modelling context information are identified in [@P216] as heterogeneity and mobility, relationships and dependencies, timeliness (also called freshness), imperfection, reasoning, usability of modelling formalisms, and efficient context provisioning. Typically, there are two steps in representing context according to a model.

- Context modelling process: In the first step, new context information needs to be defined in terms of attributes, characteristics, relationships with previously specified context, quality-of-context attributes, and the queries for synchronous context requests.

- Organize context according to the model: In the second step, the result of the context modelling step needs to be validated. Then the new context information needs to be merged and added to the existing context information repository. Finally, the new context information is made available to be used when required.

The first step performs the actual modelling of context. However, the factors and parameters that are considered when modelling context are very subjective; they vary from one solution to another. We use two examples to demonstrate this variance. Currently, there is no standard to specify what type of information needs to be considered in context modelling. We discussed the context categories proposed by researchers in Section \[chapter2:CAF:context Types\]. Even though these categories provide high-level guidelines towards choosing relevant context, choosing specific context attributes is a subjective decision. *Example 1:* MoCA [@P277] has used an object oriented approach to model context using XML. There are three sections in the proposed context model: structural information (e.g. attributes and dependencies among context types), behavioural information (e.g. whether the context attribute has a constant or variable value), and context-specific abstractions (e.g. contextual events and queries).
*Example 2:* W4 Diary [@P287] uses a W4 (who, what, where, when) based context model to structure data in order to extract high-level information from location data. For example, W4 represents context as tuples (e.g. Who: John, What: walking:4km/h, Where: ANU, Canberra, When: 2013-01-05:9.30am). In the IoT paradigm, context information has six states [@P335]: ready, running, suspended, resumed, expired, and terminated. These states are similar to the process states in an operating system; they align context to an event. An example scenario from the smart agriculture domain can be used to explain the state transitions of context.

- Ready: Every context is in the ready state at the initial stage (e.g. a possible event can be ‘an animal eating crop’).

- Running: When the context is valid and the associated event is occurring (e.g. sensors detect that an animal is eating the crop).

- Suspended: When the context seems to be invalid temporarily (e.g. sensors detect that the animal stops eating the crop temporarily).

- Resumed: When the context becomes valid again after being suspended (e.g. sensors detect that the animal starts to eat the crop again).

- Expired: When the context has expired and further information is not available (e.g. sensor data has not been received by the system for the last 60 seconds, where all sensor data is considered to be expired (based on policy) within 20 seconds from the time it is collected).

- Terminated: When the context is no longer valid (i.e. something else has been inferred) and further information is not available (e.g. sensors detect that the animal moves away from the crops).

[ m[0.05cm]{} m[6.5cm]{} m[10cm]{} ]{} & RDF(S) & OWL(2) \ Pros & Provide basic elements to describe and organize knowledge; further, OWL is built on top of RDFS; relatively simple; faster processing and reasoning & Improved version of RDF(S), so adaptability from RDF(S) to OWL is high; an increasing number of tools are supported; more expressive (e.g. larger vocabulary/constraints, rules, more meaningful); higher machine interoperability (e.g. strong syntax); W3C approved standard for semantics (since 2004); comes in three versions (i.e.
OWL Lite, OWL DL, OWL Full), where each one has more expressive and reasoning power than the previous \ Cons & Lack of inconsistency checking and reasoning; limited expressiveness (e.g. no cardinality support) & Relatively complex; lower performance (e.g. requires more computation power and time) \ \[Tbl:Comparison of Semantic Technologies\]

The most popular context modelling techniques are surveyed in [@P431; @P184]. These surveys discuss a number of systems that have been developed based on the following techniques. Each of the following techniques has its own strengths and weaknesses. We discuss context modelling techniques at a high level. The actual implementations of these techniques can vary widely depending on the application domain (e.g. implementation details may differ from embedded environments to mobile environments to cloud based environments). Therefore, our focus is on the conceptual perspective of each modelling technique, not on specific implementations. Our discussion is based on the six most popular context modelling techniques: *key-value, markup schemes, graphical, object based, logic based,* and *ontology based modelling*. A comparison of these models is presented in Table \[Tbl:Comparison\_of\_Context\_Modelling\_and\_Representation\_Techniques\].

### Key-Value Modelling

This technique models context information as key-value pairs in different formats such as text files and binary files. It is the simplest form of context representation among all the techniques. Key-value pairs are easy to manage when the amount of data is small. However, key-value modelling is not scalable and is not suitable for storing complex data structures. Hierarchical structures or relationships cannot be modelled using key-value pairs, and this lack of data structuring capability makes it difficult to retrieve modelled information efficiently. Attaching meta information is also not possible.
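As a minimal illustration (all keys and values below are invented), key-value context might simply be held in a flat dictionary:

```python
# Minimal sketch of key-value context modelling.
# All keys and values are illustrative, not from any real system.
context = {
    "user.name": "John",
    "user.location": "Canberra",
    "room.temperature": "22C",
    "notifications.quiet_hours": "22:00-06:00",
}

# Retrieval is a simple lookup...
print(context["room.temperature"])  # 22C

# ...but relationships (e.g. which room the user is in) cannot be
# expressed, and there is no schema to validate entries against.
```

The limitations discussed above are visible even in this toy example: nothing links `user.location` to `room.temperature`, and no meta information (source, freshness) can be attached to a value.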
The key-value technique is an application-oriented and application-bounded technique that suits the purpose of temporary storage, such as less complex application configurations and user preferences.

### Markup Scheme Modelling (Tagged Encoding)

This technique models data using tags, so context is stored within tags. It is an improvement over the key-value modelling technique. The advantage of using markup tags is that they allow efficient data retrieval. Further, validation is supported through schema definitions, and sophisticated validation tools are available for popular markup techniques such as XML. Range checking is also possible to some degree for numerical values. Markup schemes such as XML are widely used in almost all application domains to store data temporarily, transfer data among applications, and transfer data among application components. In contrast, markup languages do not provide the advanced expressive capabilities that allow reasoning. Further, due to the lack of design specifications, context modelling, retrieval, interoperability, and re-usability over different markup schemes can be difficult. A common application of markup based modelling is modelling profiles, which are commonly developed using languages such as XML. However, the concept of markup languages is not restricted to XML: any language or mechanism (e.g. JSON) that supports tag based storage allows markup scheme modelling. A popular example of markup scheme modelling is Composite Capabilities/Preference Profiles (CC/PP) [@P529]. There are a significant number of similar emerging applications, such as ContextML [@P423], in context-aware computing. Tuples are also used to model context [@P271].

### Graphical Modelling

This technique models context with relationships. Some examples of this modelling technique are the Unified Modelling Language (UML) [@P530] and Object Role Modelling (ORM) [@P531].
In terms of expressive richness, graphical modelling is better than markup and key-value modelling, as it allows relationships to be captured in the context model. The actual low-level representation of the graphical modelling technique can vary: it could be a SQL database, a noSQL database, XML, etc. Many other extensions have also been proposed and implemented using this technique [@P389]. Further, as databases are familiar to most developers, graphical modelling is a well known, easy to learn, and easy to use technique. Databases can hold massive amounts of data and provide simple data retrieval operations, which can be performed relatively quickly. In contrast, the number of different implementations (i.e. different databases and other solutions) makes interoperability difficult. Further, there are limitations in data retrieval mechanisms such as SQL. Sophisticated context retrieval requirements may demand very complex SQL queries, which can be difficult to create, use, and manage even with the sophisticated tools that exist today. Adding context information and changing the data structure is also difficult in later stages, although some of the recent trends and solutions in the noSQL [@P556] movement allow these structure alteration issues to be overcome. Therefore, graphical modelling techniques can be used as persistent storage of context.

### Object Based Modelling

Object based (or object oriented) concepts are used to model data using class hierarchies and relationships. The object oriented paradigm promotes encapsulation and re-usability. As most high-level programming languages support object oriented concepts, this modelling can be integrated into context-aware systems easily. Therefore, object based modelling is suitable for use as an internal, non-shared, code based, run-time context modelling, manipulation, and storage mechanism. However, it does not provide inbuilt reasoning capabilities.
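A minimal sketch of object based context modelling in Python (the classes, attributes, and values below are invented for illustration, not taken from any real system):

```python
# Hypothetical class hierarchy for object based context modelling.
class Context:
    def __init__(self, source, timestamp):
        self.source = source        # which sensor produced the context
        self.timestamp = timestamp  # freshness of the context

class LocationContext(Context):
    def __init__(self, source, timestamp, latitude, longitude):
        super().__init__(source, timestamp)
        self.latitude = latitude
        self.longitude = longitude

    def describe(self):
        return f"({self.latitude}, {self.longitude}) from {self.source}"

loc = LocationContext("gps-1", "2013-01-05T09:30", -35.28, 149.13)
print(loc.describe())  # (-35.28, 149.13) from gps-1
```

Encapsulation and relationships come for free from the class hierarchy, but, as noted above, any reasoning over such objects has to be coded by hand.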
Validation of object oriented designs is also difficult due to the lack of standards and specifications.

### Logic Based Modelling

Facts, expressions, and rules are used to represent information about the context. Rules are also used by other modelling techniques, such as ontologies. They are primarily used to express policies, constraints, and preferences. Logic based modelling provides much more expressive richness than the models discussed previously, so reasoning is possible to a certain level. The specific structures and languages that can be used to model context using rules vary, but the lack of standardisation reduces re-usability and applicability. Furthermore, highly sophisticated and interactive graphical techniques can be employed to develop logic based or rule based representations; as a result, even non-technical users can add rules and logic to the system at run time. Logic based modelling allows new high-level context information to be extracted from low-level context. Therefore, it has the capability to enhance other context modelling techniques by acting as a supplement.

### Ontology Based Modelling

The context is organised into ontologies using semantic technologies. A number of different standards (RDF, RDFS, OWL) and reasoning capabilities are available and can be used depending on the requirements. A wide range of development tools and reasoning engines is also available. However, context retrieval can become computationally intensive and time consuming as the amount of data increases. According to many surveys in context-aware computing and sensor data management, ontologies are the preferred mechanism for managing and modelling context despite these weaknesses. Due to their popularity and wide adoption during the last five years in both academia and industry, we present a brief discussion of semantic modelling and reasoning.
However, our intention is not to survey semantic technologies but to highlight the applicability of semantics in the context-aware domain from an IoT perspective. Comprehensive and extensive amounts of information on semantic technology are available in [@P557; @P558; @P378]. \[chapter2:CAF:CM:Ontology\_Based\_Modelling\]

[ m[1.2cm]{} m[4.8cm]{} m[4.8cm]{} m[5.5cm]{} ]{} Techniques & Pros & Cons & Applicability \ Key-Value & Simple; flexible; easy to manage when small in size & Strongly coupled with applications; not scalable; no structure or schema; hard to retrieve information; no way to represent relationships; no validation support; no standard processing tools available & Can be used to model a limited amount of data, such as user preferences and application configurations; mostly independent and non-related pieces of information. Also suitable for limited data transfer and any other less complex temporary modelling requirements. \ Markup Scheme Tagged Encoding (e.g. XML) & Flexible; more structured; validation possible through schemas; processing tools are available & Application dependent, as there are no standards for structures; can be complex when many levels of information are involved; moderately difficult to retrieve information & Can be used as an intermediate data organisation format as well as a mode of data transfer over the network. Can be used to decouple the data structures used by two components in a system (e.g. SensorML [@P256] for storing sensor descriptions, JSON as a format for data transfer over the network)\ Graphical (e.g. databases) & Relationship modelling; information retrieval is moderately easy; different standards and implementations are available; validation possible through constraints & Querying can be complex; configuration may be required; interoperability among different implementations is difficult; no standards, but governed by design principles & Can be used for long term and large volume permanent data archival.
Historic context can be stored in databases.\ Object Based & Relationship modelling; can be well integrated using programming languages; processing tools are available & Hard to retrieve information; no standards, but governed by design principles; lack of validation & Can be used to represent context at the programming code level and for runtime context manipulation; very short term, temporary, and mostly stored in computer memory. Also supports data transfer over the network.\ Logic Based & Allows high-level context to be generated using low-level context; simple to model and use; supports logical reasoning; processing tools are available & No standards; lack of validation; strongly coupled with applications & Can be used to generate high-level context using low-level context (i.e. generate new knowledge), model events and actions (i.e. event detection), and define constraints and restrictions. \ Ontology Based & Supports semantic reasoning; more expressive representation of context; strong validation; application independent and sharable; strong support from standardisation; fairly sophisticated tools available & Representation can be complex; information retrieval can be complex and resource intensive & Can be used to model domain knowledge and structure context based on the relationships defined by the ontology. Rather than storing data in ontologies, data can be stored in appropriate data sources (i.e. databases) while the structure is provided by ontologies.\ \[Tbl:Comparison\_of\_Context\_Modelling\_and\_Representation\_Techniques\]

Khoo [@P057] has explained the evolution of the web in four stages: the basic Internet as Web 1.0, social media and user generated content as Web 2.0, the semantic web as Web 3.0, and the IoT as Web 4.0. In this classification, the semantic web has been given a separate phase to show its importance and the significant changes that semantic technologies can bring to the web in general. The ontology is the main component in semantic technology that allows it to model data.
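To make the idea concrete, an ontology ultimately stores knowledge as subject-predicate-object triples. The sketch below uses plain Python tuples rather than a real RDF toolkit such as rdflib, and every name and relation in it is invented:

```python
# Toy triple store illustrating ontology-style context modelling.
# Real systems would use RDF/OWL via a library such as rdflib.
triples = {
    ("John", "locatedIn", "Canberra"),
    ("Canberra", "partOf", "Australia"),
    ("John", "performs", "walking"),
}

def query(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Where is John?
print(query(s="John", p="locatedIn"))  # [('John', 'locatedIn', 'Canberra')]
```

Because relations are explicit, a reasoner can chain them (John is located in Canberra, Canberra is part of Australia, hence John is in Australia), which is exactly the kind of inference that flat key-value or markup representations cannot support.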
Based on the previous approaches and surveys [@P184], one of the most appropriate formats for managing context is ontologies. Ontologies offer an expressive language to represent relationships and context. They also provide comprehensive reasoning mechanisms. Ontologies allow knowledge sharing, and they decouple the knowledge from the application and program code [@P419]. There are several reasons to develop and use ontologies in contrast to other modelling techniques. The most common reasons are to [@P191; @P447] share a common understanding of the structure of information among people or software agents, analyse domain knowledge, separate domain knowledge from operational knowledge, enable reuse of domain knowledge, infer high-level knowledge, and make domain assumptions explicit. Due to the dynamic nature of the IoT, middleware solutions should support applications that are not even known at middleware design time. Ontologies allow the integration of knowledge from different domains into applications when necessary. Studer et al. [@P546] defined the concept of ontology as follows. *“An ontology is a formal, explicit specification of a shared conceptualisation. A conceptualisation refers to an abstract model of some phenomenon in the world by having identified the relevant concepts of that phenomenon. Explicit means that the type of concepts used, and the constraints on their use are explicitly defined. For example, in medical domains, the concepts are diseases and symptoms, the relations between them are causal and a constraint is that a disease cannot cause itself. Formal refers to the fact that the ontology should be machine readable, which excludes natural language. Shared reflects the notion that an ontology captures consensual knowledge, that is, it is not private to some individual, but accepted by a group.”* Another accepted definition has been presented by Noy and McGuinness [@P447].
Ontologies are further discussed extensively, from the perspective of principles, methods, and applications, in [@P445]. Some of the requirements and objectives behind designing an ontology are simplicity, flexibility and extensibility, generality, and expressiveness [@P545]. In addition, some of the general requirements in context modelling and representation are unique identification, validation, reuse, handling uncertainty, and incomplete information [@P185]. A further eight principles for developing ontologies are identified by Korpipaa and Mantyjarvi [@P034]: domain, simplicity, practical access, flexibility and expandability, facilitating inference, genericity, efficiency, and expressiveness. Ontologies consist of several common key components [@P197; @P332], such as individuals, classes, attributes, relations, function terms, restrictions, rules, axioms, and events. Furthermore, there are two steps in developing ontologies. First, the domain and scope need to be clearly defined. Then, existing ontologies need to be reviewed to find possibilities of leveraging them, since one of the main goals of ontologies is the reusability of shared knowledge. By the time this survey was prepared, there were several popular domains that design, develop, and use ontologies; the sensor domain is one of them. A survey of the semantic specification of sensors is presented in [@P103], which evaluates and compares a number of ontologies and their capabilities. There are several popular semantic web ontology languages that can be used to develop ontologies: RDF [@P252], RDFS [@P559], and OWL [@P148]. The current recommendation is OWL 2, which is an extended version of OWL. A significant amount of OWL usage has been noticed in the context modelling and reasoning domain [@P185].
It further emphasises the requirement of having the modelling language, reasoning engines, and mechanism for defining rules as a bundle, rather than choosing among the available options arbitrarily, to get the real power of semantic technologies. SWRL is one of the available solutions for adding rules to OWL [@P216]. SWRL is not a hybrid approach, as it is fully integrated into ontological reasoning. In contrast, when the amount of data becomes larger and the structure becomes complex, ontologies can become exceedingly complex, causing the reasoning process to be resource intensive and slow. However, some of the main reasons to choose OWL as the context modelling mechanism are as follows [@P419; @P332]. W3C strongly supports the standardisation of OWL. Therefore, a variety of development tools are available for integrating and managing OWL ontologies, which makes them easier to develop and share. OWL allows interoperability among context-aware systems; features such as classes, properties, constraints, and individuals are important for supporting ontology reuse, mapping, and interoperability. OWL supports a high level of inference/reasoning. OWL is more expressive; for example, it provides cardinality constraints, which enable additional restrictions to be imposed on classes. We compare the two most popular web ontology languages, RDF(S) and OWL(2), in Table \[Tbl:Comparison of Semantic Technologies\] to highlight the fundamental differences.

![image](./Figures/43-Model_Types_Survey.pdf)

After evaluating several context modelling techniques, it becomes clear that incorporating multiple modelling techniques, so that they mitigate each other’s weaknesses, is the best way to produce efficient and effective results. No single modelling technique is ideal for use in a standalone fashion. There is a strong relationship between context modelling and reasoning; for example, some reasoning techniques prefer certain modelling techniques.
However, this should not limit the employability of different context reasoning and modelling techniques together. In the next section we discuss reasoning in context-aware computing.

Context Reasoning Decision Models {#chapter2:CAF:Context Reasoning Decision Models}
---------------------------------

Context reasoning can be defined as a method of deducing new knowledge, and understanding better, based on the available context. It can also be explained as a process of producing high-level context deductions from a set of contexts [@P331]. The requirement for reasoning emerges from two characteristics of raw context: imperfection (i.e. unknown, ambiguous, imprecise, or erroneous) and uncertainty. Reasoning performance can be measured in terms of efficiency, soundness, completeness, and interoperability [@P185]. Reasoning is also called inferencing. Context reasoning comprises several steps; broadly, we can divide them into three phases [@P214].

- Context pre-processing: This phase cleans the collected sensor data. Due to inefficiencies in sensor hardware and network communication, collected data may be inaccurate or missing. Therefore, data needs to be cleaned by filling in missing values, removing outliers, validating context via multiple sources, and so on. These tasks have been extensively researched by the database, data mining, and sensor network research communities over many years.

- Sensor data fusion: This is a method of combining sensor data from multiple sensors to produce more accurate, more complete, and more dependable information that could not be achieved through a single sensor [@P248]. In the IoT, fusion is extremely important, because there will be billions of sensors available. As a result, a large number of alternative sources will exist to provide the same information.

- Context inference: The generation of high-level context information using lower-level context. The inferencing can be done in a single iteration or in multiple iterations.
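Such multi-step context inference can be sketched as repeated rule application; in the toy below, each pass rewrites low-level context into higher-level context (all rules and values are invented):

```python
# Toy context inference: each iteration rewrites low-level context
# into higher-level context. Rules and values are illustrative only.
rules = {
    (-35.29, 149.12): "PurplePickle cafe in Canberra",
    "PurplePickle cafe in Canberra": "John's favourite cafe",
}

def infer(context, iterations=2):
    for _ in range(iterations):
        context = rules.get(context, context)  # apply a rule if one matches
    return context

print(infer((-35.29, 149.12)))  # John's favourite cafe
```

The first iteration maps raw GPS coordinates to a place name; the second maps the place name to user-specific knowledge, mirroring how each inference step yields more meaningful context.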
Revisiting an example from a different perspective, W4 Diary [@P287] represents context as tuples (e.g. Who: John, What: walking:4km/h, Where: ANU, Canberra, When: 2013-01-05:9.30am). This low-level context can be run through a number of reasoning mechanisms to generate the final results. For example, in the first iteration, the longitude and latitude values of a GPS sensor may be inferred as *PurplePickle cafe in Canberra*. In the next iteration, *PurplePickle cafe in Canberra* may be inferred as *John’s favourite cafe*. Each iteration gives more accurate and meaningful information. There are a large number of different context reasoning decision models, such as decision trees, naive Bayes, hidden Markov models, support vector machines, k-nearest neighbour, artificial neural networks, Dempster-Shafer, ontology-based, rule-based, fuzzy reasoning, and many more. Most of these models originated in, and are employed in, the fields of artificial intelligence and machine learning. Therefore, they are not specific to context reasoning but are commonly used across many different fields in computing and engineering. We present the results of a survey conducted by Lim and Dey [@P384] in Figure \[Fig:Survey\_on\_Context\_Reasoning\_Techniques\]. They investigated the popularity of context reasoning decision models. The survey is based on the literature from three major conferences over five years: Computer-Human Interaction (CHI) 2003-2009, Ubiquitous Computing (Ubicomp) 2004-2009, and Pervasive 2004-2009. In the IoT paradigm, there are many sensors that sense and produce context information. The amount of information that will be collected by over 50 billion sensors is enormous. Therefore, using all of this context for reasoning is not feasible for many reasons, such as processing time, power, and storage. Furthermore, Guan et al. [@P331] have shown that using more context will not necessarily improve the accuracy of the inference in a considerable manner.
They used two reasoning models in their research: back-propagation neural networks and k-nearest neighbours. According to the results, 93% accuracy was achieved using ten raw context items; adding 30 more raw context items to the reasoning model increased the accuracy by only 1.63%. Therefore, selecting the appropriate raw context for reasoning is critical to inferring high-level context with high accuracy. Context reasoning has been researched over many years. The most popular context reasoning techniques (also called decision models) are surveyed in [@P185; @P216; @P215]. Our intention in this paper is not to survey context reasoning techniques but to introduce them briefly, to help the reader understand and appreciate the role of context reasoning in the IoT paradigm. We classify context reasoning techniques broadly into six categories: *supervised learning, unsupervised learning, rules, fuzzy logic, ontological reasoning* and *probabilistic reasoning*. A comparison of these techniques is presented in Table \[Tbl:Context\_Reasoning\_Decision\_Models\].

[ m[2.1cm]{} m[4.8cm]{} m[4.5cm]{} m[5cm]{} ]{} Techniques & Pros & Cons & Applicability \ Supervised Learning (Artificial neural networks, Bayesian networks, case-based reasoning, decision tree learning, support vector machines) & Fairly accurate; a number of alternative models are available; have a mathematical and statistical foundation & Require a significant amount of data; every data element needs to be converted into numerical values; selecting the feature set can be challenging; can be resource intensive (processing, storage, time); less semantic, so less meaningful; training data required; models can be complex; difficult to capture existing knowledge & For situations where the feature set is easily identifiable, the possible outcomes are known, and large data sets (including training data) are available in numerical terms.
(For example: activity recognition, missing value identification.) \ Unsupervised Learning (Clustering, k-nearest neighbour) & No training data required; no need to know the possible outcomes & Models can be complex; less semantic, so less meaningful; difficult to validate; the outcome is not predictable; can be resource intensive (processing, storage, time) & For situations where the possible outcomes are not known. (For example: unusual behaviour detection, analysing agricultural fields to identify appropriate locations to plant a specific type of crop.) \ Rules & Simple to define; easy to extend; less resource (e.g. processing, storage) intensive & Need to be defined manually; can be error prone due to manual work; no validation or quality checking & For situations where raw data elements need to be converted into high-level context information. Suitable for defining events.\ Fuzzy Logic & Allows a more natural representation; simple to define; easy to extend; less resource (e.g. processing, storage) intensive; can handle uncertainty & Needs to be defined manually; can be error prone due to manual work; no validation or quality checking; may reduce the quality (e.g. precision) of the results due to the natural representation & For situations where low-level context needs to be converted into high-level, more natural context information. This type of simplification makes further processing easy. For example, controlling an automated irrigation system where water is released when the system detects the soil is ‘dry’.\ Ontology based (First-Order Predicate Logic) & Allows complex reasoning; allows complex representation; more meaningful results; validation and quality checking are possible; can reason over both numerical and textual data & Data needs to be modelled in a compatible format (e.g. OWL, RDF); limited numerical reasoning; lower performance (e.g. requires more computation power and time) & For situations where knowledge is critical.
For example, storing and reasoning over domain knowledge about the agricultural domain: it allows the context information to be stored according to the ontology structure and automatically reasoned over later when required.\ Probabilistic logic (Dempster-Shafer, hidden Markov models, naive Bayes) & Allows evidence to be combined; can handle unseen situations; alternative models are available; can handle uncertainty; provides moderately meaningful results & The probabilities must be known; reasons over numerical values only & For situations where probabilities are known and combining evidence from different sources is essential. For example, evidence produced by a camera, infra-red sensors, an acoustic sensor, and a motion detector can be combined to detect a wild animal infiltrating an agricultural field.\ \[Tbl:Context\_Reasoning\_Decision\_Models\]

### Supervised learning

In this category of techniques, we first collect training examples and label them according to the results we expect. We then derive a function that can generate the expected results from the training data. This technique is widely used in mobile phone sensing [@P217] and activity recognition [@P187]. A *decision tree* is a supervised learning technique that builds a tree from a dataset, which can then be used to classify data. This technique has been used to develop a student assessment system in [@P561]. *Bayesian networks* are a technique based on probabilistic reasoning concepts. They use directed acyclic graphs to represent events and the relationships among them, and are widely used in statistical reasoning. Example applications are presented in [@P197; @P289]. Bayesian networks are commonly used for combining uncertain information from a large number of sources and deducing higher-level contexts. *Artificial neural networks* attempt to mimic biological neural systems. They are typically used to model complex relationships between inputs and outputs or to find patterns in data.
The body sensor networks domain has employed this technique for pervasive healthcare monitoring in [@P267]. *Support vector machines* are widely used for pattern recognition in context-aware computing. They have been used to recognise the activities of patients in the healthcare domain [@P562] and to learn situations in a smart home environment [@P209].

### Unsupervised learning

This category of techniques can find hidden structures in unlabelled data. Because no training data is used, there is no error or reward signal to evaluate a potential solution. Clustering techniques such as *k-nearest neighbour* are popular in context-aware reasoning. Specifically, clustering is used in low-level (sensor hardware level) sensor network operations, such as routing, and in high-level tasks, such as indoor and outdoor positioning and location [@P565]. Unsupervised neural network techniques such as the Kohonen Self-Organizing Map (KSOM) are used to classify incoming sensor data in real time [@P566]. Noise detection and outlier detection are other applications in context-aware computing. Applications of unsupervised learning techniques in relation to body sensor networks are surveyed in [@P267]. An unsupervised clustering method has been employed to capture user contexts by dynamic profiling in [@P268].

### Rules

This is the simplest and most straightforward method of reasoning of them all. Rules are usually structured in an IF-THEN-ELSE format, and this is the most popular method of reasoning according to Figure \[Fig:Survey\_on\_Context\_Reasoning\_Techniques\]. It allows high-level context information to be generated from low-level context. Recently, rules have been heavily used in combination with ontological reasoning [@P243; @P420; @P421]. MiRE [@P298] is a minimal rule engine for context-aware mobile devices. Most user preferences are encoded using rules. Rules are also used in event detection [@P136; @P128].
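The IF-THEN structure of rule based reasoning can be sketched as a minimal rule engine in a few lines (the rules and context attributes below are invented):

```python
# Minimal rule engine: each rule is a (condition, conclusion) pair.
# Conditions test the current context dictionary; rules are illustrative.
rules = [
    (lambda ctx: ctx.get("soil_moisture", 100) < 20, "soil is dry"),
    (lambda ctx: ctx.get("motion") and ctx.get("sound") == "chewing",
     "animal eating crop"),
]

def apply_rules(ctx):
    """Return high-level context derived from low-level context."""
    return [conclusion for condition, conclusion in rules if condition(ctx)]

print(apply_rules({"soil_moisture": 12}))                 # ['soil is dry']
print(apply_rules({"motion": True, "sound": "chewing"}))  # ['animal eating crop']
```

The simplicity is the appeal: rules are easy to define and extend, but, as noted in the comparison table, they must be written manually and there is no automatic validation of their correctness.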
Rules are expected to play a significant role in the IoT, where they are the easiest and simplest way to model human thinking and reasoning in machines. PRIAMOS [@P139] has used semantic rules to annotate sensor data with context information. The application of rule-based reasoning to context-aware I/O control is clearly explained in [@P567].

### Fuzzy logic

Fuzzy logic allows approximate reasoning instead of fixed and crisp reasoning. It is similar to probabilistic reasoning, but confidence values represent degrees of membership rather than probabilities [@P444]. In traditional logic theory, the acceptable truth values are 0 and 1; in fuzzy logic, partial truth values are acceptable. This allows real-world scenarios to be represented more naturally, as most real-world facts are not crisp. It further allows natural-language definitions (e.g. temperature: slightly warm, fairly cold) rather than exact numerical values (e.g. temperature: 10 degrees Celsius). In other words, it allows imprecise notions such as tall, short, dark, trustworthy, and confident to be captured, which is critical in context information processing. In most cases, fuzzy reasoning cannot be used as a standalone reasoning technique; it is usually used to complement other techniques such as rule-based, probabilistic, or ontological reasoning. Gaia [@P568] has used fuzzy logic in context providers to handle uncertainty. Several examples of applying fuzzy logic to represent context information are presented in [@P547; @P548].

### Ontology based

This technique is based on description logic, a family of logic-based knowledge representation formalisms. Ontological reasoning is mainly supported by two common semantic web language representations: RDF(S) [@P252] and OWL(2) [@P148]. We discussed ontology based modelling in Section \[chapter2:CAF:CM:Ontology\_Based\_Modelling\].
Semantic web languages are also complemented by several semantic query languages (RDQL, RQL, TRIPLE) and a number of reasoning engines: FACT [@P253], RACER, and Pellet [@P150]. Rule languages such as SWRL [@P243] are increasingly popular in ontological reasoning. The advantage of ontological reasoning is that it integrates well with ontology modelling. In contrast, a disadvantage is that ontological reasoning cannot handle missing values or ambiguous information, whereas statistical reasoning techniques are good at exactly that. Rules can be used to minimise this weakness by generating new context information based on low-level context. Missing values can also be tackled by rules that replace them with suitable predefined values. However, these mechanisms will not perform accurately in highly dynamic and uncertain domains. Ontological reasoning is heavily used in a wide range of applications, such as activity recognition [@P187], hybrid reasoning [@P187], and event detection [@P128]. A survey on semantic-based reasoning is presented in [@P215]; it also compares a number of context-aware frameworks based on the modelling techniques, reasoning techniques, and architectures used in their systems. Comprehensive and extensive information on semantic technology is available in [@P557; @P558; @P378]. In addition, a semantic-based architecture for sensor data fusion is presented in [@P072; @P073; @P071].

### Probabilistic logic

This category of techniques allows decisions to be made based on probabilities attached to the facts related to the problem. It can be used to combine sensor data from two different sources, and further to resolve conflicts among context. Most often these techniques are used to reason about the occurrence of events. Probabilistic logic has been used in [@P444] to encode access control policies.
*Dempster-Shafer*, which is based on probabilistic logic, allows different pieces of evidence to be combined to calculate the probability of an event. Dempster-Shafer is commonly used in sensor data fusion for activity recognition. In [@P548; @P238], it has been used to determine whether there is a meeting in a room. Other example applications are presented in [@P236; @P235]. *Hidden Markov models [@P553]* are also a probabilistic technique; they allow a state to be inferred from observable evidence without reading the state directly. For example, they provide a method to bridge the gap between raw GPS sensor measurements and high-level information such as a user's destination or mode of transportation, using observable evidence such as the user's calendar, the weather, etc. Hidden Markov models are commonly used for activity recognition in context-aware domains; for example, they have been used to learn situation models in a smart home [@P209]. Up to now, we have presented and discussed a number of context modelling and reasoning techniques. However, it is clear that each technique has its own strengths and weaknesses; no single technique achieves perfect results on its own. Therefore, the best way to tackle the problem of context awareness is to combine multiple models in such a way that, as a whole, they reduce each other's weaknesses. For example, Alternative Context Construction Trees (ACCT) [@P326] is an approach that enables the concurrent evaluation and consolidation of different reasoning models such as logic rules, Bayesian networks, and CoCoGraphs [@P560]. There are two reasons that context information can become uncertain, as discussed in \[chapter2:CDLC:ESRF:QCRV\]. Therefore, employing strategies that can reason under uncertainty, such as Bayesian networks, Dempster-Shafer, or fuzzy logic, is essential in such situations. The process of combining multiple techniques is presented in [@P216; @P463].
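The Dempster-Shafer evidence combination mentioned above (e.g. deciding whether a meeting is taking place) can be sketched with Dempster's rule of combination. The sensor names and mass values below are invented assumptions for illustration, not taken from the cited systems.

```python
# Dempster's rule of combination, sketched for two sensors giving evidence
# about whether a meeting is taking place. All mass values are invented.

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

MEETING, QUIET = frozenset({"meeting"}), frozenset({"quiet"})
EITHER = MEETING | QUIET  # ignorance: mass on the whole frame of discernment

audio  = {MEETING: 0.6, EITHER: 0.4}              # speech detected
motion = {MEETING: 0.7, QUIET: 0.1, EITHER: 0.2}  # several people moving
fused = combine(audio, motion)
print(round(fused[MEETING], 3))  # -> 0.872
```

Note how belief in "meeting" after fusion (0.872) exceeds what either sensor supplied alone, which is exactly the appeal of evidence combination for sensor fusion.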
We briefly explain the hybrid context modelling and reasoning approach as follows. At the lowest level, statistical techniques can be used to fuse sensor data. Then, fuzzy logic can be employed to convert fixed data into more natural terms. Next, Dempster-Shafer can be used to combine sensor data from different sources. In addition, machine learning techniques, such as support vector machines and artificial neural networks, can be used for further reasoning. After the statistical reasoning is complete, the high-level data can be modelled using semantic technologies such as ontologies. Ontological reasoning can then be applied at the higher level to infer additional context information using domain knowledge. A similar process is explained in detail in [@P463].

Context Distribution {#chapter2:CAF:Context Distribution}
--------------------

Context distribution is a fairly straightforward task. It provides methods to deliver context to the consumers. From the consumer perspective this task can be called context acquisition, so the discussion presented in Section \[chapter2:CDLC:Context Acquisition\] is completely applicable: all the factors we discussed under context acquisition need to be considered for context distribution as well. Beyond that, two other methods are commonly used in context distribution:

Query: The context consumer makes a request in terms of a query, which the context management system uses to produce results.

Subscription (also called publish/subscribe): The context consumer subscribes with a context management system by describing its requirements. The system then returns results periodically or when an event occurs (e.g. a threshold violation). In other words, consumers can subscribe to a specific sensor or to an event. In the underlying implementation, queries may also be used to define subscriptions. Further, this method is typically used in real-time processing.
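The two distribution methods above can be sketched in a few lines. The `ContextManager` class and its method names are illustrative assumptions, not the API of any surveyed framework.

```python
# Sketch of the two context-distribution methods: a one-shot query and a
# publish/subscribe registration. Class and method names are invented.

class ContextManager:
    def __init__(self):
        self.latest = {}          # context type -> latest value
        self.subscribers = []     # (context type, condition, callback)

    def query(self, ctx_type):
        """Consumer-initiated query: return the latest known value."""
        return self.latest.get(ctx_type)

    def subscribe(self, ctx_type, condition, callback):
        """Register interest; callback fires when an update meets the condition."""
        self.subscribers.append((ctx_type, condition, callback))

    def publish(self, ctx_type, value):
        self.latest[ctx_type] = value
        for t, cond, cb in self.subscribers:
            if t == ctx_type and cond(value):
                cb(value)

cm = ContextManager()
alerts = []
cm.subscribe("temperature", lambda v: v > 30, alerts.append)  # threshold violation
cm.publish("temperature", 25)   # below threshold: no notification
cm.publish("temperature", 33)   # triggers the subscription
print(cm.query("temperature"), alerts)  # -> 33 [33]
```

As the text notes, the subscription is itself defined by a query-like condition in the underlying implementation.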
Existing Research Prototypes and Systems {#chapter2:PRE}
========================================

In this section, we first present our evaluation framework, then briefly discuss some of the most significant projects and highlight their contributions. Later, in Section \[chapter2:LL\], we identify the lessons they offer for context-aware development in the IoT paradigm. The projects are discussed in the same order as in Table \[Tbl:Evaluation\_of\_Previous\_Research\_Efforts\]. Our taxonomy is summarized in Table \[Tbl:Summarized taxonmy\].

Evaluation Framework {#chapter2:CDLC:Evaluation of Surveyed Research Efforts}
--------------------

We used abbreviations as much as possible so that all 50 projects could be presented on a single page, which enables readers to analyse and identify positive and negative patterns that we have not explicitly discussed. In Table \[Tbl:Evaluation\_of\_Previous\_Research\_Efforts\], we use a dash ([–]{}) symbol across all columns to denote that the functionality is either missing or not mentioned in the available related publications. To increase readability, we have numbered the columns of Table \[Tbl:Evaluation\_of\_Previous\_Research\_Efforts\] to correspond to the taxonomy numbered below. Our taxonomy, and several other features that would add value in IoT solutions, are visually illustrated in Figure \[Fig:Taxonomy\_and\_Conceptual\_Framework\].

### **Project Name**

This is the name given to the project by the authors of the related publications. Most project names are abbreviations used to refer to the project. However, some projects do not have an explicit project name; in these cases we use a dash ([–]{}) symbol.

### **Citation**

We provide only one citation due to space limitations. Other citations are listed under each project's description and highlights in Section \[chapter2:PRE\].
### **Year**

Table \[Tbl:Evaluation\_of\_Previous\_Research\_Efforts\] is ordered chronologically (i.e. from oldest to newest) by year of publication.

### **Project Focus**

Based on our evaluation, each project focuses on building a system, a toolkit, or a middleware solution. The following abbreviations are used to denote the focus: system (S), toolkit (T), and middleware (M). Systems focus on developing an end-to-end solution involving the hardware, software, and application layers. A system cannot be used as middleware: it is designed to perform one or a few tasks, and building different functionalities on top of it is not an option. Systems are designed and developed for use by end users. Toolkits, by contrast, are not designed for end users; they are employed by system, application, and middleware developers. They provide very specific functionalities, are usually designed according to well-known design principles and standards, and are always released with documentation showing how to use them at the programming-code level. Middleware [@P064] can be described as a software layer that lies between the hardware and application layers. It provides reusable functionalities that applications require to meet complex customer requirements. Middleware solutions are usually built to address common issues in application development such as heterogeneity, interoperability, security, and dependability. A goal of middleware is to provide a set of programming abstractions that help software development where heterogeneous components need to be connected and to communicate. Middleware is designed to be used by application developers: the middleware handles most of the common functionality, leaving application developers more time and effort for application functionality.

### **Modelling**

This has been discussed in detail in Section \[chapter2:CAF:Context Modelling\].
We use the following abbreviations to denote the context modelling techniques employed by a project: key-value modelling (K), markup schemes (M), graphical modelling (G), object oriented modelling (Ob), logic-based modelling (L), and ontology-based modelling (On).

### **Reasoning**

This has been discussed in detail in Section \[chapter2:CAF:Context Reasoning Decision Models\]. We use the following abbreviations to denote the context reasoning techniques employed by a project: supervised learning (S), unsupervised learning (U), rules (R), fuzzy logic (F), ontology-based (O), and probabilistic reasoning (P). The symbol ($\checkmark$) is used where reasoning functionality is provided but the specific technique is not mentioned.

### **Distribution**

This has been discussed in detail in Section \[chapter2:CAF:Context Distribution\]. We use the following abbreviations to denote the context distribution techniques employed by a project: publish/subscribe (P) and query (Q).

### **Architecture**

Architecture varied widely from one solution to another. Architectures can be classified into different categories from different perspectives; there is no common classification scheme that fits all situations, so we consider the most significant architectural characteristics when classifying each solution. The different architectural styles are numbered as follows. (1) Component based architecture, where the entire solution is based on loosely coupled major components that interact with each other. For example, Context Toolkit [@P143] has three major components which perform the most critical functionalities of the system. (2) Distributed architecture, which enables peer-to-peer interaction in a distributed fashion, as in Solar [@P569]. (3) Service based architecture, where the entire solution consists of several services working together; however, individual access to each service may not be provided, as in Gaia [@P444].
(4) Node based architecture, which allows the deployment of pieces of software with similar or different capabilities that communicate and collectively process data in sensor networks [@P344]. (5) Centralised architecture, which acts as a complete stack (e.g. middleware) on top of which applications can be developed, but provides no communication between different instances of the solution. (6) Client-server architecture, which separates sensing and processing from each other, as in CaSP [@P317].

### **History and Storage**

Storing context history is critical [@P290] in both traditional context-aware computing and the IoT. Historic data allows sensor data to be better understood. Even though most IoT solutions and applications focus on real-time interaction, historic data has its own role to play: specifically, it allows user behaviours, preferences, patterns, trends, needs, and much more to be understood. On the other hand, due to the scale of the IoT, storing all context for the long term may not be feasible. However, storage devices are becoming ever more powerful and cheaper, so it comes down to a tradeoff between cost and understanding. The symbol ($\checkmark$) is used to denote that context history functionality is facilitated and employed by the project.

### **Knowledge Management**

This functionality is broader than the others. Most tasks performed by IoT middleware solutions require knowledge from different perspectives, such as knowledge about sensors, domains, users, activities, and much more. One of the most popular techniques for representing knowledge in context-aware computing is ontologies, though several other techniques, such as rules, are also available. Knowledge can be used for tasks such as automated configuration of sensors in IoT middleware, automatic sensor data annotation, reasoning, and event detection.
The symbol ($\checkmark$) is used to denote that knowledge management functionality is facilitated and employed by the project in some perspective.

### **Event Detection** {#chapter2:CDLC:ESRF:Event_Detection}

This is one of the most important functionalities in IoT solutions. The IoT envisions machine-to-machine (M2M) and machine-to-person communication, and most of these interactions are likely to occur based on an event. Events can refer to many things, such as an observable occurrence, a phenomenon, or an extraordinary occurrence. We define one or more conditions and identify an occurrence of an event once all the defined conditions are satisfied. In the IoT, sensors collect data and compare it with the conditions to decide whether the data satisfies them. The occurrence of an event is also called an *event trigger*. Once an event has been triggered, a notification or action may be executed. For example, detecting the current activity of a person, or detecting the meeting status of a room, can be considered events. Mostly, event detection needs to be done in real time; however, events such as trends may be detected using historic data. The symbol ($\checkmark$) is used to denote that event detection functionality is facilitated and employed by the project in some perspective.

### **Context Discovery and Annotation**

We use the following abbreviations to denote context discovery and annotation facilitated and employed by a project: context discovery (D) and context annotation (A). Context annotation allows context-related information and raw sensor data to be attached, modelled, and stored. Some of the most common and basic pieces of information that need to be captured in relation to context are the context type, context value, time stamp, source, and confidence. A context-aware geographical information retrieval approach [@P421] has proposed a mechanism to map raw sensor data to semantic ontologies using SWRL. This is critical in all types of systems.
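The basic context annotation described above (attaching type, value, time stamp, source, and confidence to a raw reading) can be sketched as follows. The field names, sensor identifier, and the humidity threshold are illustrative assumptions.

```python
# Sketch of context annotation: wrapping a raw sensor reading with the basic
# metadata listed above (type, value, time stamp, source, confidence).
import time

def annotate(raw_value, ctx_type, source, confidence):
    """Attach minimal context metadata to a raw sensor reading."""
    return {
        "type": ctx_type,
        "value": raw_value,
        "timestamp": time.time(),
        "source": source,
        "confidence": confidence,
    }

reading = annotate(80, "humidity_percent", "sensor-12", 0.9)
# A reasoner can now derive high-level context from the annotated reading.
high_level = ("raining"
              if reading["type"] == "humidity_percent" and reading["value"] >= 80
              else "dry")
print(high_level)  # -> raining
```

The annotation step is what lets later reasoning stages interpret the bare number 80 at all.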
Even though statistical reasoning systems can use raw sensor data directly, semantic mapping before reasoning allows more information to be extracted. Context information only becomes meaningful when it is interpreted with respect to the user, which can be achieved by knowledge-base integration and reasoning using ontologies. Another application is discussed in [@P420]. Ontologies and other context modelling techniques allow structured data to be made more meaningful by expressing the relationships among data items. End users in the IoT paradigm are more interested in high-level information than in low-level raw sensor data [@P285]. The following example illustrates the difference: "it is raining" (high-level information) can be derived from "humidity is 80%" (low-level sensor data). High-level sensor data can thus be described as semantic information, as it carries more meaning for end users. Challenges of semantic sensor webs are identified and discussed in [@P031]. This is the most common form of discovery.

### **Level of Context Awareness**

Context-awareness can be employed at two levels: the low (hardware) level and the high (software) level. At the hardware level, context-awareness is used to facilitate tasks such as efficient routing, modelling, reasoning, storage, and event detection (considering energy consumption and availability) [@P288]. At the hardware level, less data and knowledge are available for decision making. Further, sensors are resource-constrained devices, so complex processing cannot be performed at the hardware level. However, applying context-aware technologies at the hardware level saves resources, such as network communication costs, through preliminary filtering. The software level has access to a broader range of data and knowledge as well as more resources, which enables more complex reasoning to be performed.
We use the following abbreviations to denote the level of context awareness facilitated and employed by a project: high level (H) and low level (L).

### **Security and Privacy**

This is a major concern in context-aware computing across all paradigms, and the IoT paradigm will intensify the challenges in security and privacy. In the IoT, sensors are expected to collect more information about users (i.e. people) in every respect, including both physical and conceptual data such as location, preferences, calendar data, and medical information, to name a few. As a result, utmost care needs to be taken when collecting, modelling, reasoning over, and persistently storing such data. Security and privacy need to be handled at different levels in the IoT. At the lowest level, the hardware layer should ensure security and privacy during collection and temporary storage within the device. Secure protocols need to ensure that communication is well protected. Once the data is received, application-level protection needs to be in place to monitor and control who can see or use context, and so on. Different projects use different techniques, such as policies, rules, and profiles, to provide security and privacy. The symbol ($\checkmark$) denotes the presence of security- and privacy-related functionality in the project, in some form.

### **Data Source Support**

There are different sources that are capable of providing context; broadly, we call them *sensors*. We discussed different types of sensors in Section \[chapter2:CAF\]. Based on the popularity of the data sources supported by each solution, we selected the following classification: (P) denotes that the solution supports only physical sensors; (S) denotes that the solution supports software sensors, i.e. virtual sensors, logical sensors, or both; (A) denotes that the solution supports all kinds of data sources (i.e. physical, virtual, and logical); (M) denotes that the solution supports mobile devices.
### **Quality of Context** {#chapter2:CDLC:ESRF:QCRV}

We denote the presence of conflict resolution functionality using (C) and context validation functionality using (V). Conflict resolution is critical in the context management domain [@P310]: there has to be consistency in collecting, aggregating, modelling, and reasoning. In the IoT paradigm, context may not be accurate. There are two reasons why context information may be uncertain. First, sensor technology cannot produce 100% accurate sensor data, due to various technical and environmental challenges. Second, even with sensors that produce 100% accurate data, reasoning models are not 100% accurate. In summary, problems in sensor technology and problems in reasoning techniques both contribute to context conflicts. Two types of context conflict can occur, as defined in [@P310]:

Internal context conflict: Fusing two or more context elements that characterise the situation of the same observed entity from different dimensions at a given moment may lead to internal context conflict. (e.g. a motion sensor detects that a user is in the kitchen while the calendar shows that the user is supposed to be in a meeting; the current location cannot be correctly deduced by fusing the two data sources, calendar and motion sensor.)

External context conflict: The conflict/inconsistency that may occur between two or more pieces of context that describe the situation of an observed entity from the same point of view. (e.g. two motion sensors located in the same area provide two completely different readings: one sensor detects one person while the other detects three people.)

Context validation ensures that collected data is correct and meaningful. Possible validations include checks for range, limit, logic, data type, cross-system consistency, uniqueness, cardinality, consistency, data source quality, security, and privacy.
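Two of the checks listed above (range and data-type validation) and the external-conflict case from the motion-sensor example can be sketched as follows. The thresholds, tolerance, and function names are illustrative assumptions.

```python
# Sketch of context validation (range and data-type checks) and of detecting
# an external context conflict between two sensors. Values are invented.

def validate(ctx_type, value, expected_type, valid_range):
    """Return a list of failed checks for a single context value."""
    errors = []
    if not isinstance(value, expected_type):
        errors.append("type")
    elif not (valid_range[0] <= value <= valid_range[1]):
        errors.append("range")
    return errors

def detect_external_conflict(readings, tolerance):
    """Flag an external context conflict: sensors observing the same entity
    from the same point of view disagree beyond a tolerance."""
    return max(readings) - min(readings) > tolerance

print(validate("humidity", 180, (int, float), (0, 100)))  # -> ['range']
# Two co-located motion sensors: one counts 1 person, the other counts 3.
print(detect_external_conflict([1, 3], tolerance=1))      # -> True
```

Resolving the flagged conflict (e.g. by trusting the more confident source) is then the job of the conflict resolution functionality denoted by (C).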
[ c l m[11cm]{} ]{} & Taxonomy & Description\
5 & Modelling & Key-value modelling (K), Markup schemes (M), Graphical modelling (G), Object oriented modelling (Ob), Logic-based modelling (L), and Ontology-based modelling (On)\
6 & Reasoning & Supervised learning (S), Un-supervised learning (U), Rules (R), Fuzzy logic (F), Ontology-based (O), and Probabilistic reasoning (P)\
7 & Distribution & Publish/subscribe (P) and Query (Q)\
8 & Architecture & Component based architecture (1), Distributed architecture (2), Service based architecture (3), Node based architecture (4), Centralised architecture (5), Client-server architecture (6)\
9 & History and Storage & Available ($\checkmark$)\
10 & Knowledge Management & Available ($\checkmark$)\
11 & Event Detection & Available ($\checkmark$)\
12 & Context Discovery and Annotation & Context discovery (D) and context annotation (A)\
13 & Level of Context Awareness & High level (H) and Low level (L)\
14 & Security and Privacy & Available ($\checkmark$)\
15 & Data Source Support & Physical sensors (P), Software sensors (S), Mobile devices (M), Any type of sensor (A)\
16 & Quality of Context & Conflict resolution (C), Context validation (V)\
17 & Data Processing & Aggregate (A), Filter (F)\
18 & Dynamic Composition & Available ($\checkmark$)\
19 & Real Time Processing & Available ($\checkmark$)\
20 & Registry Maintenance & Available ($\checkmark$)\
\[Tbl:Summarized taxonmy\]

### **Data Processing**

We denote the presence of context aggregation functionality using (A) and context filter functionality using (F). Aggregation can be explained in different ways; for example, Context Toolkit [@P143] has a dedicated component called the context aggregator, which collects data related to a specific entity (e.g. a person) from different context sources and acts as a proxy to context applications. It does not perform any complex operations; it just collects similar information together. This is one of the simplest forms of context aggregation.
Context filter functionality ensures that the reasoning engine processes only important data. Especially in the IoT, processing all the data collected by all the sensors is not possible due to scale; IoT solutions should therefore process only the selected amount of data that allows context to be understood accurately. Filtering functionality appears in different solutions in different forms: filtering data, filtering context sources, or filtering events. Filtering helps at both the low (hardware) level and the software level. At the hardware level, it reduces network communication costs by transmitting only important data; at the high level, it saves processing energy by processing only important data. Context processing can be classified into three categories (also called layers) [@P185], with the typical methods and techniques used in each layer as follows:

*Activity and context recognition layer*: feature extraction, classification, clustering, fuzzy rules

*Context and representation layer*: conceptual models, logic programming, ontology based representation and reasoning, databases and query languages, rule based representation and reasoning, case based representation and reasoning, representing uncertainty, procedural programming

*Application and adaptation layer*: rules, query languages, procedural programming

Data fusion, which is also considered a data processing technique, is critical in understanding sensor data. In order to lay a solid foundation for our discussion, we adopt the definition of sensor data fusion provided by Hall and Llinas [@P248]: *“Sensor data fusion is a method of combining sensor data from multiple sensors to produce more accurate, more complete, and more dependable information that could not be possible to achieve through a single sensor [@P248].”* For example, in positioning, GPS does not work indoors, but a variety of indoor positioning schemes can be used instead.
Therefore, in order to track position continuously, regardless of whether the user is indoors or outdoors, sensor data fusion is essential [@P115]. Data fusion methods, models, and classification techniques in the wireless sensor network domain are comprehensively surveyed in [@P130]. In order to identify context, data from different sources can be combined. For example, consider a situation where we want to identify the location of a user. Possible sources of evidence about the location include GPS sensors, motion sensors, the calendar, email, social networking services, chat clients, ambient sound (sound level, pattern), nearby users, camera sensors, etc. This long list shows the possible alternatives. There is always a tradeoff between the required resources (e.g. processing power, response time) and accuracy: processing and combining all of the above sensor readings would produce a more accurate result, but would require more resources and time. There is a significant gap between low-level sensor readings and high-level ‘situation awareness’ [@P287]. Collecting low-level sensor data is becoming significantly easier and cheaper than ever due to advances in sensing technology. As a result, enormous amounts of sensor data (e.g. big data [@ZMP003]) are available. In order to understand big data, a variety of reasoning techniques need to be employed, as discussed in Section \[chapter2:CAF:Context Reasoning Decision Models\].

### **Dynamic Composition**

As explained in Solar [@P569], IoT solutions must have a programming model that allows dynamic composition without requiring the developer or user to identify specific sensors and devices. Dynamic organisation is critical in environments like the IoT, because it is impossible to identify or plan all possible interactions at the development stage.
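The location example above (fusing GPS with other positioning evidence) can be sketched with one of the simplest fusion schemes, an inverse-variance weighted average. The source names, accuracies, and one-dimensional setting are invented assumptions, not a method from the cited surveys.

```python
# Sketch of simple sensor data fusion: combining position estimates from
# several sources, weighted by confidence (inverse variance). All source
# names and accuracy figures below are invented for illustration.

def fuse(estimates):
    """estimates: list of (value, variance). Returns the inverse-variance
    weighted mean, i.e. more accurate sources count for more."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, estimates)) / total

# One-dimensional position (metres along a corridor) from three sources:
gps    = (12.0, 25.0)  # GPS: poor indoors, large variance
wifi   = (10.0, 4.0)   # Wi-Fi fingerprinting: best accuracy here
motion = (11.0, 9.0)   # dead reckoning from motion sensors
fused = fuse([gps, wifi, motion])
print(round(fused, 2))  # -> 10.48, pulled towards the most reliable source
```

This illustrates the tradeoff discussed above: each extra source costs processing but tightens the combined estimate.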
Software solutions should be able to understand the requirements and demands of each situation, then organise and structure their internal components accordingly. Components such as reasoning models, data fusion operators, knowledge bases, and context discovery components can be dynamically composed according to need. The symbol ($\checkmark$) denotes the presence of dynamic composition functionality in the project in some form. \[chapter2:CDLC:ESRE:Dynamic\_Composition\]

\[t!\] [ m[1.9cm]{} c m[0.25cm]{} c c c c c c c c c c c c c c c c c ]{}
Project Name & Citations & Year & Project Focus & Modelling & Reasoning & Distribution & Architecture & History and Storage & Knowledge Management & Event Detection & Context Discovery and Annotation & Level of Context Awareness & Security and Privacy & Data Source Support & Quality of Context & Data Processing & Dynamic Composition & Real Time Processing & Registry Maintenance \
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) & (15) & (16) & (17) & (18) & (19) & (20)\
Context Toolkit & [@P143] & 2001 & T & K & $\checkmark$ & Q & 1,5 & $\checkmark$ & [–]{} & [–]{} & [–]{} & H & [–]{} & A & [–]{} & A & [–]{} & [–]{} & [–]{}\
Solar & [@P569] & 2002 & M & K,M,Ob & R & P & 2 & [–]{} & [–]{} & $\checkmark$ & D & H & $\checkmark$ & P & $\checkmark$ & A & $\checkmark$ & [–]{} & [–]{}\
Aura & [@P555] & 2002 & M & M & R & P & 2 & [–]{} & [–]{} & $\checkmark$ & D & H & [–]{} & A & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
CoOL & [@P190] & 2003 & T & On & R,O & Q & 1 & [–]{} & $\checkmark$ & $\checkmark$ & D & H & [–]{} & S & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
CARISMA & [@P386] & 2003 & M & M & R & Q & 2 & [–]{} & [–]{} & [–]{} & [–]{} & H & [–]{} & M & C & [–]{} & [–]{} & [–]{} & [–]{}\
CoBrA & [@P419] & 2004 & M & On & R,O & Q & 1 & $\checkmark$ & $\checkmark$ & $\checkmark$ & [–]{} & H & $\checkmark$ & A & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
Gaia & [@P444] & 2004 & M & F,On & S,P,F & Q & 2,3 & $\checkmark$ & $\checkmark$ & $\checkmark$ & D & H & $\checkmark$ & A & [–]{} & [–]{} & $\checkmark$ & [–]{} & $\checkmark$\
SOCAM & [@P570] & 2004 & M & On & R,O & Q,P & 3 & $\checkmark$ & $\checkmark$ & $\checkmark$ & D & H & [–]{} & A & [–]{} & A & [–]{} & [–]{} & $\checkmark$\
CARS & [@P311] & 2005 & S & K & U & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$ & A & H & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
CASN & [@P288] & 2005 & M & F,On & F,O & P & 2 & [–]{} & $\checkmark$ & [–]{} & D & L & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
SCK & [@P332] & 2005 & M & M,On & R,O & Q & 1 & $\checkmark$ & $\checkmark$ & $\checkmark$ & A,D & H & [–]{} & A & V & [–]{} & [–]{} & [–]{} & $\checkmark$\
TRAILBLAZER & [@P305] & 2005 & S & K & R & Q & 2 & [–]{} & [–]{} & [–]{} & D & L & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
BIONETS & [@P316] & 2006 & M & On & R,O & Q & 1 & [–]{} & $\checkmark$ & [–]{} & A & H & [–]{} & A & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
PROCON & [@P278] & 2006 & S & K & R & Q & 2 & [–]{} & [–]{} & $\checkmark$ & D & L & [–]{} & P & [–]{} & A,F & [–]{} & [–]{} & [–]{}\
CMF (MAGNET) & [@P344] & 2006 & M & M & R & P,Q & 2,4 & $\checkmark$ & [–]{} & [–]{} & D & H & [–]{} & A & C & [–]{} & $\checkmark$ & [–]{} & [–]{}\
e-SENSE & [@P266] & 2006 & M & [–]{} & R & Q & [2,4]{} & [–]{} & $\checkmark$ & [–]{} & D & H & $\checkmark$ & P & [–]{} & F & [–]{} & [–]{} & [–]{}\
HCoM & [@P336] & 2007 & M & G,On & R,O & Q & 5 & $\checkmark$ & $\checkmark$ & [–]{} & D & H & [–]{} & S & V & F & [–]{} & [–]{} & $\checkmark$\
CMS & [@P340] & 2007 & M & On & O & P,Q & [1,2]{} & $\checkmark$ & [–]{} & $\checkmark$ & S & H & [–]{} & A & [–]{} & A & [–]{} & [–]{} & $\checkmark$\
MoCA & [@P338] & 2007 & M & M,Ob & O & P,Q & [4,5]{} & [–]{} & [–]{} & $\checkmark$ & D & H & $\checkmark$ & A & V & [–]{} & [–]{} & $\checkmark$ & $\checkmark$\
CaSP & [@P317] & 2007 & M & M,On & O & P,Q & 6 & $\checkmark$ & [–]{} & [–]{} & D & H & [–]{} & A & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
SIM & [@P349] & 2007 & M & K,G & R & [–]{} & 2 & $\checkmark$ & [–]{} & [–]{} & [–]{} & H & [–]{} & P & C & A & [–]{} & [–]{} & [–]{}\
— & [@P335] & 2007 & M & On & O & Q & [ ]{} & [–]{} & [–]{} & $\checkmark$ & D & H & [–]{} & P & V & A & [–]{} & [–]{} & [–]{}\
COSMOS & [@P403] & 2008 & M & Ob & R & Q & [2,4]{} & [–]{} & [–]{} & $\checkmark$ & [–]{} & H & [–]{} & P & [–]{} & A & $\checkmark$ & [–]{} & $\checkmark$\
DMS-CA & [@P308] & 2008 & S & M & R & Q & 5 & [–]{} & [–]{} & $\checkmark$ & [–]{} & H & [–]{} & A & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
CDMS & [@P293] & 2008 & M & K,M & R & Q & 2 & $\checkmark$ & [–]{} & $\checkmark$ & D & H & [–]{} & A & [–]{} & A,F & [–]{} & [–]{} & $\checkmark$\
— & [@P197] & 2008 & M & On & O,P & Q & 5 & [–]{} & $\checkmark$ & [–]{} & D & H & [–]{} & [–]{} & V & [–]{} & [–]{} & [–]{} & [–]{}\
— & [@P333] & 2008 & M & On & R,O & P,Q & 5 & [–]{} & [–]{} & $\checkmark$ & D & H & [–]{} & P & [–]{} & A & [–]{} & [–]{} & [–]{}\
AcoMS & [@P339] & 2008 & M & M,G,On & R,O & P & 5 & [–]{} & $\checkmark$ & $\checkmark$ & A & H & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
CROCO & [@P334] & 2008 & M & On & R,O & Q & [ ]{} & $\checkmark$ & $\checkmark$ & [–]{} & D & H & $\checkmark$ & A & C,V & [–]{} & [–]{} & [–]{} & $\checkmark$\
EmoCASN & [@P274] & 2008 & S & K & R & Q & [2,4]{} & [–]{} & [–]{} & [–]{} & D & L & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
Hydra & [@P105] & 2009 & M & K,On,Ob & R,O & Q & 3 & $\checkmark$ & $\checkmark$ & $\checkmark$ & [–]{} & H & $\checkmark$ & P & V & [–]{} & [–]{} & [–]{} & [–]{}\
UPnP & [@P300] & 2009 & M & K,M & R & Q & 4 & $\checkmark$ & [–]{} & $\checkmark$ & D & H & $\checkmark$ & A & [–]{} & A & $\checkmark$ & [–]{} & $\checkmark$\
COSAR & [@P187] & 2009 & M & On & S,O & Q & 5 & [–]{} & $\checkmark$ & $\checkmark$ & A & H & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
SPBCA & [@P420] & 2009 & M & On & R,O & Q & 2 & [–]{} & [–]{} & $\checkmark$ & A & H & $\checkmark$ & A & [–]{} & [–]{} & [–]{} & [–]{} & [–]{}\
C-CAST & [@P280] & 2009 & M & M & R & P,Q & 5 & $\checkmark$ & [–]{} & $\checkmark$ & D & H & [–]{} & A & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
— & [@P312] & 2009 & M & On & O & P & 5 & $\checkmark$ & [–]{} & $\checkmark$ & D & H & [–]{} & A & [–]{} & A & [–]{} & [–]{} & [–]{}\
CDA & [@P341] & 2009 & M & Ob & [–]{} & Q & [4,6]{} & [–]{} & [–]{} & [–]{} & [–]{} & H & [–]{} & V & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
SALES & [@P314] & 2009 & M & M & R & Q & [2,4]{} & [–]{} & [–]{} & $\checkmark$ & D & L & [–]{} & P & [–]{} & F & [–]{} & [–]{} & $\checkmark$\
MidSen & [@P275] & 2009 & M & K & R & P,Q & 5 & [–]{} & $\checkmark$ & $\checkmark$ & D & H & [–]{} & P & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
SCONSTREAM & [@P309] & 2010 & S & G & R & Q & 5 & $\checkmark$ & [–]{} & $\checkmark$ & [–]{} & H & [–]{} & P & [–]{} & [–]{} & [–]{} & $\checkmark$ & [–]{}\
— & [@P328] & 2010 & M & M & P & Q & [2,4]{} & $\checkmark$ & [–]{} & $\checkmark$ & [–]{} & H & [–]{} & A & [–]{} & F & $\checkmark$ & [–]{} & [–]{}\
Feel@Home & [@P346] & 2010 & M & G,On & O & P,Q & [2,4]{} & [–]{} & $\checkmark$ & $\checkmark$ & [–]{} & H & $\checkmark$ & A & [–]{} & [–]{} & [–]{} & [–]{} & $\checkmark$\
CoMiHoC & [@P347] & 2010 & M & Ob & R,P & Q & 5 & [–]{} & $\checkmark$ & $\checkmark$ & D & H & [–]{} & A & V & [–]{} & [–]{} & [–]{} & [–]{}\
Intelligibility & [@P384] & 2010 & T & [–]{} & R,S,P & Q & [1,5]{} & [–]{} & [–]{} & $\checkmark$ & D & H & [–]{} & A & V & [–]{} & [–]{} & [–]{} & [–]{}\
ezContext & [@P294] & 2010 & M & K,Ob & R & Q & 5 & $\checkmark$ & $\checkmark$ & $\checkmark$ & [–]{} & H & [–]{} & A & [–]{} & A & [–]{} & [–]{} & $\checkmark$\
UbiQuSE & [@P322] & 2010 & M & M & R & Q & 5 & $\checkmark$ & [–]{} & $\checkmark$ & D,A & H & [–]{} & A & [–]{} & [–]{} & [–]{} & $\checkmark$ & [–]{}\
COPAL & [@P571] & 2010 & M & M & R & P,Q & [1,5]{} & [–]{} & [–]{} & $\checkmark$ & D & H & $\checkmark$ & & V & A,F & [–]{} & $\checkmark$ & $\checkmark$\
Octopus & [@P285] & 2011 & S & $\checkmark$ & $\checkmark$ & P & [2,4]{} & [–]{} & [–]{} & $\checkmark$ & D & H & [–]{} & A & [–]{} & A & $\checkmark$ & [–]{} &
[–]{}\ — & [@P327] & 2011 & M & [–]{}& $\checkmark$ & P & 2 & [–]{}& [–]{}& [–]{}& D & H & [–]{}& P & [–]{}& A & [–]{}& [–]{}& $\checkmark$\ — & [@P289] & 2011 & S & K,Ob & S,P & & [2,4]{} & $\checkmark$ & $\checkmark$ & $\checkmark$ & D,A & H & [–]{}& M & V & A,F & [–]{}& [–]{}& $\checkmark$\ ### **Real Time Processing** Most of the interactions are expected to be processed in real time in the IoT. This functionality has been rarely addressed by the research community in the context-aware computing domain. The most important real time processing task is event detection as we explained in Section \[chapter2:CDLC:ESRF:Event\_Detection\]. However, context reasoning, and query processing can also be considered as essential real time processing tasks. Real time processing solutions are focused on processing faster than traditional methods, which allows sensor stream data processing [@P309]. The symbol ($\checkmark$) denoted the presence of real time processing functionality in some form. ### **Registry Maintenance and Lookup Services** We use the ($\checkmark$) symbol to denote the presence of registry maintenance and lookup services functionality in the project. This functionality allows different components such as context sources, data fusion operators, knowledge bases, and context consumers to be registered. This functionality is also closely related to dynamic composition where it needs to select relevant and matching components to be composed together. Registries need to be updated to reflect (dis)appearing components. Evaluation of Research Efforts {#chapter2:CDLC:Evaluation of Surveyed Research Efforts2 } ------------------------------ **Context Toolkit** [@P143] aims to facilitating development and deployment of context-aware applications. This is one of the earliest efforts of providing framework support for context-aware application development. 
Context Toolkit provides a combination of features and abstractions to support context-aware application developers. It introduces three main abstractions: context widgets (to retrieve data from sensors), context interpreters (to reason about sensor data using different reasoning techniques), and context aggregators. The research around Context Toolkit is still active, and a number of extensions have been developed to enhance its context-aware capabilities. Enactor [@P393] adds a context decision modelling facility to the Context Toolkit. Further, the Intelligibility Toolkit [@P384] extends the Enactor framework by supporting more decision models for context reasoning. Context Toolkit identifies the common features required by context-aware applications as capture and access of context, storage, distribution, and independent execution from applications. **Aura** [@P555] is a task-oriented system with a distributed architecture that focuses on the different computational devices human users interact with every day. The objective is to run a set of applications, called a *personal aura*, on all devices in order to manage user tasks in a context-aware fashion smoothly across devices. Aura addresses two major challenges. First, it allows a user to preserve continuity in his/her work when moving between different environments. Second, it is capable of adapting to the on-going computation of a particular environment in the presence of dynamic resource variability. Aura consists of four major components: context observer (collects context and sends it to the task and environment managers), task manager (also called prism; it handles four different kinds of change: the user moving to another environment, and changes in the environment, the task, and the context), environment manager (handles context suppliers and related services), and context suppliers (provide context information). XML-based markup schemes are used to describe services.
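The widget/interpreter/aggregator pipeline that Context Toolkit introduced can be sketched as follows. This is a minimal illustration of the idea only; the class and method names are hypothetical, not the actual Context Toolkit API (which is Java-based).

```python
# Sketch of Context Toolkit-style abstractions (hypothetical names, not the
# real API): widgets acquire raw data, interpreters derive higher-level
# context, and aggregators collect context relating to a single entity.

class ContextWidget:
    """Wraps a sensor and hides acquisition details from applications."""
    def __init__(self, read_sensor):
        self.read_sensor = read_sensor

    def poll(self):
        return self.read_sensor()

class ContextInterpreter:
    """Turns low-level readings into higher-level context."""
    def interpret(self, reading):
        return "occupied" if reading["motion"] else "empty"

class ContextAggregator:
    """Gathers all context relating to one entity (e.g. a room)."""
    def __init__(self, entity, widgets, interpreter):
        self.entity = entity
        self.widgets = widgets
        self.interpreter = interpreter

    def current_context(self):
        readings = [w.poll() for w in self.widgets]
        return {self.entity: [self.interpreter.interpret(r) for r in readings]}

# A simulated motion sensor stands in for real hardware.
motion_widget = ContextWidget(lambda: {"motion": True})
room = ContextAggregator("meeting_room", [motion_widget], ContextInterpreter())
print(room.current_context())  # {'meeting_room': ['occupied']}
```

The point of the separation is that an application only talks to the aggregator; swapping the sensor or the reasoning technique touches neither the application nor the other components.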
**CARISMA** [@P386] (Context-Aware Reflective middleware System for Mobile Applications) is focused on mobile systems, which are extremely dynamic. Adaptation (also called reflection) is the main focus of CARISMA. Context is stored as application profiles (XML based), which allows each application to maintain meta-data under two categories: passive and active. The passive category defines actions that the middleware should take when specific events occur, using rules such as shutting down if the battery is low. However, conflicts can arise when two profiles define rules that conflict with each other. The active category allows relationships to be maintained between the services used by the application, the policies, and context configurations. This information tells the middleware how to behave under different environmental and user conditions. CARISMA also introduces a conflict resolution mechanism based on microeconomic techniques. An auction protocol is used to handle the resolution, as auctions support greater degrees of heterogeneity than other alternatives. In simple terms, rules are used in auctions with different constraints imposed on the bidding by different agents (i.e. applications). Final decisions are made so as to maximise the social welfare among the agents. **CoBrA** [@P419] (Context Broker Architecture) is a broker-centric agent architecture that provides knowledge sharing and context reasoning for smart spaces. It is specially focused on smart meeting places. CoBrA addresses two major issues: supporting resource-limited mobile computing devices and addressing concerns over user privacy. Context information is modelled using OWL ontologies. Context brokers are the main elements of CoBrA.
A context broker comprises four functional components: a context knowledge base (provides persistent storage for context information), a context reasoning engine (performs reasoning over the stored context information), a context acquisition module (retrieves context from context sources), and a policy management module (manages policies, such as who has access to which data). Even though the architecture is centralised, several brokers can work together through a broker federation. Context knowledge is represented as Resource Description Framework (RDF) triples using Jena. **Gaia** [@P444] is a distributed context infrastructure that supports uncertainty-based reasoning. Ontologies are used to represent context information, and Gaia employs a Prolog-based probabilistic reasoning framework. The architecture of Gaia consists of six key components: context provider (data acquisition from sensors or other data sources), context consumer (the different parties interested in context), context synthesiser (generates high-level context information from raw low-level context), context provider lookup service (maintains a detailed registry of context providers so that appropriate providers can be found based on their capabilities when required), context history service (stores the history of context), and ontology server (maintains the different ontologies). **SOCAM** [@P570] (Service Oriented Context-Aware Middleware) is an ontology-based context-aware middleware. It separates its ontologies into two levels: an upper-level ontology for general concepts and lower-level ontologies for domain-specific descriptions.
The SOCAM architecture comprises several key components: context providers (acquire data from sensors and other internal and external data sources and convert the context into an OWL representation), a context interpreter (performs reasoning using a reasoning engine and stores the processed context information in the knowledge base), context-aware services (context consumers), and a service locating service (context providers and the interpreter register themselves here so that other components can search for appropriate providers and interpreters based on their capabilities). **e-SENSE** [@P266] enables ambient intelligence using wireless multi-sensor networks to make context-rich information available to applications and services. e-SENSE combines body sensor networks (BSN), object sensor networks (OSN), and environment sensor networks (ESN) to capture context in the IoT paradigm. It identifies the features required by context-aware IoT middleware solutions as sensor data capturing, data pre-filtering, context abstraction, data source integration, context extraction, a rule engine, and adaptation. **HCoM** [@P336] (Hybrid Context Management) is a hybrid approach which combines semantic ontologies and relational schemas. This approach argues that standard database management systems alone cannot be used to manage context, while semantic ontologies alone may not perform well in terms of efficiency and query processing with large volumes of data; hence a hybrid approach is required. The HCoM architecture consists of five layers: acquisition layer, pre-processing layer, data modelling and storage layer, management modelling layer, and utilising layer.
HCoM encapsulates the key requirements of a context management solution in several components: context manager (aggregates results and sends the data to the reasoning engine), collaboration manager (if the context selector decides the existing context information is insufficient for reasoning, it attempts to gather more data from other possible context sources), context filter (validates incoming context and decides whether it needs to be stored in the RCDB), context selector (based on the user request, decides which context should be used in reasoning, considering accuracy, time, and the required computational resources), context-onto (manages the ontologies and acts as a repository), rules and policy (users are allowed to add rules to the system), RCDB (stores the captured context in a standard database management system), rule-mining (a database of rules specifying which actions to perform and when), and interfaces (provide interfaces to the context consumers). **MoCA** [@P338] is a service-based distributed middleware for modelling and managing context. Its primary conceptual component is the context domain, and the context management node (CMN) is the infrastructure responsible for managing a context domain. Similar to most other context management solutions, the three key components in MoCA are: context providers (responsible for generating or retrieving context from other sources and making it available to the context management system), context consumers (consume the context gathered and processed by the system), and the context service (responsible for receiving, storing, and disseminating context information). MoCA uses an object-oriented model for context handling instead of an ontology-based model, due to the weaknesses of ontologies in terms of scalability and performance. XML is used to model context.
The XML files are fed into a context tool for validation, after which program code to acquire the data is generated automatically. This generated code acquires context and inserts the data into context repositories. **CaSP** [@P317] (Context-aware Service Platform) is a context gathering framework for mobile solutions based on a middleware architecture. The platform provides several functionalities: context sensing, context modelling, context association, and context storage and retrieval. The paper also provides a comprehensive evaluation of existing context sensing solutions. CaSP consists of typical context management components which handle the mentioned functionalities. **SIM** [@P349] (Sensor Information Management) is focused on the smart home domain and addresses location tracking. SIM uses an agent-based architecture following the standard specifications of the Foundation for Intelligent Physical Agents. Its emphasis is on collecting sensor data from multiple sources and aggregating it to analyse and derive more accurate information. SIM collects two types of information: node level and attribute level. At the node level, the node ID, location, and priority are collected. Attributes are stored in an attribute information base comprising each attribute and its corresponding measurement. A location tracking algorithm has been introduced using a mobile positioning device, with a position manager handling the tracking. SIM can resolve conflicts in sensor information based on sensor priority; conflict resolution is handled by a context manager with the help of aggregation, classification, and decision components. Even though SIM is not focused on hardware-level context management, its approach is closer to the low level than the high level compared to other projects. **COSMOS** [@P403] is middleware that enables the processing of context information in ubiquitous environments.
COSMOS consists of three layers: context collector (collects information from the sensors), context processing (derives high-level information from raw sensor data), and context adaptation (provides applications with access to the processed context). In contrast to other context solutions, the components of COSMOS are context nodes: each piece of context information is defined as a context node, and COSMOS can support any number of context nodes organised into hierarchies. A context node is an independently operated module that consists of its own activity manager, context processor, context reasoner, context configurator, and message managers. COSMOS therefore follows a distributed architecture, which increases the scalability of the middleware. **DMS-CA** [@P308] (Data Management System-Context Architecture) is based on the smart building domain. XML is used to define rules, contexts, and services, and an event-driven rule checking technique is used to reason about context. Rules can be configured on mobile devices and pushed to the server to be used by the rule checking engine. Providing a mobile interface to build rules and queries is important in a dynamic and mobile environment such as the IoT. **ACoMS** [@P339] (Autonomic Context Management System) can dynamically configure and reconfigure its context information acquisition and pre-processing functionality to perform fault-tolerant provisioning of context information. The ACoMS architecture comprises an application context subscription manager (manages context information requests from applications via a subscription mechanism), a context source manager (performs actions such as low-level communication with context sources, context source discovery, registration, and configuration), and a reconfiguration manager (performs monitoring tasks such as mapping context sources to context information).
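The event-driven rule checking used by DMS-CA (and, later in this section, the Event-Condition-Action rules of MidSen) follows one common pattern: on each incoming event, evaluate the conditions of all registered rules and fire the actions of those that match. A minimal sketch of that pattern follows; the rule, event, and action contents are invented for illustration.

```python
# Sketch of an Event-Condition-Action (ECA) rule engine of the kind used by
# DMS-CA and MidSen. The specific rules and payloads here are illustrative.

class Rule:
    def __init__(self, event_type, condition, action):
        self.event_type = event_type
        self.condition = condition   # predicate over the event payload
        self.action = action         # callable fired when the condition holds

class RuleEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule):
        self.rules.append(rule)

    def on_event(self, event_type, payload):
        """Evaluate every registered rule against the incoming event."""
        fired = []
        for rule in self.rules:
            if rule.event_type == event_type and rule.condition(payload):
                fired.append(rule.action(payload))
        return fired

engine = RuleEngine()
engine.register(Rule(
    "temperature",
    lambda p: p["celsius"] > 28,              # condition
    lambda p: f"cooling on in {p['room']}",   # action
))
print(engine.on_event("temperature", {"room": "lab", "celsius": 31}))
# ['cooling on in lab']
```

Because rules are data rather than code paths, they can be added from a mobile interface and pushed to the server at runtime, which is exactly the property DMS-CA exploits.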
**CROCO** [@P334] (CROss application COntext management) is an ontology-based context modelling and management service. CROCO identifies several requirements for being cross-application, such as an application plug-in capability. CROCO's three responsibilities are distributed among three separate layers: data management (performs operations such as storing inferred data for historic use and developing and maintaining the fact database), consistency checking and reasoning (a consistency manager checks consistency, such as data types and cardinality, when sensor data arrives and before it is fed into reasoning or storage; a reasoning manager performs reasoning based on the facts stored in the fact database), and context update and provision (allows context consumers to register themselves, retrieves context from context sources, and provides a query interface to the consumers). **EmoCASN** [@P274] (Environment Monitoring Oriented Context Aware Sensor Networks) proposes a context-aware model for sensor networks (CASN). This modelling approach is narrowly focused on managing sensor networks using low-level context such as node context, task context, and data context. For example, CASN uses low-level context such as the remaining energy of a node, the location of a sensor, and the orientation of a sensor to decide energy-efficient routing. **Hydra**[^6] [@P105] is an IoT middleware that aims to integrate wireless devices and sensors into ambient intelligence systems. Hydra comprises a Context Aware Framework (CAF). CAF provides the capabilities of both high-level, powerful reasoning based on the use of ontologies, and lower-level semantic processing based on an object-oriented/key-value approach. CAF consists of two main components: the Data Acquisition Component (DAqC) and the Context Manager (CM). DAqC is responsible for connecting to and retrieving data from sensors. CM is responsible for context management, context awareness, and context interpretation.
A rule engine, the Drools platform [@P395], is employed as the core context reasoning mechanism. CAF models three distinct types of context: device contexts (e.g. data source), semantic contexts (e.g. location, environment, and entity), and application contexts (e.g. domain specific). Hydra identifies a context reasoning rule engine, context storage, context querying, and event/action management as the key components of a context-aware framework. **C-Cast** [@P280] is middleware that integrates WSN into context-aware systems by addressing context acquisition, dissemination, and representation, and recognising and reasoning about context and situations. C-Cast lays its architecture over four layers: sensor, context detection, context acquisition, and application. In C-Cast, context providers (CPs) are the main components. Each context provider handles one task; for example, WeatherCP collects weather information and Address-bookCP collects related addresses. Any number of CPs can be added to the system to extend its system-wide functionality. Each context provider independently handles data acquisition, context processing (e.g. filtering and aggregating context), context provider management (e.g. handling subscriptions), and context access and dissemination (e.g. handling queries). C-Cast claims that complex reasoning and intuitive reasoning can only be achieved by using rich representation models; at the same time, it avoids using ontologies to model context, claiming that ontologies are too resource intensive. **SALES** [@P314] (Scalable context-Aware middleware for mobiLe EnviromentS) is a context-aware middleware that achieves scalability in context dissemination. The main components of this middleware are nodes. These nodes are not sensor nodes but servers, computers, laptops, PDAs, and mobile phones. SALES consists of four types of nodes, and XML schemes are used to store and transfer context. **MidSen** [@P275] is a context-aware middleware for WSN.
The system is based on Event-Condition-Action (ECA) rules. It highlights the importance of efficient event detection by proposing two algorithms: an event detection algorithm (EDA) and a context-aware service discovery algorithm (CASDA). MidSen proposes a complete architecture for enabling context awareness in WSN, consisting of the following key components: knowledge manager, application notifiers, knowledge base, inference engine, working memory, application interface, and network interface. **Feel@Home** [@P346] is a context management framework that supports interaction between different domains. The proposed approach is demonstrated using three domains: smart home, smart office, and mobile. Context information is stored using OWL [@P148]. Feel@Home supports two kinds of interaction: intra-domain and cross-domain. Cross-domain interaction is essential in the IoT paradigm, and it marks one of the major differences between sensor networks and the IoT: sensor networks usually deal with only one domain, while the IoT demands the capability of dealing with multiple domains. In addition, context management frameworks should not be limited to a specific number of domains. Feel@Home consists of three parts: user queries, a global administration server (GAS), and domain context managers (DCM). User queries are first received by the GAS, which decides which domain needs to be contacted to answer the query and redirects it to the relevant domain context managers. Two components residing in the GAS, the context entry manager (CEM) and the context entry engine (CEE), perform this task. A DCM consists of typical context management components such as a context wrapper (gathers context from sensors and other sources), a context aggregator (triggers context reasoning), context reasoning, a knowledge base (stores context), and several other components that manage user queries and the publish/subscribe mechanism.
Answers to a user query are returned along the same path by which the query was received. **CoMiHoC** [@P347] (Context Middleware for ad-HoC networks) is a middleware framework that supports context management and situation reasoning. CoMiHoC proposes CoMoS (Context Mobile Spaces), a context modelling and situation reasoning mechanism that extends context spaces [@P195], and uses the Java Dempster-Shafer library [@P441]. The CoMiHoC architecture comprises six components: context provisioner, request manager, situation reasoner, location reasoner, communication manager, and On-Demand Multicast Routing Protocol (ODMRP). **ezContext** [@P294] is a framework that provides automatic context life cycle management. ezContext comprises several components: context sources (any source that provides context, whether physical sensors, databases, or web services), context providers (retrieve context from the various sources, by either push (passive) or pull (active) methods), a context manager (manages context modelling and storage, and produces high-level context from low-level context), context wrappers (encapsulate retrieved context in the correct format, in this approach key-value pairs), and a providers' registry (maintains the list of context providers and their capabilities). JavaBeans are used as the main data format. **Octopus** [@P285] is an open-source, dynamically extensible system that supports data management and fusion for IoT applications. Octopus develops middleware abstractions and programming models for the IoT, enabling non-specialised developers to deploy sensors and applications without detailed knowledge of the underlying technologies and network. Octopus is focused on the smart home/office domain, and its main component is the *solver*. A solver is a module that performs sensor data fusion operations. Solvers can be added to and removed from the system at any time based on requirements, and can be combined dynamically to build complex operations.
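Octopus-style composable solvers amount to a pipeline of fusion operators that can be registered, removed, and chained at runtime. The sketch below illustrates that design; the solver names and operations are invented for illustration, not taken from Octopus itself.

```python
# Sketch of Octopus-style composable "solvers": data fusion operators that
# can be added, removed, and chained at runtime. Solver names are invented.

class SolverPipeline:
    def __init__(self):
        self.solvers = []  # ordered list of (name, function) pairs

    def add(self, name, fn):
        self.solvers.append((name, fn))

    def remove(self, name):
        self.solvers = [(n, f) for n, f in self.solvers if n != name]

    def run(self, samples):
        """Feed the input through every solver in registration order."""
        result = samples
        for _, fn in self.solvers:
            result = fn(result)
        return result

pipeline = SolverPipeline()
pipeline.add("drop_invalid", lambda xs: [x for x in xs if x is not None])
pipeline.add("average", lambda xs: sum(xs) / len(xs))
print(pipeline.run([20.0, None, 22.0]))  # 21.0

pipeline.remove("average")               # recompose at runtime
print(pipeline.run([20.0, None, 22.0]))  # [20.0, 22.0]
```

The runtime recomposition step is the essential property: a non-specialised developer combines existing operators instead of writing fusion logic from scratch.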
Lessons Learned {#chapter2:LL}
===============

### Development Aids and Practices

### Mobility, Validity, and Sharing

### On Demand Data Modelling

### Hybrid Reasoning

### Hardware Layer Support

### Dynamic Configuration and Extensions

### Distributed Processing

### Other Aspects

![image](./Figures/61-Conceptual_Framework.pdf)

Challenges and Future Research Directions {#chapter2:LLFRD}
=========================================

As we mentioned earlier, one of our goals in this survey is to understand how context-aware computing can be applied in the IoT paradigm based on past experience. Specifically, we evaluated fifty context-aware projects and highlighted the lessons we can learn from them from an IoT perspective. In this section, our objective is to discuss six unique challenges in the IoT where novel techniques and solutions may need to be employed.

### Automated configuration of sensors {#chapter2:LLFRD:Automated_configuration_of_sensors}

In traditional pervasive/ubiquitous computing, only a limited number of sensors are connected to an application (e.g. smart farm, smart home). In contrast, the IoT envisions billions of sensors connected together over the Internet. As a result, a unique challenge arises in connecting and configuring sensors for applications. Due to the scale, it is not feasible to connect sensors manually to an application or to a middleware [@ZMP005]; there has to be an automated, or at least semi-automated, process. In order to accomplish this, applications should be able to understand the sensors (e.g. their capabilities, the data structures they produce, and hardware/driver-level configuration details).
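A semi-automated connection process of the kind argued for above boils down to matching machine-readable sensor self-descriptions against application requirements. The sketch below illustrates the matching step only; the description fields and sensor entries are a hypothetical simplification of what TEDS- or SensorML-style metadata would provide.

```python
# Sketch of semi-automated sensor-to-application matching. The description
# schema below is a hypothetical stand-in for TEDS/SensorML-style metadata.

SENSORS = [
    {"id": "s1", "measures": "temperature", "unit": "celsius", "location": "greenhouse"},
    {"id": "s2", "measures": "humidity", "unit": "percent", "location": "greenhouse"},
    {"id": "s3", "measures": "temperature", "unit": "celsius", "location": "office"},
]

def match_sensors(requirement, registry):
    """Return sensors whose self-description satisfies every required field."""
    return [s for s in registry
            if all(s.get(k) == v for k, v in requirement.items())]

# An application states what it needs; the middleware finds candidates.
need = {"measures": "temperature", "location": "greenhouse"}
print([s["id"] for s in match_sensors(need, SENSORS)])  # ['s1']
```

At IoT scale the registry lookup replaces the manual wiring step; a human would at most confirm the shortlisted candidates rather than configure each sensor by hand.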
Recent developments such as the Transducer Electronic Data Sheet (TEDS) [@P258], Open Geospatial Consortium (OGC) Sensor Web Enablement related standards such as the Sensor Model Language (SensorML) [@P256], sensor ontologies [@P103], and immature but promising efforts such as Sensor Device Definitions [@ZMP002] point to directions in which this research can be carried further in order to tackle the challenge.

### Context discovery

Once we connect sensors to a software solution, as mentioned above, there has to be a method for automatically understanding the sensor data produced by the sensors and the related context. We discussed context categorisation techniques comprehensively in Section \[chapter2:CAF:context Types\]. There are many types of context that can be used to enrich sensor data. However, understanding sensor data and annotating it appropriately and automatically in a paradigm such as the IoT, where application domains vary widely, is a challenging task. Recent developments in semantic technologies [@P191; @P103; @P088] and linked data [@P520; @P068] show future directions for further research; semantic technology is widely used to encode domain knowledge.

### Acquisition, modelling, reasoning, and distribution

After analysing acquisition, modelling, and reasoning from different perspectives, it is evident that no single technique can serve the requirements of the IoT. Incorporating and integrating multiple techniques has shown promising success in the field. Some of the early work, such as [@P216; @P463], has discussed the process in detail. However, due to the immaturity of the IoT field, it is difficult to predict when and where to employ each technique. Therefore, it is important to define and follow a standard specification so that different techniques can be added to a solution without significant effort. Several design principles have been proposed in [@P143; @P384] as a step towards the standardisation of components and techniques.
The inner workings of each technique can differ from one solution to another, but common standard interfaces will ensure interoperability among techniques.

### Selection of sensors in sensing-as-a-service model

This is going to be one of the toughest challenges in the IoT. It is clear that we are going to have access to billions of sensors, and in such an environment there could be many alternative sensors to use. For example, consider a situation where an environmental scientist wants to measure environmental pollution in New York City. There are two main problems: (1) ‘what sensors provide information about pollution?’ [@ZMP004]; and (2) when multiple sensors can measure the same parameter (e.g. pH concentration in a lake), ‘which sensor should be used?’ [@ZMP006]. In order to answer question (1), domain knowledge needs to be incorporated into the IoT solution; manually selecting the sensors that provide information about environmental pollution is not feasible in the IoT due to its scale. In order to answer question (2), quality frameworks need to be defined and employed. Such a framework should be able to rank sensors based on factors such as accuracy, relevancy, user feedback, reliability, cost, and completeness. Similar challenges have been addressed in the web service domain over the last decade [@P563; @P564], and we can learn from those efforts.

### Security, privacy, and trust

This has been a challenge for context-aware computing since the beginning. The advantage of context is that it provides more meaningful information that helps us understand a situation or data. At the same time, it increases security threats due to possible misuse of the context (e.g. identity, location, activity, and behaviour). The IoT will increase this challenge significantly.
Even though security and privacy issues are addressed at the context-aware application level, they remain largely unaddressed at the context-aware middleware level. In the IoT, security and privacy need to be protected in several layers: the sensor hardware layer, sensor data communication (protocol) layer, context annotation and context discovery layer, context modelling layer, and context distribution layer. The IoT is a community-based approach where the acceptance of the users (e.g. the general public) is essential. Therefore, security and privacy protection requirements need to be carefully addressed in order to win the trust of the users. ### Context Sharing {#chapter2:LLFRD:Context_Data_Sharing} This is largely neglected in the context-aware middleware domain. Most middleware solutions or architectures are designed to facilitate applications in an isolated fashion. Inter-middleware communication is not considered to be a critical requirement. However, in the IoT, there would be no central point of control. Different middleware solutions developed by different parties will be employed to connect to sensors and to collect, model, and reason about context. Therefore, sharing context information between different kinds of middleware solutions, or different instances of the same middleware solution, is important. Sensor data stream processing middleware solutions such as GSN [@P050] have employed this capability to share sensor data among different instances (e.g. installed and configured on different computers and in different locations) where context is not the focus. However, in contrast to sensor data, context information items have strong relationships with each other (e.g. context modelled using RDF). Therefore, relationship models also need to be transferred and shared among different solutions, which enables the receiver to understand and model the context accurately.
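To make the last point concrete: what must travel between middleware instances is the relationship model itself, not just raw readings. The following is a minimal, purely illustrative sketch in plain Python — the triples and names are invented for the example, and a real deployment would exchange a standard representation such as RDF — showing one instance serialising its context model, relationships included, and another reconstructing it:

```python
import json

# Middleware instance A holds context as (subject, predicate, object)
# triples, i.e. the relationship model, not only sensor values.
context_a = {
    ("sensor42", "locatedIn", "room101"),
    ("sensor42", "observes", "temperature"),
    ("room101", "partOf", "building7"),
}

# Transfer: serialise the full relationship model for the receiver.
payload = json.dumps(sorted(context_a))

# Middleware instance B reconstructs an identical context model,
# so it can reason over the same relationships as the sender.
context_b = {tuple(t) for t in json.loads(payload)}
assert context_b == context_a
```

The point of the sketch is only that the serialised payload carries the links between context items; dropping them (sending values alone) would leave the receiver unable to rebuild the model.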
Conclusions {#chapter2:Conclusions} =========== The IoT has gained significant attention over the last few years. With the advances in sensor hardware technology and cheap materials, sensors are expected to be attached to all the objects around us, so they can communicate with each other with minimum human intervention. This vision has been supported and heavily invested in by governments, interest groups, companies, and research institutes. Understanding sensor data is one of the main challenges that the IoT will face. For example, context awareness has been identified as an important IoT research need by the Cluster of European Research Projects on the IoT (CERP-IoT) [@P019] funded by the European Union. The EU has allocated a time frame for research and development into context-aware computing focused on the IoT to be carried out during 2015-2020. In this survey paper, we analysed and evaluated context-aware computing research efforts to understand how the challenges in the field of context-aware computing have been tackled in the desktop, web, mobile, sensor network, and pervasive computing paradigms. A large number of solutions exist in terms of systems, middleware, applications, techniques, and models proposed by researchers to solve different challenges in context-aware computing. We also discussed some of the trends in the field that were identified during the survey. The results clearly show the importance of context awareness in the IoT paradigm. Our ultimate goal is to build a foundation that helps us to understand what has happened in the past so we can plan for the future more efficiently and effectively. Acknowledgment {#acknowledgment .unnumbered} ============== The authors acknowledge support from the SSN TCP, CSIRO, Australia and the ICT OpenIoT Project, which is co-funded by the European Commission under the Seventh Framework Programme, contract number FP7-ICT-2011-7-287305-OpenIoT.
The Author(s) acknowledge help and contributions from The Australian National University. [Charith Perera]{} received his BSc (Hons) in Computer Science in 2009 from Staffordshire University, Stoke-on-Trent, United Kingdom and MBA in Business Administration in 2012 from University of Wales, Cardiff, United Kingdom. He is currently pursuing his PhD in Computer Science at The Australian National University, Canberra, Australia. He is also working at the Information Engineering Laboratory, ICT Centre, CSIRO and is involved in the OpenIoT Project (Open source blueprint for large scale self-organizing cloud environments for IoT applications), which is co-funded by the European Commission under the Seventh Framework Programme. His research interests include the Internet of Things, pervasive and ubiquitous computing with a focus on sensor networks, middleware, context-aware computing, mobile computing and semantic technologies. He is a member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). [Arkady Zaslavsky]{} is the Science Leader of the Semantic Data Management science area at the Information Engineering Laboratory, ICT Centre, CSIRO. He also holds positions of Adjunct Professor at ANU, Research Professor at LTU and Adjunct Professor at UNSW. He is currently involved in and leading a number of European and national research projects. Before coming to CSIRO in July 2011, he held the position of Chaired Professor in Pervasive and Mobile Computing at Luleå University of Technology, Sweden, where he was involved in a number of European research projects, collaborative projects with Ericsson Research, PhD supervision and postgraduate education. Between 1992 and 2008 Arkady was a full-time academic staff member at Monash University, Australia.
Arkady made internationally recognised contributions in the areas of disconnected transaction management and replication in mobile computing environments, context-awareness, as well as mobile agents. He made significant internationally recognised contributions in the areas of data stream mining on mobile devices, adaptive mobile computing systems, ad-hoc mobile networks, efficiency and reliability of mobile computing systems, mobile agents and mobile file systems. Arkady received an MSc in Applied Mathematics majoring in Computer Science from Tbilisi State University (Georgia, USSR) in 1976 and a PhD in Computer Science from the Moscow Institute for Control Sciences (IPU-IAT), USSR Academy of Sciences in 1987. Before coming to Australia in 1991, Arkady worked in various research positions at industrial R&D labs as well as at the Institute for Computational Mathematics of the Georgian Academy of Sciences, where he led a systems software research laboratory. Arkady Zaslavsky has published more than 300 research publications throughout his professional career and supervised to completion more than 30 PhD students. Arkady Zaslavsky is a Senior Member of the ACM and a member of the IEEE Computer and Communications Societies. [Peter Christen]{} is an Associate Professor in the Research School of Computer Science at the Australian National University. He received his Diploma in Computer Science Engineering from ETH Zürich in 1995 and his PhD in Computer Science from the University of Basel in 1999 (both in Switzerland). His research interests are in data mining and data matching (entity resolution). He is especially interested in the development of scalable and real-time algorithms for data matching, and privacy and confidentiality aspects of data matching and data mining.
He has published over 80 papers in these areas, including the 2012 book ‘Data Matching’ (Springer), and he is the principal developer of the *Febrl* (Freely Extensible Biomedical Record Linkage) open source data cleaning, deduplication and record linkage system. [Dimitrios Georgakopoulos]{} is a Research Director at the CSIRO ICT Centre where he heads the Information Engineering Laboratory that is based in Canberra and Sydney. The laboratory has 70 researchers and more than 40 visiting scientists, students, and interns specializing in the areas of Service/Cloud Computing, Human Computer Interaction, Machine Learning, and Semantic Data Management. Dimitrios is also an Adjunct Professor at the Australian National University. Before coming to CSIRO in October 2008, Dimitrios held research and management positions in several industrial laboratories in the US. From 2000 to 2008, he was a Senior Scientist with Telcordia, where he helped found Telcordia’s Research Centers in Austin, Texas, and Poznan, Poland. From 1997 to 2000, Dimitrios was a Technical Manager in the Information Technology organization of Microelectronics and Computer Corporation (MCC), and the Chief Architect of MCC’s Collaboration Management Infrastructure (CMI) consortium project. From 1990 to 1997, Dimitrios was a Principal Scientist at GTE (currently Verizon) Laboratories Inc. Dimitrios has received a GTE (Verizon) Excellence Award, two IEEE Computer Society Outstanding Paper Awards, and was nominated for the Computerworld Smithsonian Award in Science. He has published more than one hundred journal and conference papers. Dimitrios is the Vice-Chair of the 12th International Semantic Web Conference (ISWC 2013) in Sydney, Australia, 2013, and the General Co-Chair of the 9th IEEE International Conference on Collaborative Computing (CollaborateCom 2013) in Austin, Texas, USA, 2013.
In 2011, Dimitrios was the General Chair of the 12th International Conference on Web Information System Engineering (WISE), Sydney, Australia, and the 7th CollaborateCom, Orlando, Florida, October 2011. In 2007, he was the Program Chair of the 8th WISE in Nancy, France, and the 3rd CollaborateCom in New York, USA. In 2005, he was the General Chair of the 6th WISE in New York. In 2002, he served as the General Chair of the 18th International Conference on Data Engineering (ICDE) in San Jose, California. In 2001, he was the Program Chair of the 17th ICDE in Heidelberg, Germany. Before that he was the Program Chair of the 1st International Conference on Work Activity Coordination (WACC) in San Francisco, California, 1999, and has served as Program Chair in a dozen smaller conferences and workshops. [^1]: Charith Perera, Arkady Zaslavsky and Dimitrios Georgakopoulos are with the Information and Communication Centre, Commonwealth Scientific and Industrial Research Organisation, Canberra, ACT, 2601, Australia (e-mail: firstname.lastname@csiro.au) [^2]: Peter Christen is with the Research School of Computer Science, The Australian National University, Canberra, ACT 0200, Australia. (e-mail: peter.christen@anu.edu.au) [^3]: Manuscript received xxx xx, xxxx; revised xxx xx, xxxx. [^4]: The term ‘*context*’ implicitly provides the meaning of ‘*information*’ according to the widely accepted definition provided by [@P104]. Therefore, it is inaccurate to use the term ‘*context information*’ where ‘*information*’ is explicitly mentioned. However, the research community and documents on the web frequently use the term ‘*context information*’. Therefore, we also use both terms interchangeably. [^5]: We use both terms, ‘*objects*’ and ‘*things*’, interchangeably to give the same meaning, as they are frequently used in IoT related documentation. Some other terms used by the research community are ‘smart objects’, ‘devices’, ‘nodes’.
[^6]: The ‘Hydra’ middleware has changed its name due to a name conflict with another project registered under the same name in Germany. The new name of the middleware is the ‘LinkSmart’ middleware.
--- abstract: 'A new nonlinear 3+1 dimensional evolution equation admitting a Lax pair is presented. In the case of one spatial dimension, the equation reduces to the Burgers equation. A method of construction of exact solutions, based on a class of discrete symmetries of the equation, is developed. These symmetries reduce to the Cole-Hopf transformation in the one-dimensional limit. Some exact solutions are analyzed, in the physical context of spatial dissipative structures and shock wave dressing.' author: - 'M. Rudnev, A.V. Yurov, V.A. Yurov' title: A new integrable 3+1 dimensional generalization of the Burgers equation --- Integrable PDEs, Lax pairs, Darboux transformation, Burgers equation, Cole-Hopf transformation\ 1. Introduction {#introduction .unnumbered} --------------- The vast majority of known completely integrable nonlinear evolution PDEs are 1+1 dimensional [@AS], [@N]. Dealing with more than a single spatial dimension, one faces fundamental algebraic and geometric obstructions. This fact accounts for the scarcity of integrable $d$+1 dimensional systems, for $d\geq 2$, available today. A notable exception is the Burgers equation, which in 1+1 dimensions is $$u_t+uu_x-\nu u_{xx}=0.\label{be}$$ It generalizes to $d\geq2$ spatial dimensions as follows: $$\u_t+\u\cdot\nabla \u-\nu \nabla^2 \u=0,\;\;\u=-\nabla\Phi,\;\;\nabla=(\partial_{x_1},\ldots,\partial_{x_d}). \label{mty}$$ From the physical point of view, the equation describes the balance between nonlinearity and dissipation, [@B], rather than dispersion, which is more characteristic of integrable evolution equations. It arises as a model in various problems of continuous media dynamics, condensed matter physics, cosmology, etc., see for instance [@FB], [@HV], [@P].
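As a quick symbolic sanity check of (\[be\]), one can seed it with an exponential solution of the heat equation via the classical Cole-Hopf map recalled below in the text. The following sketch (Python with sympy; not part of the original derivation, the particular seed is ours) verifies that the resulting $u$ solves the Burgers equation exactly:

```python
import sympy as sp

x, t = sp.symbols('x t')
nu, a = sp.symbols('nu a', positive=True)

# A particular solution of the heat equation Theta_t = nu * Theta_xx
Theta = 1 + sp.exp(a*x + nu*a**2*t)
assert sp.simplify(sp.diff(Theta, t) - nu*sp.diff(Theta, x, 2)) == 0

# Cole-Hopf substitution u = -2 nu (log Theta)_x
u = -2*nu*sp.diff(sp.log(Theta), x)

# Residual of the Burgers equation u_t + u u_x - nu u_xx
residual = sp.diff(u, t) + u*sp.diff(u, x) - nu*sp.diff(u, x, 2)
assert sp.simplify(residual) == 0
```

Any other heat-equation seed works equally well; the computation is the one-dimensional instance of the linearization discussed next.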
Mathematically, the Burgers equation is special, as it can be fully linearized via the Cole-Hopf ([@C], [@H]) substitution $$\u=-2\nu\nabla\log\Theta,\label{ch}$$ which reduces it to the heat equation for $\Theta$, with diffusivity $\nu$. For the state of the art on the pure mathematical side of the Burgers equation, see for instance [@WKMS] and the references contained therein. The principal content of this note is as follows. Apart from the natural integrable higher dimensional generalization (\[mty\]) of the Burgers equation (\[be\]), the latter can be used as the foundation to construct higher dimensional integrable PDEs which are not fully linearizable, yet whose solutions can be found via Lax pairs. In the recent work [@Y], the 2+1 dimensional BLP (Boiti, Leon and Pempinelli) system was studied and shown to reduce to an integrable two dimensional generalization of the Burgers equation (\[be\]). Here we present a 3+1 dimensional nonlinear equation, which contains dissipative terms and has the following properties: 1. It is a scalar second order evolution PDE with quadratic nonlinearity. 2. In one dimensional limits this equation reduces to the Burgers equation. However, unlike the $d$+1 dimensional Burgers equation, our equation is non-isotropic, nor is it linearizable via the Cole-Hopf substitution. 3. It allows for an explicit Lax pair. 4. It possesses a class of Darboux-transformation-like discrete symmetries, and to take advantage of these symmetries one has to solve the Lax pair equations. The symmetries generate a rich spectrum of exact solutions of the equation. In particular, they enable one to perform a 3+1 dimensional dressing of particular solutions of the 1+1 dimensional Burgers equation. Conversely, in one dimensional limits the symmetries reduce to the Cole-Hopf transformation. 2.
General result {#general-result .unnumbered} ----------------- Consider the following equation: $$\begin{array}{c} \displaystyle{K[u]\equiv u_t+a_1 \left({u_x}^2-u_{xx}\right)+a_2 \left({u_z}^2-u_{zz}\right)+b_1 \left(u_x u_y-u_{xy}\right)+b_2 \left(u_x u_z-u_{xz}\right)-\rho u_x-} \\ \displaystyle{-\mu u_y-\lambda u_z=0}, \end{array} \label{Synok}$$ where $u=u(x,y,z,t)$, and all the other parameters are constants. In one dimensional limits equation (\[Synok\]) reduces to the dissipative Burgers equation. Indeed, if $u=u(x,t)$, defining $$\xi(x,t)=u_x(x,t),\label{oned}$$ we obtain $$\xi_t-\rho\xi_x-a_1\xi_{xx}+2a_1\xi\xi_x=0. \label{Bx}$$ The latter equation boils down to the Burgers equation after the change $t\to t'$, such that $$\partial_{t'}=\partial_t-\rho\partial_x,$$ or simply letting $\rho=0$. In the same fashion, if $u=u(z,t)$ and $\eta(z,t)=u_z(z,t)$, one has $$\eta_t-\lambda\eta_z-a_2\eta_{zz}+2a_2\eta\eta_z=0, \label{Bz}$$ the analog of (\[Bx\]). Finally, the reduction $u=u(y,t)$ results in a linear equation $$u_t-\mu u_y=0.$$ In view of the above, equation (\[Synok\]) can be viewed as a special non-isotropic three dimensional extension of the Burgers equation. To emphasize this, let $w=u_x$, consider $\mu=\rho=\lambda=0$ and rewrite (\[Synok\]) as follows: $$\begin{array}{c} w_t+2a_1ww_x+b_1ww_y+b_2ww_z-a_1w_{xx}-a_2w_{zz}-b_1w_{xy}-b_2w_{xz}-\mu w_y+\\ +b_1u_yw_x+b_2u_zw_x+2a_2u_zw_z=0. \end{array} \label{Synok1}$$ Our main result is the following theorem. \[th\] Let $u(x,y,z,t)$ be a particular solution of equation (\[Synok\]) and $\psi=\psi(x,y,z,t)$ be a solution of the following linear equation: $$\begin{array}{c} \displaystyle{ \psi_t=a_1 \psi_{xx}+a_2 \psi_{zz}+b_1 \psi_{xy}+b_2 \psi_{xz}+\left(\rho-2 a_1 u_x-b_2 u_z-b_1 u_y\right) \psi_x+\left(\mu-b_1 u_x\right) \psi_y+} \\ \displaystyle{+\left(\lambda-2 a_2 u_z-b_2 u_x\right) \psi_z}\equiv {\boldsymbol A}[u]\, \psi.
\end{array} \label{A}$$ Then any ${\tilde u}_{klm} ={\tilde u}_{klm}(x,y,z,t),$ defined by the formula $${\tilde u}_{klm}=u-\log\left( \left(\partial_x-u_x\right)^{k}\left(\partial_y-u_y\right)^{l} \left(\partial_z-u_z\right)^{m}\psi\right), \;\;(k,l,m)\in\Z^3_+\label{result}$$ is also a solution of equation (\[Synok\]). Theorem \[th\] rests on the following fact. Equation (\[Synok\]) admits the following Lax pair: $\psi_t={\boldsymbol A}[u]\psi$, cf. (\[A\]), and $$\begin{array}{c} \displaystyle{ \psi_{xyz}=u_z\psi_{xy}+u_y\psi_{xz}+u_x\psi_{yz}+\left(u_{yz}-u_yu_z\right)\psi_x+\left(u_{xz}-u_xu_z\right)\psi_y+ \left(u_{xy}-u_xu_y\right)\psi_z+} \\ \displaystyle{\left(u_{xyz}-u_{yz}u_x-u_{yx}u_z-u_{xz}u_y+u_xu_yu_z\right)\psi}, \end{array} \label{L}$$ Verification of this proposition is a direct calculation. Further in the note we shall refer to equations (\[L\]) and (\[A\]) as the Lax, or LA-pair for equation (\[Synok\]), and $u$ as a potential. Observe that the spectral equation (\[L\]) of the Lax pair can be rewritten in a more compact form: $$\left(\partial_x-u_x\right)\left(\partial_y-u_y\right) \left(\partial_z-u_z\right)\psi\equiv{\boldsymbol L}_1[u]{\boldsymbol L}_2[u]{\boldsymbol L}_3[u]\psi\equiv{\boldsymbol L}[u]\psi=0. \label{factor}$$ Also observe that if we redefine the operator ${\boldsymbol A}[u]\to {\boldsymbol A}'[u]= {\boldsymbol A}[u]+K[u]$, then the compatibility condition of the Lax pair equations (\[L\]) and (\[A\]) will be reduced to an identity. Namely, the operators $\boldsymbol L[u]$ and ${\boldsymbol B}'[u]=\partial_t-{\boldsymbol A}'[u]$ will commute: $[\boldsymbol{L}[u],\boldsymbol{B}'[u]]=0$. Theorem \[th\] implies the following corollary, which follows after successive iteration of (\[result\]).
If $\{\psi_i\}$, $i=1,\ldots,N$ is a set of particular solutions of the Lax pair (\[L\]), (\[A\]), given the potential $u$, satisfying equation (\[Synok\]), new solutions of (\[Synok\]) are generated by the following rule: $${\tilde u} =u-\log\left(\prod_{i=1}^N {\boldsymbol L}^{k_i}_1[u]{\boldsymbol L}^{l_i}_2[u]{\boldsymbol L}^{m_i}_3[u]\psi_i \right),\;\;\;(k_i,l_i,m_i)\in\Z^3_+,\;\forall i=1,\ldots,N. \label{cr}$$ As we have indicated earlier, the Burgers equation (\[Bx\]) results from a 1+1 dimensional reduction of equation (\[Synok\]). Conversely, formulae (\[result\]), (\[cr\]) yield a bona fide generalization of the Cole-Hopf substitution (\[ch\]). Indeed, setting $u\equiv 0$, $k=l=m=0$ in (\[result\]), after differentiation in $x$, we obtain the one dimensional Cole-Hopf transformation, cf. (\[ch\]). Clearly, the same can be said about the $z$ variable reduction as well. Proof of Theorem \[th\] {#proof-of-theorem-th .unnumbered} ----------------------- The theorem will follow from the following lemma. Let $\psi$ be a solution of the Lax pair equations (\[L\]), (\[A\]) with the potential $u$, which is a solution of equation (\[Synok\]). Then the function $${\tilde \psi}_{klm}={\boldsymbol L}^{k}_1[u]{\boldsymbol L}^{l}_2[u]{\boldsymbol L}^{m}_3[u]\psi, \;\;\;(k,l,m)\in\Z^3_+\label{Lemma}$$ also satisfies (\[L\]), (\[A\]), with the same potential $u$.\[lm\] To prove the lemma, observe the validity of the following commutator relations, for $i,j=1,2,3$: $$[{\boldsymbol L}_i[u],{\boldsymbol L}[u]]=[{\boldsymbol L}_i[u],{\boldsymbol B}[u]]=[{\boldsymbol L}_i[u],{\boldsymbol L}_j[u]]=0.$$ Lemma \[lm\] then follows by induction.
$\Box$ Now, to prove Theorem \[th\], let us introduce three intertwining operators $${\boldsymbol D}_i=f_i\partial_i-g_i,\qquad i=1,2,3 \label{D},$$ with the quantities $f_i, g_i$ to be found (naturally, $\partial_{1,2,3}=\partial_{x,y,z}$, respectively), such that the operators ${\boldsymbol D}_i$ have the following property: for some $u_i=u_i(x,y,z,t),$ $${\boldsymbol L}(u_i){\boldsymbol D}_i={\boldsymbol D}_i {\boldsymbol L}[u],\qquad {\boldsymbol B}(u_i){\boldsymbol D}_i={\boldsymbol D}_i {\boldsymbol B}[u]. \label{spl}$$ The commutation relations (\[spl\]) determine the maps $u\to u_i$, which come from substitution of (\[D\]) into (\[spl\]). The explicit form of the operators ${\boldsymbol D}_i$ is found as follows. Substituting (\[D\]) into (\[spl\]) and equating the components at the same partial derivatives results in a system of nonlinear equations (which is not quoted because of its bulk), whence it follows: $${\boldsymbol D}_i={\rm e}^{-v}\left({\boldsymbol L}_i[u]-c_i\right),\qquad u_i={\tilde u}=u-v, \label{new}$$ where the $c_i$ are constants, which will be further assigned zero values. The quantity $v=v(x,y,z,t)$ is a solution of the following nonlinear equation: $$\begin{array}{c} \displaystyle{v_t=a_1 \left(v_{xx}+{v_x}^2\right)+a_2 \left(v_{zz}+{v_z}^2\right)+b_1 \left(v_{xy}+v_x v_y\right)+b_2 \left(v_{xz}+v_x v_z\right)+} \\ \displaystyle{+\left(\rho-2 a_1 u_x-b_2 u_z-b_1 u_y\right) v_x+\left(\mu-b_1 u_x\right)v_y+\left(\lambda-2 a_2 u_z-b_2 u_x\right)v_z} \end{array} \label{vt}$$ Therefore (\[new\]), or explicitly (\[vt\]), indicates that for $u\equiv 0,$ the function “$-v$” satisfies equation (\[Synok\]). Then automatically the quantity $\psi={\rm e}^{-u}$ will satisfy the Lax pair equations (\[L\]) and (\[A\]). In fact, the L-equation, cf. (\[L\]), is satisfied as an identity. The A-equation (\[A\]) however is satisfied only if $u$ is a solution of (\[Synok\]).
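The factorized form (\[factor\]) of the spectral equation, used repeatedly in this argument, can itself be confirmed symbolically. The following sketch (Python with sympy; generic smooth $u$ and $\psi$, names ours) applies $\boldsymbol{L}_1[u]\boldsymbol{L}_2[u]\boldsymbol{L}_3[u]$ to $\psi$ and checks that the expansion coincides with equation (\[L\]):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.Function('u')(x, y, z)
psi = sp.Function('psi')(x, y, z)
d = sp.diff

# Apply the factorized operator L = (d_x - u_x)(d_y - u_y)(d_z - u_z) to psi,
# innermost factor first
step = d(psi, z) - d(u, z)*psi
step = d(step, y) - d(u, y)*step
L_psi = d(step, x) - d(u, x)*step

# Right-hand side of the expanded spectral equation (L)
rhs = (d(u, z)*d(psi, x, y) + d(u, y)*d(psi, x, z) + d(u, x)*d(psi, y, z)
       + (d(u, y, z) - d(u, y)*d(u, z))*d(psi, x)
       + (d(u, x, z) - d(u, x)*d(u, z))*d(psi, y)
       + (d(u, x, y) - d(u, x)*d(u, y))*d(psi, z)
       + (d(u, x, y, z) - d(u, y, z)*d(u, x) - d(u, x, y)*d(u, z)
          - d(u, x, z)*d(u, y) + d(u, x)*d(u, y)*d(u, z))*psi)

# L[u] psi = 0 is exactly psi_xyz = rhs, so the difference must vanish
identity = sp.simplify(sp.expand(d(psi, x, y, z) - rhs - L_psi))
assert identity == 0
```

The check is purely algebraic and involves no particular potential, mirroring the statement that the factorization holds identically.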
On the other hand, by Lemma \[lm\], the functions of the form ${\tilde \psi}_{klm}$ defined via relation (\[Lemma\]) are also solutions of (\[L\]) and (\[A\]), with the same potential $u$. Rewriting them as ${\tilde \psi}_{klm}=\exp(v_{klm})$ and substituting into (\[A\]) one verifies that the quantities $v_{klm}$ are indeed solutions of equation (\[vt\]). Theorem \[th\] and formula (\[result\]) now follow from the second relation of (\[new\]). $\Box$ [*Remark.*]{} Formula (\[result\]) has a form similar to the Darboux transformation, which is a standard tool for construction of exact solutions of nonlinear PDEs (usually 1+1, more rarely 2+1 dimensional) which admit Lax pairs, see e.g. [@Salle] for the general theory, applications and references. However (\[result\]) does not represent a bona fide Darboux transform, for the following two reasons. 1. Darboux transforms, representing discrete symmetries of a Lax pair, possess a non-trivial kernel on the solution space of the pair. In other words, there always exists some Lax pair solution which zeroes the transform. This is the property which enabled Crum [@Crum] to write down the determinant formulae for successive Darboux transforms. Transformation (\[result\]) however does not have this property. It is known that in addition to the Darboux transform, fairly rich spectral problems, such as the Zakharov-Shabat problem for the Nonlinear Schrödinger equation or its two-dimensional generalization for the Davey-Stewartson equations, admit another discrete symmetry, namely the Schlesinger transform [@Schl]. The difference between (\[result\]) and the latter transformations lies in the fact that for the Schlesinger transform, the potential transformation rules can be locally defined without using the Lax pair solution $\psi$, while (\[result\]) certainly does so. This feature is shared by (\[result\]) and the standard Darboux transformation. 2.
To construct exact solutions of nonlinear PDEs via the Darboux transform, one has to take advantage of the solution of the full Lax pair as a system of equations. In order to get (\[result\]) however, we have used the solution of the A-equation (\[A\]) only. The L-equation (\[L\]) has been used only as a tool to prove Theorem \[th\]. To this effect, transformation (\[result\]) combines essential features of the Darboux and Cole-Hopf transformations. 4. Some exact solutions {#some-exact-solutions .unnumbered} ----------------------- Let us use the above formalism in order to construct some exact solutions of equation (\[Synok1\]) (which is the equation for $u_x$, where $u$ is a solution of equation (\[Synok\]) with $\mu=\lambda=\rho=0$). We consider equation (\[Synok1\]), because it appears to be a closer relative of the Burgers equation and is likely to be interesting from the physics point of view. [**Example 4.1.**]{} As the first example let us consider dressing on the vacuum background $u\equiv 0$. In this case the function $$\psi(x,y,z,t)=a^2x^2+b^2y^2+c^2z^2+2\left(a_1a^2+a_2c^2\right)t+s^2, \label{1}$$ where $a,b,c,s$ are some real constants, is clearly a solution of the Lax pair equations (\[L\]) and (\[A\]). Substituting (\[1\]) in (\[result\]) we derive ${\tilde u}_{klm}$. After differentiating it with respect to $x$ and choosing $k=l=m=0$, we obtain a solution $w$ of equation (\[Synok1\]) as follows: $$w(x,y,z,t)=-\frac{2a^2x}{a^2x^2+b^2y^2+c^2z^2+2\left(a_1a^2+a_2c^2\right)t+s^2}. \label{1.1}$$ Physically, solution (\[1.1\]) describes a rationally localized impulse, vanishing as $t\to +\infty$. To ensure that (\[1.1\]) is non-singular for $t\ge 0$, one should impose the inequality $a_2\ge -a^2a_1/c^2$ on the coefficients. Moreover, if $a_2= -a^2a_1/c^2$, the solution in question is rationally localized and stationary.
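The dressing in Example 4.1 can be verified directly. The sketch below (Python with sympy) checks that ${\tilde u}_{000}=-\log\psi$, with $\psi$ from (\[1\]), satisfies $K[{\tilde u}]=0$ for $\rho=\mu=\lambda=0$, and that its $x$-derivative reproduces the rational impulse (\[1.1\]):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
a1, a2, b1, b2, a, b, c, s = sp.symbols('a_1 a_2 b_1 b_2 a b c s', positive=True)
d = sp.diff

# The Lax-pair solution (1) on the vacuum background u = 0
psi = a**2*x**2 + b**2*y**2 + c**2*z**2 + 2*(a1*a**2 + a2*c**2)*t + s**2

# Dressing with k = l = m = 0 gives u_tilde = -log(psi), cf. (result)
u = -sp.log(psi)

# Left-hand side K[u] of the main equation with rho = mu = lambda = 0
K = (d(u, t)
     + a1*(d(u, x)**2 - d(u, x, 2))
     + a2*(d(u, z)**2 - d(u, z, 2))
     + b1*(d(u, x)*d(u, y) - d(u, x, y))
     + b2*(d(u, x)*d(u, z) - d(u, x, z)))
assert sp.simplify(K) == 0

# Its x-derivative reproduces the rational impulse (1.1)
w = d(u, x)
assert sp.simplify(w + 2*a**2*x/psi) == 0
```

The same computation with $a_2=-a^2a_1/c^2$ makes the $t$-term of $\psi$ vanish, confirming the stationary case discussed above.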
[*Remark.*]{} The fact that there exists a localized stationary solution in an equation containing dissipative terms may appear somewhat paradoxical from the point of view of physics. However solution (\[1.1\]) is stationary only if $a_1a_2<0$. One can see that these constants appear in the dissipative terms of equation (\[Synok1\]), $a_1$ characterizing the dissipation along the $x$ and $a_2$ along the $z$ axes. The fact that $a_1$ and $a_2$ should have different signs implies that dissipation in the direction of one axis is compensated by instability in the direction of the other. The balance of these two effects results in the stationary solution, which can be regarded as a three-dimensional [*dissipative structure*]{}. A similar situation occurs with two-dimensional stationary solutions of the BLP equation, cf. [@Y]. Another solution of (\[L\]), (\[A\]) when $u\equiv 0$ is $$\psi(x,y,z,t)=c_1{\rm e}^{\alpha(\alpha a_1+\beta b_1)t}\cosh(\alpha x+\beta y)+c_2 {\rm e}^{(a_1a^2+a_2b^2+abb_2)t}\cosh(ax+bz)+c_3{\rm e}^{a_2c^2t}\cosh(cz), \label{2}$$ where $\alpha,\beta, a,b,c_1,c_2,c_3$ are some real constants. Choosing them such that $$\beta=-\frac{\alpha a_1}{b_1},\qquad b=\frac{-b_2\pm\sqrt{b_2^2-4a_1a_2}}{2a_2}\label{uh}$$ and using (\[2\]) and (\[uh\]) in the same way as (\[1\]) in Example 4.1 above, we obtain another solution: $$\displaystyle{ w(x,y,z,t)=-\frac{\alpha c_1\sinh(\alpha x+\beta y)+a c_2\sinh(ax+bz)}{c_1\cosh(\alpha x+\beta y)+c_2\cosh(ax+bz)+c_3{\rm e}^{a_2c^2t}\cosh(cz)}}. \label{2.1}$$ Developing the analogy with [@Y] further, one can construct exact solutions of the three-dimensional equation (\[Synok1\]) starting from solutions of the 1+1 dimensional Burgers equation. Consider equation (\[Bx\]) for the unknown $\xi$. Suppose $\lambda=\mu=\rho=0$, and let $a_1=\nu$ in (\[Bx\]). Clearly the quantity $U(x,t)=2\nu\xi(x,t)$ solves the one dimensional Burgers equation $$U_t+UU_x-\nu U_{xx}=0.
\label{Burg}$$ As a starting point let us take a shock wave solution of (\[Burg\]), e.g. $$\xi=\frac{U}{2\nu}=\frac{v-\nu a}{2\nu}+\frac{a}{1+{\rm e}^{a(x-vt)}}, \label{I}$$ where $a$ and $v$ are constants. Seek a solution of (\[A\]) as a superposition $$\psi=\sum_{k=1}^N A_k(\eta){\rm e}^{\beta_k y+\gamma_k z}, \label{II}$$ where $\eta=x-vt$, while the $2N$ quantities $\beta_k$ and $\gamma_k$ can in general be functions of $\eta$. For simplicity, however, let us take them to be constants to be determined. Substitution of (\[II\]) into (\[A\]) yields $N$ linear equations for $A_k(\eta)$: $$\nu{\ddot A}+\left(v+\sigma-2\nu\xi\right){\dot A}+\left(a_2\gamma^2-\sigma\xi\right)A=0. \label{III}$$ In equation (\[III\]) the indices for the quantities $A_k$, $\beta_k$, $\gamma_k$ and $\sigma_k\equiv b_1\beta_k+b_2\gamma_k$ have been omitted, while the quantity $\xi$ comes from (\[I\]), the dot standing for differentiation with respect to $\eta$. Equation (\[III\]) can be simplified further. To do so, let us introduce a new independent variable: $$q=\xi(x,t)-\xi_0,\qquad {\rm e}^{a\eta}=\frac{a}{q}-1,\qquad \xi_0=\frac{v-\nu a}{2\nu}.$$ In terms of $q$ equation (\[III\]) becomes $$\nu\left(q^2-aq\right)^2A''(q)+\sigma\left(q^2-aq\right)A'(q)+\left(\delta-\sigma q\right)A(q)=0, \label{IV}$$ where $\delta=a_2\gamma^2-\sigma\xi_0$, while the prime stands for differentiation in $q$. The latter equation can be simplified further via the substitution $$A(q)=W(q)\left(\frac{q}{q-a}\right)^{\sigma/(2\nu a)}, \label{V}$$ reducing (\[IV\]) to the following equation for $W(q)$: $$\frac{W''(q)}{W(q)}=\frac{2a\sigma\nu-4\delta\nu+\sigma^2}{4\nu^2(q-a)^2q^2}. \label{VI}$$ In the particular case of the dependence $$\beta_k=\frac{-b_2\gamma_k-\nu\pm\sqrt{\nu^2+4a_2\nu\gamma_k^2}}{b_1},$$ one can solve (\[A\]) explicitly as $$\psi=\sum_{k=1}^N\left(W_kq+V_k\right) \left(\frac{q}{q-a}\right)^{\sigma_k/(2\nu a)}{\rm e}^{\beta_ky+\gamma_kz}, \label{last}$$ where $W_k$ and $V_k$ are arbitrary constants.
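One may also verify symbolically that the seed (\[I\]) is indeed a shock wave solution of (\[Burg\]): with $U=2\nu\xi$, as stated above, the Burgers residual vanishes identically (a sympy sketch, not part of the original derivation):

```python
import sympy as sp

x, t = sp.symbols('x t')
nu, a, v = sp.symbols('nu a v', positive=True)

# The seed (I), with U = 2 nu xi the travelling shock of (Burg)
xi = (v - nu*a)/(2*nu) + a/(1 + sp.exp(a*(x - v*t)))
U = 2*nu*xi

# Residual of the Burgers equation U_t + U U_x - nu U_xx = 0
residual = sp.diff(U, t) + U*sp.diff(U, x) - nu*sp.diff(U, x, 2)
assert sp.simplify(residual) == 0
```

Up to an exponential-to-tanh rewriting, this $U$ is the standard travelling shock front of speed $v$ and width set by $a$.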
Substitution of (\[last\]) into (\[result\]) yields exact solutions of equations (\[Synok\]), (\[Synok1\]). The described procedure can be regarded as the shock wave dressing. Research has been partially supported by EPSRC Grant GR/S13682/01. [99]{} M.J. Ablowitz, H. Segur. Solitons and the inverse scattering transform. SIAM Studies in Applied Mathematics, [**4**]{}. [*Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA,*]{} 1981. x+425 pp. J.M. Burgers. The nonlinear diffusion equation. [*D. Reidel, Massachusetts*]{} 1974. C. Haselwandter, D.D. Vvedensky. Fluctuations in the lattice gas for Burgers’ equation. [*J. Phys. A*]{} [**35**]{} (2002), no. 41, L579–L584. U. Frisch, J. Bec. “Burgulence”. [*Turbulence: nouveaux aspects/New trends in turbulence*]{} (Les Houches, 2000), 341–383, [*EDP Sci., Les Ulis,*]{} 2001. J.D. Cole. On a quasi-linear parabolic equation occurring in aerodynamics. [*Quart. Appl. Math.*]{} [**9**]{} (1951) 225–236. M.M. Crum. Associated Sturm-Liouville systems. [*Quart. J. Math. Oxford Ser. (2)*]{} [**6**]{} (1955) 121–127. E. Hopf. The partial differential equation $u_t+uu_x=\mu u_{xx}$. [*Comm. Pure Appl. Math.*]{} [**3**]{} (1950) 201–230. A.N. Leznov, A.B. Shabat, R.I. Yamilov. Canonical transformations generated by shifts in nonlinear lattices. [*Phys. Letters A*]{} [**174**]{} (1993) 397–402. V.B. Matveev, M.A. Salle. Darboux Transformation and Solitons. [*Springer Verlag, Berlin–Heidelberg*]{} 1991. A.C. Newell. Solitons in mathematics and physics. CBMS-NSF Regional Conference Series in Applied Mathematics, [**48**]{}. [*Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA,*]{} 1985. xvi+244 pp. P.J.E. Peebles. Principles of Physical Cosmology. [*Princeton University Press,*]{} 1993. E. Weinan, K. Khanin, A. Mazel, Ya. Sinai. Invariant measures for Burgers equation with stochastic forcing. [*Ann. of Math.*]{} (2) [**151**]{} (2000) no. 3, 877–960. A.V. Yurov.
BLP dissipative structures in the plane. [*Phys. Letters A*]{} [**262**]{} (1999) no. 6, 445–452. A.V. Yurov. The Bäcklund-Schlesinger transformation for Davey-Stewartson equations. (Russian) [*Teoret. Mat. Fiz.*]{} [**109**]{} (1996) no. 3, 338–346. [**Authors:**]{}\ [Mischa Rudnev:]{} Department of Mathematics, University of Bristol, Bristol BS6 6AL UK; e-mail [*m.rudnev@bris.ac.uk*]{} [Artem V. Yurov:]{} Department of Theoretical Physics, Kaliningrad State University, Aleksandra Nevskogo 14, Kaliningrad 236041, Russia; e-mail [*artyom\_yurov@mail.ru*]{} [Valerian A. Yurov:]{} Department of Theoretical Physics, Kaliningrad State University, Aleksandra Nevskogo 14, Kaliningrad 236041, Russia; e-mail [*yurov@freemail.ru*]{}
--- abstract: 'The origin of the bimodality in cluster core entropy is still unknown. At the same time, recent work has shown that thermal conduction in clusters is likely a time-variable phenomenon. We consider if time-variable conduction and AGN outbursts could be responsible for the cool-core (CC), non cool-core (NCC) dichotomy. We show that strong AGN heating can bring a CC cluster to a NCC state, which can be stably maintained by conductive heating from the cluster outskirts. On the other hand, if conduction is shut off by the heat-flux driven buoyancy instability, then the cluster will cool to the CC state again, where it is stabilized by low-level AGN heating. Thus, the cluster cycles between CC and NCC states. In contrast with massive clusters, we predict the CC/NCC bimodality should vanish in groups, due to the lesser role of conductive heating there. We find tentative support from the distribution of central entropy in groups, though firm conclusions require a larger sample carefully controlled for selection effects.' author: - | Fulai Guo$^{1,2}$[^1] and S. Peng Oh$^{1}$[^2]\ $^{1}$Department of Physics; University of California; Santa Barbara, CA 93106, USA\ $^{2}$UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA bibliography: - 'ms.bib' title: 'Could AGN Outbursts Transform Cool Core Clusters?' --- \[firstpage\] galaxies: clusters: general – cooling flows – conduction – galaxies: active – instabilities – X-rays: galaxies: clusters Introduction {#section:intro} ============ There is a striking observational bimodality in the properties of galaxy cluster cores, which can be broadly separated into two types: cool-core (CC) and non cool-core (NCC) clusters. The former are defined to have temperature profiles which decline significantly toward the center. 
Measuring the relative abundance of the two types is somewhat hampered by selection effects (CC clusters have strongly peaked X-ray emission profiles which are more easily detected), but surveys indicate that roughly half of all clusters are in each category (e.g., @chen07). The logarithmic slope of the entropy profile is bimodal, with CC/NCC clusters having steeper/shallower slopes respectively [@sanderson09]. The distribution of core entropy also appears to be bimodal in clusters, with population peaks at $K_{o} \sim 15 \, {\rm keV \, cm^{2}}$ and $K_{o} \sim 150 \, {\rm keV \, cm^{2}}$ and a distinct gap between $K_{o} \sim 30-50\, {\rm keV \, cm^{2}}$ [@cavagnolo09]. This in turn feeds into star formation and AGN properties: H$\alpha$ and radio emission from the central brightest cluster galaxy are much more pronounced when the cluster’s core falls below an entropy threshold of $K_{o}< 30 \, {\rm keV \, cm^{2}}$ [@cavagnolo08], and a majority ($\sim 70\%$) of CC clusters host radio sources [@burns90]. Clearly, unravelling the origin of this dichotomy–for which there is no widely accepted explanation–could potentially yield great insight into cluster thermodynamics. Mergers have been considered a prime candidate for transforming CC to NCC systems, given the frequency of mergers in a hierarchical CDM cosmology, as well as the large amount of energy (as much as $\sim 10^{64}$ erg) in mergers, well in excess of that required. However, detailed simulations have not borne this expectation out. For instance, @poole08 find that CC systems are remarkably robust and only disrupted in direct head-on or multiple collisions; even so, the resulting warm core state is only transient. To date, @burns08 present the only set of simulations where NCC clusters are produced via mergers. For this to happen, nascent clusters must experience major mergers early on which destroyed embryonic CCs and prevented their reformation.
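For concreteness, $K$ here denotes the standard X-ray entropy proxy $K = kT_{e}/n_{e}^{2/3}$ (in keV cm$^{2}$). The short check below, with illustrative (not observed) values, shows how a core is classified against the $\sim 30$ keV cm$^{2}$ threshold:

```python
# Standard X-ray "entropy" proxy K = kT_e / n_e^(2/3), in keV cm^2.
# The two example cores below are illustrative values, not data.
def core_entropy(kT_keV, n_e):
    """kT_keV: electron temperature [keV]; n_e: electron density [cm^-3]."""
    return kT_keV / n_e ** (2.0 / 3.0)

K_cc = core_entropy(3.0, 0.05)    # dense, cool core -> low K
K_ncc = core_entropy(7.0, 0.005)  # hot, diffuse core -> high K
assert K_cc < 30.0 < K_ncc        # either side of the observed gap
```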
Note that these simulations do not incorporate mechanisms (such as AGN feedback) to stop a cooling catastrophe; furthermore, the relatively low numerical resolution ($15.6 \, h^{-1} {\rm kpc}$) may preclude firm conclusions about core structure and evolution. As an alternative, @mccarthy08 suggested that early pre-heating prior to cluster collapse could explain the lack of low entropy gas in NCC systems, which receive higher levels of preheating compared to CC systems. A possible concern in such scenarios is whether one can pre-heat the ICM to a high adiabat and yet retain sufficient low entropy gas in lower mass halos to obtain a realistic galaxy population (@bower08; see also @oh_benson [@evan_oh]). More importantly, many NCC clusters also have a short central cooling time ($\sim 1$ Gyr; @sanderson06), and it is not clear why radiative cooling should not erase memory of the initial preheating episode. Both of these hypotheses focus on ‘nature’, or initial conditions, in the form of an early major merger or preheating, in determining whether a cluster is CC or NCC. This hints at a fine-tuning problem: why are initial conditions such that roughly equal numbers of CC and NCC systems appear? In addition, there appear to be substantial differences in the metallicity profiles of NCC and CC systems, at least in groups (see discussion). Simulations show that metallicity profiles are remarkably stable to subsequent mergers [@poole08]. Recently, in @guo08b [hereafter GOR08], we conducted a global Lagrangian stability analysis of clusters in which cooling is balanced by AGN heating and thermal conduction. This offered an alternative promising explanation, based on ‘nurture’, or physical processes occurring in the ICM in its recent past. Our analysis showed that globally stable clusters could only exist in two forms: (1) cool cores stabilized by both AGN feedback and conduction, or (2) non-cool cores stabilized primarily by conduction[^3].
Intermediate temperature profiles typically lead to globally unstable solutions, which would then quickly evolve to either CC or NCC states. In GOR08, we speculated that these two categories of clusters might even represent different stages of the same object. The importance of thermal conduction on global scales obviously depends on the large scale structure of the cluster magnetic fields. Recent calculations suggest that thermal conduction of heat into the cluster core can be self-limiting: in cases where the temperature decreases in the direction of gravity, a buoyancy instability (the heat flux driven buoyancy instability, hereafter HBI) sets in which re-orients a radial magnetic field to be largely transverse, shutting off conduction to the cluster center [@quataert08; @parrish08a; @parrish09; @bogdanovic09]. In GOR08, we speculated (as subsequently did @bogdanovic09) that powerful outbursts from a central AGN might counteract the HBI by disturbing the azimuthal nature of the magnetic field, thus enabling thermal conduction. In particular, the following scenario could arise: as conductivity falls, gas cooling and mass inflow will increase, triggering AGN activity. The rising buoyant bubbles may re-orient the magnetic field to be largely radial again, increasing thermal conduction and reducing mass inflow, shutting off the AGN until the HBI sets in once again. If AGN heating and/or thermal conduction during their ‘on’ states are strong enough to heat the CC cluster to the NCC state, the cluster could then continuously cycle between cool-core (AGN heating dominated) and non cool-core (conduction dominated) states. The goal of this paper is to perform a very simple feasibility study for such a scenario, motivating future, more detailed work. While the transformation of NCC to CC clusters via radiative cooling can be easily accomplished (as seen, for instance, in Fig. 
13 of @parrish09), the possible transformation of CC to NCC systems via AGN outbursts or time-variable conduction has not been demonstrated. It has been conjectured before that strong AGN outbursts (the most extreme examples of which are Hydra A and MS 0735+7421) could permanently modify core entropy [@kaiser03; @voit05]. But there have been no explicit calculations, and indeed there have been suggestions that a CC to NCC transformation would be energetically prohibitive [@mccarthy08]. In this paper, we perform explicit calculations to investigate if strong AGN outbursts could transform a CC cluster to the NCC state (§\[section:cycle\]). Even if energetically subdominant (as they likely are for the most massive clusters), AGN could catalyze a dominant contribution from thermal conduction either due to: (i) the strong temperature dependence of thermal conductive flux, $F \propto T^{5/2}$; (ii) altering magnetic topology as discussed above. We study if such effects can indeed allow a CC to NCC transformation. It is important to note that the ability of rising bubbles or other bulk gas motions to globally restructure field geometry and hence thermal conductivity has not been demonstrated. However, there is important circumstantial support from numerical simulations of magnetic draping, which show that magnetic fields are amplified and more ordered in the wake of moving subhalos or bubbles [@ruszkowski07; @asai07; @dursi08; @oneill09]. To isolate the relative contribution of AGN outbursts and conduction, we also consider models of galaxy groups. The strong temperature dependence of thermal conduction implies that conduction should be irrelevant in groups, regardless of magnetic field geometry. If our explanation for the bimodality in cluster cores is correct, then such bimodality should disappear in galaxy groups. We describe our methods in §2, our results in §3, and discuss implications in §4.
The cosmological parameters used throughout this paper are: $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$, $h=0.7$. We have rescaled observational results if the original paper used a different cosmology. Basic Assumptions and Setup =========================== We solve the time-dependent hydrodynamic equations using the ZEUS-3D hydrodynamic code [@stone92] in its one-dimensional mode; in particular, we have incorporated into ZEUS a background gravitational potential, radiative cooling, thermal conduction, convection, and AGN heating. We used the code similarly in @guo08a, albeit modified for the case of cosmic-ray heating, and gratefully acknowledge Mateusz Ruszkowski for supplying us with the modified version of ZEUS described in @ruszkowski02 [hereafter RB02], which was used as our base code. The model of AGN heating we adopt here is that of ‘effervescent heating’ proposed by [@begelman01], simulated in RB02, and also used in the global stability model of GOR08. We refer the reader to these papers for details of the models. Here we simply summarize several modifications and reiterate some important points to note. In all of the preceding papers, it was assumed that the AGN kinetic luminosity was directly related to the instantaneous mass accretion rate, $L = \epsilon \dot{\rm M} c^{2}$. This in itself was a simplification, given that AGN activity is likely to be intermittent, and not necessarily directly related to the mass inflow rate. It was justified on the grounds that AGN intermittency timescales are likely shorter than bubble rise times and gas cooling times, and hence that AGN feedback can be incorporated in a time-averaged sense. Here, we drop this assumption, and instead make the alternative (and perhaps more realistic) prescription of directly incorporating AGN outbursts and intermittency in the simulations. 
In our model, we assume that the AGN is triggered when the central gas entropy drops below a critical value ($S_{5}< S_{\rm crit}$, where $S_{5}$ is the gas entropy at a radius of $r=5$ kpc from the cluster center). We have also considered other AGN trigger criteria: for example, the AGN is triggered when the central mass inflow rate is larger than a critical value or the central gas cooling time is less than a critical value. We found that our results are fairly robust to the specific criterion we adopted. See Table \[table1\] for the specific AGN trigger criterion adopted for each run. Once an AGN is triggered, we assume that the AGN heating lasts for a duration of order the bubble rise time, which is typically comparable to the sound crossing time $t_{\rm sc} \sim 10^{8} r_{100} c_{s,1000}^{-1}$ yr for a radius $r \sim 100 r_{100}$ kpc and sound speed $c_{\rm s} \sim 1000 c_{s,1000} \, {\rm km \, s^{-1}}$ (e.g., see table 3 in @birzan04). Thus we assume that each AGN heating episode starts once the AGN trigger criterion is satisfied, and lasts for $t_{\rm agn}\sim 1\times 10^{8}$ yr. During the outburst, we assume an energy $E_{\rm agn} \sim 10^{60}-10^{61.5}$ erg is liberated. Estimates of the work needed to inflate observed cavities in rich clusters yield $E_{\rm agn} \sim 10^{60}$ erg [@birzan04; @mcnamara05], rising by 1-2 orders of magnitude in the most extreme events such as Hydra A [@nulsen05] and MS 0735+7421 [@mcnamara05]. By comparison, note that a $\sim 10^{9} \, {\rm M_{\odot}}$ black hole which doubles its mass over an accretion episode will liberate $\sim 10^{62} (\epsilon/0.05)$ erg, where $\epsilon$ is the efficiency in converting rest mass energy to kinetic energy. The AGN luminosity during each active AGN heating episode is then $L=E_{\rm agn}/t_{\rm agn}\sim 10^{44}-10^{46} \, {\rm erg \, s^{-1}}$. After a period $t_{\rm agn}$ with active AGN heating, we turn off the AGN heating, but continue to monitor the AGN trigger criterion.
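The entropy-triggered, fixed-duration episodes described above can be sketched with a toy driver that monitors the criterion $S_{5}<S_{\rm crit}$; this is illustrative only (not our ZEUS-based implementation), and the class and parameter names are ours:

```python
# Hypothetical toy version of the AGN triggering logic; names are ours.
YR = 3.156e7            # seconds per year
T_AGN = 1.0e8 * YR      # duration of one heating episode [s]
E_AGN = 2.0e60          # energy per episode [erg] (as in run A1795-1)

class AGNFeedback:
    def __init__(self, S_crit, E_agn=E_AGN, t_agn=T_AGN):
        self.S_crit, self.E_agn, self.t_agn = S_crit, E_agn, t_agn
        self.t_on = None                       # start of current episode

    def luminosity(self, t, S5):
        """Heating luminosity [erg/s] at time t, given S(5 kpc) [keV cm^2]."""
        if self.t_on is not None and t < self.t_on + self.t_agn:
            return self.E_agn / self.t_agn     # episode still in progress
        self.t_on = None                       # episode over; monitor again
        if S5 < self.S_crit:                   # trigger criterion satisfied
            self.t_on = t
            return self.E_agn / self.t_agn
        return 0.0

agn = AGNFeedback(S_crit=15.0)
L = agn.luminosity(t=0.0, S5=12.0)                    # low entropy: triggered
assert 1e44 < L < 1e46                                # matches quoted range
assert agn.luminosity(0.5 * T_AGN, S5=50.0) == L      # stays on for t_agn
assert agn.luminosity(2.0 * T_AGN, S5=50.0) == 0.0    # off once S5 recovers
```

The duty cycle then emerges from how long cooling takes to pull $S_{5}$ back below $S_{\rm crit}$, rather than being imposed by hand.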
Once it is again satisfied, a new AGN heating episode starts. The AGN duty cycle (the period between the triggering of two successive AGN episodes) is set by the cooling time and is generally larger than $t_{\rm agn}$. During each outburst, the AGN heating rate is determined by the ‘effervescent heating’ model described in RB02, except that here we adopt a stronger inner heating cutoff term $1-e^{-(r/r_{0})^{2}}$, instead of $1-e^{-r/r_{0}}$ adopted in RB02, to account for the finite size of the central radio source and to avoid overheating at the cluster centre. In the rest of this paper, the inner heating cutoff radius $r_{0}$ is taken to be $10$ kpc. In our simulations, the gas entropy usually increases with radius, agreeing with observational trends. However, when the ICM is heated by a strong AGN outburst, negative entropy gradients may appear in some regions for a short time period. Thus, convection is also included in our calculation; the convective flux $F_{\rm conv}$ is given by the mixing length theory described in RB02. In cluster regions with negative entropy gradients, convection is turned on and transports thermal gas energy. Similar to RB02, we found that convection is not important for the parameters of the models presented in this paper: we ran our simulations for the same models but without convection and found similar results. In the low-density weakly-magnetized plasma (e.g., the ICM), anisotropic thermal conduction preferentially along the magnetic field lines modifies the usual Schwarzschild convective criterion through the magneto-thermal instability (MTI; @balbus00 [@parrish05; @parrish07]). While we do not incorporate magnetic fields or MTI, recent calculations which do [@bogdanovic09; @parrish09] similarly find it to be subdominant by two orders of magnitude. We do not conduct self-consistent MHD simulations of the HBI (as for instance in @parrish09 [@bogdanovic09]) and so instead employ a simplified toy model for the conductivity.
Given that the mechanism for overcoming the HBI is as yet unknown, we feel that such illustrative examples of the possible impact of time-variable conductivity are justified. We assume that radial conductivity is characterized by the Spitzer conductivity with some time-dependent suppression factor $f$. Initially, the conductivity is assumed to be negligible; we then assume that thermal conduction is efficient (see Table \[table1\] for the value of $f$ adopted in each run) during each AGN outburst ($t_{\rm agn}\sim 1\times 10^{8}$ yr) and then is either turned off or decays as $\sim e^{-(t-t_{\rm off})/t_{\rm HBI}}$, where $t_{\rm off}$ is the end time of the preceding AGN heating episode. In practice, we have found that either assumption, as well as simulations where the onset of efficient conduction lags behind the AGN trigger, all yield very similar results with regard to the stability of the CC state. Here the HBI growth time $t_{\rm HBI}\sim 1.0\times 10^{8}$ yr for the typical cool core cluster A1795 and may be much larger for non-cool core clusters, since $t_{\rm HBI} \propto (d {\rm ln}T/dr)^{-1/2}$ [@quataert08; @parrish08]. Our computational grid extends from $r_{\rm{min}}$ ($1$ kpc) to $r_{\rm{max}}$ ($200$ kpc for A1795 and $100$ kpc for NGC 4325). In order to resolve adequately the inner regions, we adopt a logarithmically spaced grid in which $(\Delta r)_{i+1}/(\Delta r)_{i}=(r_{\rm{max}}/r_{\rm{min}})^{1/N}$, where $N$ is the number of active zones. The standard resolution of our simulations presented in this paper is $N=400$; our code has been tested to be numerically convergent through simulations with different levels of resolution. For initial conditions, we assume the ICM to be isothermal at $T=T_{\rm out}$, and solve for hydrostatic equilibrium. We assume that at the outer boundary $r_{\rm{max}}$, $n_{e}(r_{\rm max})=n_{\rm out}$, which is close to the value extrapolated from the observational density profile.
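The grid construction and isothermal hydrostatic initial condition above can be sketched as follows. The pure NFW-like mass profile $M(r)=M_{0}[\ln(1+r/r_{\rm s})-(r/r_{\rm s})/(1+r/r_{\rm s})]$ is an assumption made here for illustration (the profile used in the paper also involves a core radius $r_{\rm c}$), with A1795-like parameters:

```python
import numpy as np

# Physical constants (cgs) and assumed A1795-like parameters.
G, MSUN, KPC = 6.674e-8, 1.989e33, 3.086e21
KEV, MU, MP = 1.602e-9, 0.6, 1.673e-24

# Logarithmic grid: (dr)_{i+1}/(dr)_i = (r_max/r_min)^(1/N).
r_min, r_max, N = 1.0 * KPC, 200.0 * KPC, 400
q = (r_max / r_min) ** (1.0 / N)
r = r_min * q ** np.arange(N + 1)             # zone edges, r[0..N]
assert np.allclose(np.diff(r)[1:] / np.diff(r)[:-1], q)

# Isothermal hydrostatic equilibrium, anchored at n_e(r_max) = n_out.
# Assumed NFW-like mass profile M(r) = M0*[ln(1+x) - x/(1+x)], x = r/r_s.
M0, r_s, kT, n_out = 6.6e14 * MSUN, 460.0 * KPC, 6.8, 0.003
x = r / r_s
g = G * M0 * (np.log1p(x) - x / (1.0 + x)) / r**2   # gravity [cm s^-2]
w = MU * MP * g / (kT * KEV)                        # |d(ln n_e)/dr| [cm^-1]
I = np.concatenate(([0.0],
                    np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(r))))
n_e = n_out * np.exp(I[-1] - I)                     # [cm^-3]
assert np.isclose(n_e[-1], n_out)                   # outer boundary value
assert np.all(np.diff(n_e) < 0)                     # density falls outward
```

The trapezoidal cumulative integral is enough here since the profile is smooth and the grid already concentrates zones at small radii.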
For boundary conditions, we assume that the gas is in contact with a thermal bath of constant temperature and pressure at the outer radius, where the cooling time exceeds the Hubble time. Thus, we ensure that temperature and density of the thermal gas at the outer radius are constant. We extrapolate all hydrodynamic variables from the active zones to the ghost zones by allowing them to vary as a linear function of radius at both the inner and outer boundaries. The intracluster gas is allowed to flow in and out of active zones at both the inner and outer boundaries. Results ======= Stability in the CC state {#section:stability} ------------------------- ![Time evolution of entropy at $r=5$ kpc in various models for the cluster Abell 1795 ([*top*]{}) and for the group NGC 4325 ([*bottom*]{}). A strong AGN outburst and/or a sharp increase in conductivity is able to accomplish a transition from a CC to NCC state.[]{data-label="plot1"}](f1.eps){width="45.00000%"} Let us begin by considering how clusters can be stabilized in the CC state by a combination of AGN and conductive heating. We first run simulations for a typical massive cluster Abell 1795. The parameters for this cluster are $M_{0} = 6.6 \times 10^{14}$ M$_{\sun}$, $r_{\rm{s}}=460$ kpc, $r_{\rm{c}}=r_{\rm{s}}/20$ (@zakamska03; also see GOR08 for details), $T_{\rm out}=6.8$ keV, and $n_{\rm out}=0.003$ cm$^{-3}$. Specific model parameters for each run are listed in Table \[table1\]. Run A1795-1 is a representative simulation. The cluster is initially in hydrostatic equilibrium with spatially constant temperature $T=T_{\rm out}=6.8$ keV, and then evolves via radiative cooling without AGN heating or thermal conduction. The solid line in the [top]{} panel of Figure \[plot1\] shows the time evolution of gas entropy at $r=5$ kpc ($S_{5}$), which decreases gradually in the first $4$ Gyr. 
When the central gas entropy drops below $15$ keV cm$^{2}$ (at $t\sim 4$ Gyr), an AGN outburst with $E_{\rm agn}=2\times 10^{60}$ erg is triggered and lasts for $t_{\rm agn}= 1\times 10^{8}$ yr. During the period of active AGN heating, thermal conduction with $f=0.4$ is also turned on. As clearly shown in Fig. \[plot1\], the cooling catastrophe is quickly averted, and the gas entropy increases. After $0.1$ Gyr of active heating, both AGN heating and thermal conduction are turned off and the cluster cools again until the next heating episode is triggered. As seen for $t\sim 4-6$ Gyr in Fig. \[plot1\] (top panel), the ICM entropy executes minor oscillations and the cluster stays in the cool core state, where radial profiles of gas temperature and density fit observational data [@ettori02] very well. The radial profiles of entropy oscillating in the CC state are shown in Figure \[plot2\] (top), where entropy profiles are plotted every $0.1$ Gyr since $t= 4$ Gyr. During the CC state, the lines are clearly concentrated in the lower branch with central gas entropy $\sim 10-20$ keV cm$^{2}$. In run A1795-1, the AGN duty cycle is $\sim 0.3$ Gyr. During each heating episode, the volume-integrated conductive heating energy is around $2E_{\rm agn}$, i.e., the conductive heating is comparable to AGN heating. We have also done similar calculations for the cluster A2199 ($T_{\rm out}=4.6$ keV) and the group NGC 4325 ($T_{\rm out}=1$ keV), and found that conductive heating is an order of magnitude smaller than AGN heating in the former and becomes negligible in the latter. The results for NGC 4325 are also presented in this paper for comparison. The parameters for this group are $M_{0} = 1.1 \times 10^{13}$ M$_{\sun}$, $r_{\rm{s}}=78.3$ kpc, $r_{\rm{c}}=0$ kpc, $T_{\rm out}=1$ keV, and $n_{\rm out}=7.27 \times 10^{-4}$ cm$^{-3}$ [@2007ApJ...669..158G]. 
As can be seen (lower panel Fig \[plot1\], $t \sim 0.8-2$ Gyr), the group is similarly stabilized in the CC state with small entropy oscillations. We have done our calculations with different levels of AGN heating and thermal conduction, and found that our results are quite robust. Higher levels of AGN heating usually correspond to larger amplitude entropy oscillations in the CC state. We also consider a model (run A1795-3) where AGN feedback only triggers conduction without heating the ICM (i.e., $E_{\rm agn}=0$ erg), and find that while the cluster cools significantly to low central entropy, it does not end up in a cooling catastrophe, and instead eventually ends up as a CC cluster as well (see the short-dashed line in Fig. \[plot1\]a when $t\lesssim6.4$ Gyr). Note that conduction is regulated by AGN feedback in this run before $t\sim6.4$ Gyr, after which it is fixed to be $f=0.4$ without regulation (see further discussion in §\[section:cycle\]). Although present in the above runs (to account for the shut-off of conductivity by the HBI), the regulation of conductivity is not required for the stability of the CC state. In run A1795-4, we considered a model where conductivity is triggered and then fixed to be $f=0.2$ without regulation since $t\sim 4$ Gyr; the fixed lower value is meant to simulate a situation where the field line geometry reaches a steady state balance between the HBI and some other mechanism (e.g., stirring by galaxies). The cluster evolution is similar to other runs. Note that for this lower value of $f$, the cluster [*would*]{} reach a cooling catastrophe without AGN feedback heating. In addition, for a fixed value of $f$, lower temperature clusters and galaxy groups usually reach cooling catastrophes if only thermal conduction operates.
Note that a minimum amount of thermal conduction is usually required in our calculations, since AGN “effervescent heating” tends to be excessively centrally concentrated: the very central regions of the cluster are overheated, while outer regions develop a cooling catastrophe. Furthermore, thermal conduction is the dominant energy source in massive high-temperature clusters; without it, a much larger $E_{\rm agn}$ (inconsistent with observations) is required to maintain the ICM in the CC state. On the other hand, in low temperature groups such as NGC 4325, AGN heating alone suffices, as seen in the lower panel of Fig. \[plot1\]. The cycle between CC and NCC states {#section:cycle} ----------------------------------- ![Time sequence of entropy in a typical model for the cluster Abell 1795 (run A1795-1 from $t=4$ Gyr to $10$ Gyr; [top]{} panel) and for the group NGC 4325 (run NGC4325-1 from $t=1$ Gyr to $5$ Gyr; [bottom]{} panel). Each line in the [top]{} panel is plotted every $0.1$ Gyr and that in the [bottom]{} panel is plotted every $0.05$ Gyr. The cluster A1795 oscillates in the CC state (lower branch) due to sporadic AGN feedback heating before $t\sim 6.1$ Gyr, when the cluster is quickly heated by a strong AGN outburst to the NCC state. The lines then concentrate in the NCC state (upper branch). The evolution of Abell 1795 is clearly bimodal, while bimodality in the evolution of NGC 4325 is not evident.[]{data-label="plot2"}](f2.eps){width="45.00000%"} Strong AGN outbursts, e.g., Hercules A [@nulsen05] and MS0735.6+7421 (@mcnamara05; @mcnamara09), with $E_{\rm agn}$ up to $\sim 10^{62}$ erg have been found in X-ray observations. In this subsection, we consider the impact of such strong AGN outbursts on the evolution of cool core clusters.
We did our calculations for both the cluster A1795 and the group NGC 4325, aiming to investigate if such strong AGN outbursts can heat the CC system into a NCC state and what the role of thermal conduction is in this context. In run A1795-1, the first AGN outburst after $t=6$ Gyr is modified to be much stronger ($E_{\rm agn}=3\times 10^{61}$ erg). Figure \[plot1\](a) clearly shows that the central gas entropy is quickly boosted from $\sim 15$ keV cm$^{2}$ to $\sim 130$ keV cm$^{2}$ at $t\sim6.1$ Gyr. The cluster indeed reaches a NCC state; the radial entropy profile is shown in the upper branch of Figure \[plot2\](a). During the strong AGN outburst, AGN heating is much stronger (by one order of magnitude) than conductive heating. However, given the spatial dependence of AGN heating, conduction is still important in transporting energy within the cluster to offset cooling in certain regions. Since the cluster’s temperature profile in the NCC state is nearly isothermal, the HBI timescale, which scales as $(d {\rm ln}T/dr)^{-1/2}$ [@quataert08], is very long. We therefore assume that for run A1795-1, thermal conduction in the NCC state does not decay. Figure \[plot2\](a) clearly shows that the ICM adjusts itself to the NCC state where cooling is balanced by thermal conduction alone. Global stability of such conduction-only models is possible in the hottest clusters for a high level of conductivity (GOR08). Run A1795-3 is a model where AGN feedback only triggers conduction without heating the ICM directly (i.e., $E_{\rm agn}=0$ erg). When $t\lesssim6.4$ Gyr, the cluster reaches and is then maintained in the CC state by AGN-regulated conduction (see §\[section:stability\]). After $t\sim6.4$ Gyr, we assume that the AGN is continuously active so that conduction with $f=0.4$ does not decay with time.
The short-dashed line in Figure \[plot1\](a) clearly shows that the massive cluster A1795 is also heated to the NCC state, though this process takes much longer than that in run A1795-1. This shows that in the hottest clusters, if the AGN can trigger a high level of conductivity, its heat input is unimportant. This is not true if the triggered level of conductivity is lower. In run A1795-4, we assume that conductivity is triggered and then fixed to be $f=0.2$ since $t\sim4$ Gyr. The cluster evolution is similar to other runs, with the strong AGN outburst bringing the cluster to a NCC state. However, if the AGN contributes negligible heat input, as in run A1795-3, the lower level of conduction in this run would be unable to prevent a cooling catastrophe. Since thermal conduction will eventually decay due to the HBI, the cluster is expected to cool from the NCC state to the CC state. The HBI linear growth time for the CC cluster A1795 is $t_{\rm HBI}\sim 1.0\times 10^{8}$ yr; however, in the NCC state, the temperature profile is very flat and thus the HBI growth time, which scales as $(d {\rm ln}T/dr)^{-1/2}$ [@quataert08], becomes much longer. Furthermore, simulations show that the cluster takes several instability growth times for the magnetic field lines to be appreciably re-oriented. Thus, in run A1795-2, we assume that conduction decays on a timescale of $t_{\rm HBI}=1$ Gyr after the strong AGN outburst is turned off. The dotted line in Figure \[plot1\](a) demonstrates that the cluster cools gradually until triggering new AGN activity and then stays in the CC state. Thus, the cluster cycles between the CC state and NCC state due to strong AGN outbursts and the HBI. For the group NGC 4325, we modify the AGN outburst at $t\sim 1.9$ Gyr to be much stronger ($E_{\rm agn}=2\times 10^{59}$ erg) and found that the group is heated to the NCC state.
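The time-dependent suppression factor used in runs such as A1795-2, with $\kappa = f\,\kappa_{\rm Spitzer}$ held at its full value while the AGN is on and decaying exponentially on the HBI timescale afterwards, can be sketched as (an illustrative helper, not the simulation code; names are ours):

```python
import math

# Toy suppression factor f(t) for the radial conductivity: full value f0
# while AGN heating is on (t <= t_off), exponential HBI decay afterwards.
def conduction_suppression(t, t_off, f0=0.4, t_hbi=1.0e9):
    """t, t_off, t_hbi in yr; returns f with kappa = f * kappa_Spitzer."""
    if t <= t_off:
        return f0                                  # conduction efficient
    return f0 * math.exp(-(t - t_off) / t_hbi)     # HBI-driven decay

f_start = conduction_suppression(0.0, t_off=0.0)
f_later = conduction_suppression(1.0e9, t_off=0.0)   # one HBI time later
assert f_start == 0.4
assert abs(f_later - 0.4 / math.e) < 1e-12           # decayed by 1/e
```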
Figure \[plot1\]b clearly shows that the group cycles between the CC state and NCC state for all three models considered in this paper (see Table \[table1\] for parameters in each model), with the key difference that unlike the cluster case, the group does not stay in the NCC state for long. We have also done our calculations for the cluster Abell 2199 ($T_{\rm out}=4.6$ keV) and found that a strong AGN outburst with $E_{\rm agn}=5\times 10^{60}$ erg is able to heat the CC cluster to its NCC state. Thus, our calculations suggest the trend that more massive systems require stronger AGN outbursts to reach the NCC state. From Figure \[plot1\], a scenario for the evolution of galaxy clusters and groups may arise naturally: a cool core group or cluster is maintained by normal AGN feedback heating, and may be heated by a strong AGN outburst to the NCC state, from which the system may gradually cool to the CC state again due to the decay of conductivity by the HBI. The strong AGN outburst may be triggered by a sudden increase in the nearby gas supply, or mergers[^4]. This scenario naturally explains current X-ray observations of both CC and NCC groups and clusters. The typical timescale of the HBI is around $0.1$ Gyr for the cool core state of A1795 and varies as $t_{\rm HBI} \propto (d {\rm ln}T/dr)^{-1/2}$ [@quataert08] during the cluster evolution. Clusters with flatter temperature profiles usually have larger $t_{\rm HBI}$, and thus clusters with quite flat temperature profiles may dominate in the population of NCC clusters. This agrees with current X-ray cluster observations (see Fig. 6 of @sanderson06). Calculations by @mccarthy08 suggest that heating a pure cooling flow cluster to a NCC cluster requires extremely large amounts of energy ($\sim 10^{63}$ erg). In our calculations, AGN feedback is triggered long before strong cooling flows form, and thus the needed AGN energy is much less.
We have performed simulations where AGN outbursts are triggered when a strong mass inflow forms (“the cooling flow state”), and found that much larger (several times) AGN heating is required to heat the cluster to its NCC state. The absence of strong cooling flows in clusters suggests that the cluster stabilizes at a CC state far before strong cooling flows form. This permits a NCC state to be attained by heating from a CC state, as we have seen in our simulations. The bimodality of CC and NCC states {#section:bimodality} ----------------------------------- ![Histogram of gas entropy at $0.01r_{500}$ for a sample of 28 nearby galaxy groups from the Two-Dimensional [*XMM-Newton*]{} Group Survey [@johnson09]. If the selection effect is not important, the histogram suggests that the group distribution is unimodal with only one peak for CC groups. []{data-label="plot3"}](f3.eps){width="45.00000%"} Both CC and NCC groups and clusters have been observed in nature. Recently, @sanderson09 and @cavagnolo09 demonstrate that the CC/NCC bimodality does exist in clusters: Figure 12 in the former shows the bimodality in the distribution of the logarithmic slopes of radial entropy profiles, while Figures 6 and 7 in the latter demonstrate the bimodality in the distribution of central gas entropy. In Figure \[plot2\](a), radial entropy profiles in run A1795-1 are plotted every $0.1$ Gyr since $t= 4$ Gyr, when the ICM reaches the CC state (see Fig. \[plot1\]a). As clearly seen, the ICM goes through minor oscillations in the CC state before $t\sim 6.1$ Gyr (lines concentrate in the lower branch). When the strong AGN outburst is triggered at $t\sim 6.1$ Gyr, the cluster is then quickly heated to the NCC state, and stays there due to conductive heating in the NCC state: the lines concentrate there (upper branch). We clearly see the bimodality in the cluster evolution.
Thus our simulations suggest that strong AGN outbursts may heat the CC cluster to its NCC state, and the cluster CC/NCC bimodality naturally appears in this scenario. In run A1795-1, conduction is regulated by AGN feedback and the HBI. In contrast, we have also considered a model where conduction is fixed (uncorrelated with AGN activity; run A1795-4). As shown in Figure \[plot1\](a), we found that the cluster is also heated by the strong AGN outburst at $t\sim 6$ Gyr to the NCC state, where the cluster then stays due to conductive heating during our whole simulation. It seems that the regulation of thermal conduction is not necessary for the CC/NCC bimodality. However, note that in fixed-conductivity models, $f$ must have a value within a small range: if $f$ is too large, the cluster will evolve to the NCC state without staying in the CC state (e.g. run A1795-3 after $t\sim6.4$ Gyr); if $f$ is too small, the cluster may not stay in the NCC state, but instead develop a cooling catastrophe. In other words, if $f$ is too large, we will not see a significant population of massive CC clusters; if $f$ is too small, we will not see a significant population of NCC clusters. On the other hand, time-varying conductivity regulated by the AGN could naturally circumvent this ‘fine-tuning’ problem: the cluster can stay in the CC state due to the alternation of radiative cooling and intermittent heating by AGN feedback and conduction; the cluster can also stay in the NCC state, where efficient conduction triggered by strong AGN outbursts offsets radiative cooling. Furthermore, since conductivity increases strongly with temperature, higher temperature clusters may stay in the NCC state for a much longer time, which is consistent with the observational finding that the fraction of NCC systems in clusters increases with the cluster mass [@chen07]. Figure \[plot2\](b) shows the same plot, but for the $1$ keV group NGC 4325 (run NGC4325-1).
The lines are shown every $0.05$ Gyr from $t= 1$ Gyr, when the ICM is in the CC state (see Fig. \[plot1\]b), to $t= 5$ Gyr. At $t\sim1.9$ Gyr, the group is heated by a strong AGN outburst to the NCC state. However, since conductive heating is inefficient in low-temperature systems ($\kappa \propto T^{5/2}$), the ICM cannot be maintained in the NCC state against radiative cooling and cools to the CC state again (also see Fig. \[plot1\]). In run NGC4325-3, the conductivity triggered by the strong AGN outburst is raised to the full Spitzer value (see Table 1), and we still find that the group evolution is similar to that in run NGC4325-1 (see Fig. \[plot1\]b). We tried to build an equilibrium model with conduction alone for the NCC group NGC 4325 and found that conduction at 10 times the Spitzer value is required to balance radiative cooling. Evidently, in Figure \[plot2\](b), the CC/NCC bimodality is not present in the group evolution; instead, the group distribution is unimodal: lines are only concentrated in the CC state, and the group distribution in NCC states is continuous rather than peaked. If strong AGN outbursts are not common, NCC groups may be rare, since they cannot be maintained in NCC states by conductive heating. To test our model, we turn to group observations in the literature. In @rasmussen07, 14 of their 15 groups observed by [*Chandra*]{} are in the CC state, which may be due to a selection effect, since all of their groups are reasonably X-ray bright. Here we adopt a sample of $28$ nearby groups studied by @johnson09, which is the largest sample to date with high-quality [*XMM-Newton*]{} data. There exist large [*Chandra*]{} samples, but these only select bright groups. We plot the distribution of the central gas entropy at $0.01r_{500}$ in Figure \[plot3\], where $r_{500}$ is the radius within which the mean density of the group is $500$ times the critical density.
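For reference, the central entropy statistic plotted in Figure \[plot3\] follows directly from the definition $S\equiv k_{\rm B}T/n_{\rm e}^{2/3}$ used elsewhere to trigger AGN heating (see the Table 1 footnotes). A minimal sketch, with purely illustrative values of $T$ and $n_{\rm e}$ rather than numbers from the @johnson09 sample:

```python
# Central gas entropy S = k_B * T / n_e^(2/3), in keV cm^2,
# with T in keV and electron density n_e in cm^-3.
def central_entropy(T_keV, n_e):
    return T_keV / n_e ** (2.0 / 3.0)

# Illustrative values: a dense cool core vs. a diffuse NCC-like core.
S_cc  = central_entropy(1.0, 0.05)   # cool, dense group core: ~7.4 keV cm^2
S_ncc = central_entropy(1.0, 0.005)  # same T, ten times less dense: ~34 keV cm^2

print(S_cc, S_ncc)
```

A factor-of-ten drop in core density raises the entropy by $10^{2/3} \approx 4.6$, which is why dense cool cores sit well below the sparsely populated $S\gtrsim 20$ keV cm$^{2}$ tail of the histogram.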
Figure \[plot3\] clearly shows that the central entropy distribution is unimodal and that groups with high values of central entropy ($S\gtrsim 20$ keV cm$^{2}$) are rare. Thus current group observations seem to agree with our results. However, note that NCC groups are very faint, and many of them may not have been observed yet. If more NCC groups are observed in the future, we predict that their distribution is continuous, instead of peaked around one specific NCC state. A large group sample carefully controlled for selection effects would be required to perform a more reliable test of our model.

  Run         $S_{\rm crit}$[^5]   $E_{\rm agn}$[^6]   $f$[^7]   $t_{\rm HBI}$[^8]   $E_{\rm agn,s}$   $f_{\rm s}$   $t_{\rm HBI,s}$
              (keV cm$^{2}$)       ($10^{60}$ erg)               (Gyr)               ($10^{60}$ erg)                 (Gyr)
  ----------- -------------------- ------------------- --------- ------------------- ----------------- ------------- -----------------
  A1795-1     15                   2.0                 0.4       0                   30                0.4           $\infty$
  A1795-2     15                   2.0                 0.4       0                   30                0.4           1
  A1795-3     15                   0.0                 0.4       0                   0                 0.4           $\infty$
  A1795-4     15                   2.0                 0.2       $\infty$            30                0.2           $\infty$
  NGC4325-1   5                    0.03                0.4       0                   0.2               0.4           $\infty$
  NGC4325-2   5                    0.03                0.4       0                   0.2               0.4           1
  NGC4325-3   5                    0.03                0.4       0                   0.2               1.0           $\infty$

  \[table1\]

Discussion
==========

Let us briefly summarize our findings. From a suite of 1D hydrodynamic simulations, we find that clusters can cycle between CC and NCC states, driven by time-variable conduction and/or AGN outbursts. A strong AGN outburst combined with conduction could heat a CC group or cluster to the NCC state.
During this transition, the AGN usually provides most of the heating energy, while conduction is important in transporting energy within the cluster to offset cooling in certain regions, given the spatial dependence of AGN heating. The relative importance of conduction increases with cluster temperature, due to the strong temperature dependence of the conductive flux, $F \propto T^{5/2}$. High temperature clusters, provided that conduction in the ‘on’ state is relatively unsuppressed ($f \sim 0.4$) for an extended time, can even reach the NCC state with no energy input from the AGN. In this case, the AGN may simply serve as a ‘switch’ to regulate conductivity, perhaps by straightening field lines via the production of rising bubbles. Lower temperature clusters (or high temperature clusters if the maximum value of conduction is still relatively weak) require a combination of AGN and conductive heating to attain the NCC state. In both cases, if conduction continues to operate, the cluster can remain stably in the NCC state. On the other hand, if conduction decays via the HBI, then the cluster will cool and revert to the CC state, where it remains stably with normal AGN feedback (§\[section:stability\]) until the next strong outburst and/or strong increase in conductivity continues the cycle. At the low temperature end, groups cannot be stabilized by any means in the NCC state, and rapidly cool to the CC state until the next outburst. The duty cycle, or timescale to cycle between CC and NCC states, shortens with declining temperature. If this hypothesis for the origin of CC/NCC cluster cores is correct, a number of interesting conclusions follow. Since the stabilizing effects of conduction decline with temperature, the NCC/CC bimodality should be a function of temperature, being most sharply defined for high temperature clusters, and vanishing in galaxy groups.
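The $T^{5/2}$ scaling quoted above can be made concrete by comparing the 4.6 keV cluster Abell 2199 with the 1 keV group NGC 4325 (both temperatures from the text); only the ratio is computed, so the Spitzer normalization drops out. A minimal sketch:

```python
# Spitzer-like conduction scales as kappa ∝ T^(5/2); the ratio between
# two systems at different temperatures is independent of normalization.
def kappa_ratio(T1_keV, T2_keV):
    return (T1_keV / T2_keV) ** 2.5

# Abell 2199 (4.6 keV) vs. the group NGC 4325 (1 keV).
r = kappa_ratio(4.6, 1.0)
print(r)  # ≈ 45
```

A factor of roughly 45 in conductivity is the quantitative reason the cluster, but not the group, can be held in the NCC state by conductive heating alone.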
We show there may be tentative observational evidence for a lack of bimodality in core entropy in the group sample of @johnson09, although selection effects have to be carefully quantified before one can draw firm conclusions. The relative abundance of CC/NCC clusters may give insight into the duty cycle on which conduction (and/or AGN outbursts) varies. For instance, the roughly equal fraction of CC/NCC clusters suggests that the AGN duty cycle between strong outbursts in CC systems is of order the HBI/cooling timescale in the corresponding NCC systems, though a more detailed study attempting to match the fraction of time a cluster spends at a given central entropy to the distribution of entropies in the cluster population as a whole would be interesting. Once again, groups can provide a critical test, since they will not spend much time in a high-entropy state. Furthermore, since the duty cycle between strong AGN outbursts is shorter in groups, turbulence or convective effects due to AGN activity which leave an imprint on the metallicity or entropy profile might have a more pronounced effect there. One possible example is the turbulent diffusion of metals [@rebusco05; @sharma09]. In addition to heating the CC cluster to the NCC state, strong AGN outbursts could potentially remove the centrally-peaked metallicity distribution observed in CC systems, resulting in a relatively flat metallicity profile in the NCC state. While this distinction was seen in @de-grandi01, more recent studies by @baldi07, @leccardi08, and @sanderson09 found that outside the very innermost regions, metallicity profiles were consistent with a single power-law at all radii for both CC and NCC clusters. In contrast, the metallicity profiles in NCC groups are much flatter than those in CC groups [@johnson09b]. This is consistent with the model presented in this paper.
In particular, NCC clusters can be stabilized by thermal conduction for sufficiently long periods for the metallicity gradient to be re-established, while NCC groups have a much more recent origin due to the short duty cycle of strong AGN outbursts, and thus retain evidence of AGN ‘stirring’ in the metallicity profile.[^9] Our 1D calculations are frankly exploratory and simplified in nature. Our hope is to show that AGN outbursts and time-variable conductivity are a plausible means of regulating the bimodality between NCC and CC systems, motivating future, more detailed work. Of course, the greatest gap in our understanding is the actual means by which the HBI can be counteracted to allow thermal conduction to operate. Even if the rising bubbles do not cause a radial reorientation of field lines, as we have suggested, [*some*]{} mechanism (perhaps stirring of the gas by galaxies or subhalos) must be counteracting the onset of the HBI in clusters; otherwise, conduction is no longer a viable heating mechanism. This alone would require significant revision of theoretical models, since no known heating mechanism (such as AGN heating or dynamical friction) acting alone without conduction is sufficient to offset a cooling catastrophe in massive clusters (e.g., see @conroy08): such mechanisms tend to be too centrally concentrated toward the core, and only marginally sufficient energetically. Furthermore, it would then seem a remarkable coincidence that if one simply uses observed temperature profiles in clusters to construct the Spitzer conductive flux from the cluster outskirts, it is very nearly equal to that required to balance the radiative cooling rate as indicated by the observed X-ray surface brightness profile, for some reasonable fraction $f\sim 0.3$ of the Spitzer value (e.g., Fig. 17 of @peterson_fabian). There is no reason in principle why such close agreement should exist, and this seems a tantalizing hint that nature somehow ‘knows’ about Spitzer conductivity.
Much work remains to be done before we understand if there are large secular variations to the apparent thermal equilibrium in clusters.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank the referee, Chris Reynolds, for a helpful report. FG thanks Trevor Ponman and Ewan O’Sullivan for helpful discussions and William Mathews for a careful reading of the manuscript. SPO thanks MPA for hospitality. We acknowledge support by NASA grant NNG06GH95G. \[lastpage\] [^1]: E-mail: fulai@ucolick.org [^2]: E-mail: peng@physics.ucsb.edu [^3]: A recent study of a [*Chandra*]{} cluster sample has similarly found that while thermal conduction appears to be sufficient to stabilize NCC clusters, CC clusters appear to form a distinct population in which additional feedback heating is required [@sanderson09]. [^4]: Indeed, little is known about the mechanical variability of AGN; a reasonable assumption is that the luminosity has a log-normal distribution with a ‘flicker-noise’ power spectrum [@nipoti05]. If our hypothesis is correct, then demography, in particular the relative fraction of CC and NCC systems, may hold the key to understanding the frequency of outbursts as a function of energy. [^5]: Each AGN heating episode is triggered when the gas entropy ($S\equiv k_{\rm B}T/ n_{\rm{e}}^{2/3}$) at $r=5$ kpc drops below $S_{\rm crit}$. [^6]: The mechanical energy released during a weak ($E_{\rm agn}$) or strong ($E_{\rm agn,s}$) AGN outburst. We assume that each AGN outburst heats the ICM for a duration of $t_{\rm agn}=1.0\times 10^{8}$ yrs. [^7]: The conduction suppression factor relative to the Spitzer value when AGN heating ($f$) or a strong AGN outburst ($f_{s}$) is active. [^8]: Conduction is on during each active AGN heating episode. When the AGN is turned off, conductivity decays exponentially on a timescale of $t_{\rm HBI}$ (after weak AGN outbursts) or $t_{\rm HBI,s}$ (after strong AGN outbursts).
$t_{\rm HBI}=\infty$ indicates non-decaying conduction, while $t_{\rm HBI}=0$ indicates that conduction is turned off once AGN heating is shut off. [^9]: We thank Trevor Ponman for pointing this out.
---
abstract: 'The pants graph has proved to be influential in understanding 3-manifolds concretely. This stems from a quasi-isometry between the pants graph and Teichmüller space with the Weil-Petersson metric. Currently, all estimates on the quasi-isometry constants depend on the surface in an undiscovered way. This paper starts effectivising some of these constants, which begins to clarify how the relevant constants change with the surface. We do this by studying the hyperbolicity constant of the pants graph for the five-punctured sphere and the twice-punctured torus. The hyperbolicity constant of the relative pants graph for complexity 3 surfaces is also calculated. Note that for higher-complexity surfaces, the pants graph is not hyperbolic or even strongly relatively hyperbolic.'
author:
- Ashley Weber
bibliography:
- 'mybib.bib'
date:
title: Hyperbolicity constants for pants and relative pants graphs
---

Introduction
============

The pants graph has been instrumental in understanding Teichmüller space. This is because the pants graph is quasi-isometric to Teichmüller space equipped with the Weil-Petersson metric [@Brock-WPtoPants]. Brock and Margalit used pants graphs to show that all isometries of Teichmüller space with the Weil-Petersson metric arise from the mapping class group of the surface [@BM-WPisom]. This relationship was also used to classify for which surfaces the associated Teichmüller space is hyperbolic. The relationship between the pants graph and Teichmüller space has been used to study volumes of 3-manifolds [@Brock-WPtoPants; @Brock-WPtrans]. In particular, it has been used to relate the volume of the convex core of a hyperbolic 3-manifold to the distance between two points in Teichmüller space. It has also related the volume of a hyperbolic 3-manifold arising from a pseudo-Anosov element in the mapping class group to the translation length of the pseudo-Anosov element acting on the pants graph.
Both of these relations have constants which depend on the surface; this paper is the start of effectivising those constants. Note that Aougab, Taylor, and Webb have effective bounds on the quasi-isometry constants; however, even these still depend on the surface in a way that is unknown [@ATW]. Let $S_{g,p}$ be a surface with genus $g$ and $p$ punctures. We define the complexity of a surface to be $\xi(S_{g,p}) = 3g + p - 3$. Brock and Farb have shown that the pants graph is hyperbolic if and only if the complexity of the surface is less than or equal to $2$ [@BF]. Brock and Masur showed that in a few cases the pants graph is strongly relatively hyperbolic, specifically when $\xi(S) = 3$ [@BM]. Even though hyperbolicity is well studied for the pants graph, the hyperbolicity constants associated with the pants graph or the relative pants graph are not. In addition to furthering our understanding of the quasi-isometry mentioned above and all of its applications, actual hyperbolicity constants are useful in answering questions about the asymptotic time complexity of certain algorithms, especially those involving the mapping class group. More speculatively, estimates on hyperbolicity constants may be crucial to effectively understand the virtual fibering conjecture, which relates the geometry of the fiber to the geometry of the base surface. The focus of this paper is to find hyperbolicity constants for the pants graph and relative pants graph, when these graphs are hyperbolic. For a surface $S = S_{0,5}, S_{1,2}$, ${\mathcal{P}}(S)$ is $2,691,437$-thin hyperbolic. Computing the asymptotic translation lengths of an element in the mapping class group on ${\mathcal{P}}(S)$ is a question explored by Irmer [@Irmer]. Bell and Webb have an algorithm that answers this question for the curve graph [@BellWebb]. Combining the works of Irmer and of Bell and Webb, one could conceivably come up with an algorithm for asymptotic translation lengths on ${\mathcal{P}}(S)$.
In this case, the above Theorem would put a bound on the run-time of the algorithm in the cases that $S = S_{0,5}, S_{1,2}$. We now turn our attention to the relatively hyperbolic cases. For a surface $S = S_{2,0}, S_{1,3}, S_{0,6}$, ${\mathcal{P}}_{rel}(S)$ is $2,606,810,489$-thin hyperbolic. To show both of our main theorems, we construct a family of paths that is very closely related to hierarchies, introduced in [@MMII]. We show that this family of paths satisfies the thin triangle condition which, by a theorem of Bowditch, allows us to conclude the whole space is hyperbolic [@Bow]. A key tool used throughout is the Bounded Geodesic Image Theorem [@MMII]. This theorem allows us to control the length of geodesics in subspaces. This method cannot be made to generalize to pants graphs in general, since any pants graph of a surface with complexity higher than $3$ is not strongly relatively hyperbolic [@BM]. However, this method may be applicable to other graphs that are variants of the pants graph. One might consider approaching this problem by finding the sectional curvature of Teichmüller space and using the quasi-isometry to obtain information about the hyperbolicity constant of the pants graph. If the sectional curvature is bounded away from zero, one can relate the curvature of the space to the hyperbolicity constant of the space. However, the sectional curvature of Teichmüller space is not bounded away from zero [@Huang]. Therefore, this technique cannot be used. **Acknowledgments:** I would like to thank my advisor, Jeff Brock, for suggesting this problem, support, and helpful conversations. I’d also like to thank Tarik Aougab and Peihong Jiang for helpful conversations.

Preliminaries
=============

Hyperbolicity
-------------

Assume $\Gamma$ is a connected graph which we equip with the metric where each edge has length 1. We give two definitions of a graph being hyperbolic.
A triangle in $\Gamma$ is $k$-*centered* if there exists a vertex $c \in \Gamma$ such that $c$ is distance $\leq k$ from each of its three sides. $\Gamma$ is $k$-*centered hyperbolic* if all geodesic triangles (triangles whose edges are geodesics) are $k$-centered. We say a triangle in $\Gamma$ is $\delta$-*thin* if each side of the triangle is contained in the $\delta$-neighborhood of the other two sides for some $\delta \in {\mathbb{R}}$. A graph is $\delta$-*thin hyperbolic* if all geodesic triangles are $\delta$-thin. Note that $\delta$-thin hyperbolic and $k$-centered hyperbolic are equivalent up to a linear factor [@ABC]. \[centered to thin\] If $\Gamma$ is $k$-centered hyperbolic, then $\Gamma$ is $4k$-thin hyperbolic. The following proof is very similar to the proof that the existence of a global minsize for triangles implies slim triangles in [@ABC] (Proposition 2.1). We denote $[a,b]$ as a geodesic between $a$ and $b$; if $c \in [a,b]$ then $[a, c]$ or $[c,b]$ refers to the subpath of $[a,b]$ with $c$ as one of the endpoints. Consider the triangle $xyz$ and assume it is $k$-centered. Let $p$ be the centered point and $x'$ be the point on the edge $[y,z]$ closest to $p$. Similarly define $y'$ and $z'$. Suppose there is a point $t \in [x,z']$ such that $d(t, [x, y']) > 2k$. Let $u$ be the point in $[t, z']$ nearest to $t$ such that $d(u, u') = 2k$ for some point $u' \in [x, y']$, see Figure \[center to thin figure\]. Consider the geodesic triangle $uu'x$. There exist points $a$, $b$, and $c$ on the three sides of $uu'x$ that are less than or equal to $k$ away from some point $q$, see Figure \[center to thin figure\]. Since $a \in [x, u]$, by assumption $a$ does not lie in $[t, u]$ and $d(u, a) \leq 4k$. So $d(t, u') \leq 4k$ or $d(t, c) \leq 4k$, making the triangle $xyz$ $4k$-thin. Bowditch shows, in [@Bow] Proposition 3.1, that we don’t always have to work with geodesic triangles to show hyperbolicity of a graph.
\[subset hyperbolic\] Given $h \geq 0$, there exists $\delta \geq 0$ with the following property. Suppose that $G$ is a connected graph, and that for each $x, y \in V(G)$, we have associated a connected subgraph, ${\mathcal{L}}(x,y) \subset G$, with $x, y \in {\mathcal{L}}(x,y)$. Suppose that:

1. for all $x, y, z \in V(G)$, $${\mathcal{L}}(x,y) \subset N_h({\mathcal{L}}(x,z) \cup {\mathcal{L}}(z, y))$$ and

2. for any $x, y \in V(G)$ with $d(x,y) \leq 1$, the diameter of ${\mathcal{L}}(x,y)$ in $G$ is at most $h$.

Then $G$ is $\delta$-thin hyperbolic. In fact, we can take any $\delta \geq (3m-10h)/2$, where $m$ is any positive real number satisfying $$2h(6 + \log_2(m+2)) \leq m.$$

Graphs
------

Let $S = S_{g,p}$ be a surface where $g$ is the genus and $p$ is the number of punctures. We define $\xi(S_{g,p}) = 3g + p -3$ and refer to $\xi(S_{g,p})$ as the complexity of $S_{g,p}$. When $\xi(S) > 1$, the curve graph of $S$, ${\mathcal{C}}(S)$, originally introduced by Harvey in [@Harvey], is a graph whose vertices are homotopy classes of essential simple closed curves on $S$ and there is an edge between two vertices if the curves can be realized disjointly, up to isotopy. From here on, when we talk about curves we really mean a representative of the homotopy class of an essential, non-peripheral, simple closed curve. When $\xi(S) = 1$, the definition of the curve graph is slightly altered in order to have a non-trivial graph: the vertices have the same definition, but there is an edge between two curves if they have minimal intersection number. We can similarly define the *arc and curve graph*, ${\mathcal{A}}{\mathcal{C}}(S)$, where a vertex is either a homotopy class of curves or a homotopy class of arcs and the edges represent disjointness. This definition is the same for all surfaces such that $\xi(S) > 0$. A related graph associated to a surface is the pants graph. We call a maximal set of disjoint curves on a surface a *pants decomposition*.
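The constant in Proposition \[subset hyperbolic\] above reduces to a one-dimensional root find: take the smallest $m$ with $2h(6+\log_2(m+2)) \leq m$ and set $\delta = (3m-10h)/2$. A sketch of that computation; feeding in $h = 4\cdot 8{,}900 = 35{,}600$ (the thin constant for hierarchy triangles obtained from the later Proposition \[hierarchy k-centered\] via Lemma \[centered to thin\]) is my inference of how the stated constant arises, not a claim made explicitly in the text:

```python
import math

def bowditch_delta(h):
    """Smallest m with 2h(6 + log2(m+2)) <= m, then delta = (3m - 10h)/2."""
    f = lambda m: m - 2 * h * (6 + math.log2(m + 2))
    lo, hi = 0.0, 1.0
    while f(hi) < 0:          # bracket the root by doubling
        hi *= 2
    for _ in range(200):      # bisection: f(lo) < 0 <= f(hi) throughout
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (3 * hi - 10 * h) / 2

# h = 4 * 8,900 = 35,600: the inferred input for the complexity-2 theorem.
print(round(bowditch_delta(35600)))  # close to the 2,691,437 in the theorem
```

With this input the routine returns a value within rounding of the $2{,}691{,}437$ stated in the main theorem, which supports the inference.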
For $\xi(S) \geq 1$ the *pants graph*, denoted ${\mathcal{P}}(S)$, of a surface $S$ is a graph whose vertices are homotopy classes of pants decompositions, and there exists an edge between two pants decompositions if they are related by an elementary move. Pants decompositions $\alpha$ and $\beta$ differ by an elementary move if one curve, $c$, from $\alpha$ can be deleted and replaced by a curve that intersects $c$ minimally to obtain $\beta$, see Figure \[elementary moves\]. We equip both graphs with the metric where each edge has length 1. Then ${\mathcal{C}}(S)$ and ${\mathcal{P}}(S)$ are complete geodesic metric spaces. The hyperbolicity of these graphs has been studied before. \[curve hyp\] For any hyperbolic surface $S$, ${\mathcal{C}}(S)$ is $17$-centered hyperbolic. Brock and Farb showed: For any hyperbolic surface $S$, ${\mathcal{P}}(S)$ is hyperbolic if and only if $\xi(S) \leq 2$.

Relative graphs
---------------

Let $S$ be a hyperbolic surface such that $\xi(S) \geq 3$. We say that a curve $c \in {\mathcal{C}}(S)$ is *domain separating* if $S \backslash c$ has two components of positive complexity. Each domain separating curve $c$ determines a set in ${\mathcal{P}}(S)$, $X_c = \{\alpha \in {\mathcal{P}}(S) | c \in \alpha \}$. To form the *relative pants graph*, denoted ${\mathcal{P}}_{rel}(S)$, we add a point $p_c$ for each domain separating curve and an edge from $p_c$ to each vertex in $X_c$, where each edge has length $1$. Effectively, we have made the set $X_c$ have diameter $2$ in the relative pants graph. Brock and Masur have shown: For $S$ such that $\xi(S) = 3$, ${\mathcal{P}}_{rel}(S)$ is hyperbolic.

Paths in the Pants Graph
------------------------

Here we describe how we will get a path in ${\mathcal{P}}(S)$ if $\xi(S) =2$ or ${\mathcal{P}}_{rel}(S)$ if $\xi(S) = 3$.
The paths for ${\mathcal{P}}(S)$ are hierarchies and were originally introduced by Masur and Minsky in [@MMII] (in more generality than we will use here); the paths in ${\mathcal{P}}_{rel}(S)$ are motivated by hierarchies. Take two pants decompositions, $\alpha = \{ \alpha_0, \alpha_1\}$ and $\beta = \{ \beta_0, \beta_1\}$, in ${\mathcal{P}}(S)$ where $S = S_{0,5}$ or $S_{1,2}$. To create a hierarchy between $\alpha$ and $\beta$, first connect $\alpha_0$ and $\beta_0$ with a geodesic path in ${\mathcal{C}}(S)$. This geodesic is referred to as the *main geodesic*, $g_{\alpha\beta} = \{ \alpha_0 = g_0, \ldots, g_n = \beta_0\}$. For each $g_i$, $0 \leq i \leq n$, connect $g_{i-1}$ to $g_{i+1}$ by a geodesic, $\gamma_i$, in ${\mathcal{C}}(S\backslash g_i)$, where $g_{-1} = \alpha_1$ and $g_{n+1} = \beta_1$. The collection of all of these geodesics is a *hierarchy* between $\alpha$ and $\beta$, generally pictured as in Figure \[Hierarchy picture\]. We often refer to the geodesic $\gamma_i$ as the geodesic whose domain is ${\mathcal{C}}(S \backslash g_i)$ or the geodesic connecting $g_{i-1}$ and $g_{i+1}$. We can turn a hierarchy into a path in ${\mathcal{P}}(S)$ by looking at all edges in turn, as pictured in Figure \[Hierarchy picture\]. We will often blur the line between the hierarchy being a path in the pants graph and a collection of geodesics, and refer to both as the hierarchy between $\alpha$ and $\beta$. Let $\xi(S) =3$. We make a path in ${\mathcal{P}}_{rel}(S)$ using a similar technique. Take two pants decompositions in ${\mathcal{P}}_{rel}(S)$, $\alpha = \{\alpha_0, \alpha_1, \alpha_2\}$ and $\beta = \{\beta_0, \beta_1, \beta_2\}$. Connect $\alpha_0$ to $\beta_0$ with a geodesic $g_{\alpha\beta}$ in ${\mathcal{C}}(S)$; we still refer to this as the main geodesic.
For every non-domain separating curve $w \in g$, connect $w^{-1}$ to $w^{+1}$ with a geodesic, $h$, in ${\mathcal{C}}(S \backslash w)$ where $w^{-1}$ and $w^{+1}$ are the curves before and after $w$ in $g$. If $w = \alpha_0$ then $w^{-1} = \alpha_1$, and if $w = \beta_0$ then $w^{+1} = \beta_1$. Now for each non-domain separating curve $z \in h$ connect $z^{-1}$ to $z^{+1}$ with a geodesic in ${\mathcal{C}}(S \backslash (w \cup z))$, where $z^{-1}$ and $z^{+1}$ are the curves before and after $z$ in $h$. If $z = w^{-1}$ then $z^{-1}$ is the curve preceding $w$ in the geodesic whose domain is ${\mathcal{C}}(S \backslash w^{-1})$. If $z = w^{+1}$ then $z^{+1}$ is the curve following $w$ in the geodesic whose domain is ${\mathcal{C}}(S \backslash w^{+1})$ (see Figure \[general hierarchy\] (top)). We can get a path in ${\mathcal{P}}_{rel}(S)$ by a similar process as before, going along each of the edges. Whenever we come across a domain separating curve, $c$, where $c$ is in the main geodesic or in a geodesic whose domain is ${\mathcal{C}}(S \backslash w)$ where $w$ is in the main geodesic, we add the point $p_c$ into the path before moving on. For an example see Figure \[general hierarchy\]. These paths are *relative 3-archies*. As before, we will blur the line between the collection of geodesics and the path of a relative 3-archy. When discussing hierarchies (or relative 3-archies), subsurface projections of curves or geodesics are involved. The following maps define what is meant by subsurface projection [@MMII]. An *essential subsurface* is a subsurface where each boundary component is essential. Let ${\mathscr{P}}(X)$ be the set of subsets of $X$. For a set $A$ we define $f(A) = \cup_{a \in A}f(a)$, for any map $f$. Take an essential, non-annular subsurface $Y \subset S$.
We define a map $$\phi_Y: {\mathcal{C}}(S) {\longrightarrow}{\mathscr{P}}({\mathcal{A}}{\mathcal{C}}(Y))$$ such that $\phi_Y(a)$ is the set of arcs and curves obtained from $a \cap Y$ when $\partial Y$ and $a$ are in minimal position. Define another map $$\psi_Y : {\mathscr{P}}({\mathcal{A}}{\mathcal{C}}(Y)) {\longrightarrow}{\mathscr{P}}({\mathcal{C}}(Y))$$ such that if $a$ is a curve, then $\psi_Y(a) = a$, and if $b$ is an arc, then $\psi_Y(b)$ is the union of the non-trivial components of the regular neighborhood of $(b\cap Y) \cup \partial Y$ (see Figure \[nbhd\]). Composing these two maps we define the map $$\begin{aligned} \pi_Y: {\mathcal{C}}(S) &{\longrightarrow}{\mathscr{P}}({\mathcal{C}}(Y)) \\ c &\longmapsto \psi_Y(\phi_Y(c))\end{aligned}$$ We use this map to define distances in a subsurface: for any two sets $A$ and $B$ in ${\mathcal{C}}(S)$, $$d_Y(A, B) = d_Y(\pi_Y(A), \pi_Y(B)).$$ We often refer to this as the distance in the subsurface $Y$. The relationship between hierarchies and these maps gives rise to some useful properties, including the Bounded Geodesic Image Theorem, which was originally proven by Masur and Minsky [@MMII]. \[bounded geodesic image\] Let $Y$ be a subsurface of $S$ with $\xi(Y) \neq 3$ and let $g$ be a geodesic segment, ray, or biinfinite line in ${\mathcal{C}}(S)$, such that $\pi_Y(v) \neq \emptyset$ for every vertex $v$ of $g$. There is a constant $M$ depending only on $\xi(S)$ such that $${\mathrm{diam}}_Y(g) \leq M.$$ It can be shown that $M$ is at most $100$ for all surfaces [@Webb].

Hyperbolicity of Pants Graph for Complexity 2
=============================================

In this section we explore the hyperbolicity constant for the pants graph of surfaces with complexity $2$. Before we state any results, some notation must be discussed. Throughout the paper we denote by $[a, b]_\Sigma$ a geodesic in ${\mathcal{C}}(\Sigma)$ connecting $a$ to $b$, for any surface $\Sigma$.
If a geodesic satisfying this is contained in a hierarchy (or relative 3-archy, in later sections) being discussed, $[a,b]_\Sigma$ denotes the geodesic in the hierarchy. \[hierarchy k-centered\] For $S = S_{0,5}, S_{1,2}$, hierarchy triangles in ${\mathcal{P}}(S)$ are $8,900$-centered. Let $S = S_{0,5}$ or $S_{1,2}$. Take three pants decompositions $\alpha = \{\alpha_0, \alpha_1\}$, $\beta = \{\beta_0, \beta_1\}$, and $\gamma = \{\gamma_0, \gamma_1\}$ in $S$. Consider the triangle $\alpha\beta\gamma$ in ${\mathcal{P}}(S)$ where the edges are taken to be hierarchies instead of geodesics. There are three cases:

1. All three main geodesics have a curve in common.

2. Any two of the main geodesics share a curve, but not the third.

3. None of the main geodesics have common curves.

In all three cases we will find a pants decomposition such that the hierarchy path connecting this pants decomposition to each edge in $\alpha\beta\gamma$ has length less than $8,900$. **Case 1**: Assume the main geodesics of all three edges share the curve $v \in {\mathcal{C}}(S)$. Define $v_{\alpha \beta}^{-1}$ to be the curve on $g_{\alpha \beta}$ preceding $v$ and $v_{\alpha \beta}^{+1}$ the curve on $g_{\alpha \beta}$ following $v$, when viewing $g_{\alpha\beta}$ going from $\alpha_0$ to $\beta_0$. Similarly define $v_{\alpha \gamma}^{-1}$, $v_{\alpha \gamma}^{+1}$, $v_{\beta \gamma}^{-1}$, and $v_{\beta \gamma}^{+1}$. See Figure \[Case 1\]. We want to show the geodesics connecting $v_*^{-1}$ to $v_*^{+1}$ in ${\mathcal{C}}(S \backslash v)$ are not too far apart in ${\mathcal{C}}(S \backslash v)$. Connect $v_{\alpha\beta}^{-1}$ to $v_{\alpha\gamma}^{-1}$, $v_{\alpha\gamma}^{+1}$ to $v_{\beta \gamma}^{+1}$, and $v_{\beta\gamma}^{-1}$ to $v_{\alpha\beta}^{+1}$ by geodesics in ${\mathcal{C}}(S \backslash v)$. We now have a loop in ${\mathcal{C}}(S\backslash v)$.
Since all curves besides $v$ in $S$ intersect the subsurface $S \backslash v$ non-trivially, we can apply the Bounded Geodesic Image Theorem on $[v_{\alpha \beta}^{-1}, \alpha_1]_S$ and $[\alpha_1, v_{\alpha \gamma}^{-1}]_S$ to get $d_{{\mathcal{C}}(S\backslash v)}(v_{\alpha \beta}^{-1}, v_{\alpha \gamma}^{-1}) \leq 2M$. Similarly, $d_{{\mathcal{C}}(S\backslash v)}(v_{\alpha \gamma}^{+1}, v_{\beta \gamma}^{+1}) \leq 2M$ and $d_{{\mathcal{C}}(S\backslash v)}(v_{\beta \gamma}^{-1}, v_{\alpha\beta}^{+1}) \leq 2M$. Consider the geodesic triangle $v_{\alpha\beta}^{+1}v_{\alpha\gamma}^{-1}v_{\beta\gamma}^{+1}$ in ${\mathcal{C}}(S \backslash v)$. We now have the picture in ${\mathcal{C}}(S \backslash v)$ as in Figure \[2 links are thin\]. By Theorem \[curve hyp\], the inner triangle is $17$-centered; call this center $z$. Combining Theorem \[curve hyp\] and Lemma \[centered to thin\], the outer three triangles are $17*4$-thin. Therefore $z$ is at most $17*5 + 2M = 285$ away from each of the geodesics in the hierarchy triangle $\alpha\beta\gamma$ whose domain is ${\mathcal{C}}(S \backslash v)$. This all implies that $\alpha\beta\gamma$ is $285$-centered at $\{v, z\}$. **Case 2**: Assume that at least two main geodesics share a common curve, but no curve is shared by all three main geodesics. First assume there is only one such shared curve. Without loss of generality assume that $g_{\alpha\beta}$ and $g_{\alpha \gamma}$ share the curve $v$. Then we can consider a new triangle with the main geodesics forming the triangle $v\beta_1\gamma_1$, see Figure \[Case 2\]. This new triangle has no shared curves, so it is covered by Case 3. Now assume there is more than one shared curve between the main geodesics.
By definition of a geodesic, for any two main geodesics that share multiple curves, those curves have to show up in each main geodesic in the same order from either end; therefore we can just take the inner triangle where the edges share no curves and apply Case 3. **Case 3**: The argument given for this case is similar to the short cut argument in [@MMII]. Assume none of the three main geodesics, $g_{\alpha \beta}$, $g_{\alpha \gamma}$, and $g_{\beta \gamma}$, share a curve. By Theorem \[curve hyp\] there exists a curve $c \in {\mathcal{C}}(S)$ that is distance at most $17$ from $g_{\alpha \beta}, g_{\alpha \gamma}$, and $g_{ \beta \gamma}$; let $c$ be the curve that minimizes the distance from all three main geodesics. Define $v_{\alpha \beta}$ to be the vertex in $g_{\alpha \beta}$ which has the least distance to $c$, and similarly define $v_{\alpha \gamma}$ and $v_{\beta \gamma}$. Consider the geodesic $[v_{\alpha\beta}, c]_S$ and let $c_0$ be the curve adjacent to $c$ in this geodesic. Let $v_*^{-1}$ be the curve in $g_*$ that precedes $v_{*}$. Now connect $\{v_{\beta\gamma}, v_{\beta\gamma}^{-1}\}$ to $\{c, c_0 \}$ with a hierarchy. We denote the main geodesic of this hierarchy as $[c, v_{\beta\gamma}]_S$. Take a vertex $w \in [c, v_{ \beta \gamma}]_S$ where $w$ is not equal to $c$ or $v_{\beta\gamma}$ and let $w^{-1}$ and $w^{+1}$ denote the vertices directly before and after $w$ in $[c, v_{ \beta \gamma}]_S$. We want to show that the link connecting $w^{-1}$ to $w^{+1}$ in $S\backslash w$ has length at most $5M$. Assume $d_{S \backslash w} (w^{-1}, w^{+1}) \geq 5M$. Consider the path $[w^{+1}, v_{\beta\gamma}]_S \cup [v_{ \beta \gamma}, \beta_0]_S \cup [\beta_0, v_{\alpha \beta}]_S \cup [v_{\alpha \beta}, c]_S \cup [c, w^{-1}]_S$, where geodesics are taken to be on $g_*$ where appropriate. The Bounded Geodesic Image Theorem, and our assumption that $d_{S \backslash w} (w^{-1}, w^{+1}) \geq 5M$, imply that $w$ must be somewhere on the path.
$w$ cannot be in $[w^{+1}, v_{\beta\gamma}]_S$, $[v_{ \beta \gamma}, \beta_0]_S$, or $[c, w^{-1}]_S$ since that would contradict the fact that they are geodesics or the definition of how we chose $c$ and $v_{\beta\gamma}$. Therefore, $w$ is in $[\beta_0, v_{\alpha \beta}]_S$ or $[v_{\alpha \beta}, c]_S$. Without loss of generality assume $w \in [\beta_0, v_{\alpha \beta}]_S$. We can apply the same logic to the path $[w^{+1}, v_{\beta\gamma}]_S \cup [v_{ \beta \gamma}, \gamma_0]_S \cup [\gamma_0, v_{\alpha \gamma}]_S \cup [v_{\alpha \gamma}, c]_S \cup [c, w^{-1}]_S$. Now $w$ has to be in $[v_{\alpha \gamma}, c]_S$ so that it doesn’t contradict the fact that the three main geodesics of the triangle $\alpha\beta\gamma$ do not share any curves. However, now all three main geodesics are closer to $w$ than to $c$, which contradicts our choice of $c$. Therefore, the length of $[w^{-1}, w^{+1}]_{S \backslash w}$ is at most $5M$. Using a similar argument we can show the geodesic in ${\mathcal{C}}(S \backslash v_{\beta\gamma})$ connecting $v_{\beta\gamma}^{-1}$ to the appropriate vertex in $[c, v_{\beta\gamma}]_S$ has length $\leq 5M$. Now consider the geodesic in ${\mathcal{C}}(S \backslash c)$ connecting $c_0$ to the second vertex, $x$, of $[c, v_{\beta\gamma}]_S$. Consider the path $[x, v_{\beta\gamma}]_S \cup [v_{\beta\gamma}, \beta_0]_S \cup [\beta_0, v_{\alpha\beta}]_S \cup [v_{\alpha\beta}, c_0]_S$. $c$ cannot be anywhere in this path, otherwise it would contradict how we chose $c$ or $v_*$. So we can apply the Bounded Geodesic Image Theorem and get that $d_{S \backslash c}(c_0, x) \leq 4M$. Therefore the path from $\{v_{\beta\gamma}, v_{\beta\gamma}^{-1}\}$ to $\{c, c_0\}$ in the pants graph has length less than or equal to $16(5M) + 5M + 4M$. A similar argument can be made for the other two sides of the triangle $\alpha\beta\gamma$, so $\{c, c_0\}$ can be taken to be a center of the triangle. Since $M \leq 100$, the triangle $\alpha\beta\gamma$ is $8,900$-centered at $\{c, c_0\}$.
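The arithmetic behind the constants in this proof can be checked mechanically. Below is a minimal sketch, assuming the bound $M = 100$ from [@Webb]; the variable names and the decomposition of each bound are our reading of the proof, not notation from the text:

```python
# Sanity check of the constants in the proof above, taking M = 100,
# the bound on the Bounded Geodesic Image constant due to Webb.
M = 100

# Case 1: z is 17 away from the inner 17-centered triangle, at most
# 17*4 more across each 17*4-thin outer triangle, plus the 2M-long
# links supplied by the Bounded Geodesic Image Theorem.
case1 = 17 * 5 + 2 * M
assert case1 == 285

# Case 3: at most 16 subsurface geodesics of length <= 5M along the
# main geodesic [c, v_bg]_S, plus the end geodesics of length <= 5M
# (at v_bg) and <= 4M (at c).
case3 = 16 * (5 * M) + 5 * M + 4 * M
assert case3 == 8900

print(case1, case3)
```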
\[main thm 1\] For a surface $S = S_{0,5}, S_{1,2}$, ${\mathcal{P}}(S)$ is $2,691,437$-thin hyperbolic. For $x, y \in {\mathcal{P}}(S)$ define ${\mathcal{L}}(x,y)$ to be the collection of hierarchy paths between $x$ and $y$. These are connected because each hierarchy path is connected and all contain $x$ and $y$. By Theorem \[hierarchy k-centered\] and Lemma \[centered to thin\] we have that for all $x, y, z \in {\mathcal{P}}(S)$ $${\mathcal{L}}(x, y) \subset N_{4*8,900}({\mathcal{L}}(x,z) \cup {\mathcal{L}}(z,y)).$$ If $d(x,y) \leq 1$ then any hierarchy between $x$ and $y$ is just the edge $\{ xy\}$, so ${\mathcal{L}}(x,y) = \{x, y\}$. Thus, both conditions of Proposition \[subset hyperbolic\] are satisfied. Therefore by applying Proposition \[subset hyperbolic\] we get ${\mathcal{P}}(S)$ is $2,691,437$-thin hyperbolic. Relative Hyperbolicity of Pants Graphs Complexity 3 =================================================== In this section we turn our attention to relative pants graphs and their hyperbolicity constant. \[relative hierarchy k-centered\] Take $S$ such that $\xi(S) = 3$. The relative 3-archy triangles in ${\mathcal{P}}_{rel}(S)$ are $6,191,300$-centered. Take three pants decompositions of $S$, say $\alpha = \{\alpha_0, \alpha_1, \alpha_2 \}$, $\beta = \{\beta_0, \beta_1, \beta_2\}$, and $\gamma = \{\gamma_0, \gamma_1, \gamma_2\}$. Form the triangle $\alpha\beta\gamma$ such that each edge in the triangle is a relative 3-archy in ${\mathcal{P}}_{rel}(S)$. Let $g_{\alpha\beta}$, $g_{\beta\gamma}$, and $g_{\alpha\gamma}$ be the three main geodesics that make up the triangle (which connects $\alpha_0$, $\beta_0$, and $\gamma_0$). As before in Theorem \[hierarchy k-centered\], there are three cases: 1. All three main geodesics have a curve in common. 2. Any two of the main geodesics share a curve, but not the third. 3. None of the main geodesics have common curves. 
For the rest of the proof, note that if $v \in {\mathcal{C}}(S)$ is a non-domain separating curve, then $S \backslash v$ has one connected component with positive complexity, so by abuse of notation, we denote this component as $S \backslash v$. This means that every curve in ${\mathcal{C}}(S)$ not equal to $v$ intersects $S \backslash v$, so we can use the Bounded Geodesic Image Theorem on any geodesic that doesn’t contain $v$. Take two non-domain separating curves $v, w \in {\mathcal{C}}(S)$ such that $v$ and $w$ are disjoint. Then, because $\xi(S) = 3$, $S \backslash (v \cup w)$ has one connected component with positive complexity, and again we denote this component as $S \backslash (v \cup w)$. Furthermore, every curve in ${\mathcal{C}}(S)$ not equal to $v$ or $w$ intersects $S \backslash (v \cup w)$, so we may use the Bounded Geodesic Image Theorem for any geodesic that doesn’t contain $v$ or $w$. Whenever a domain separating curve, $c$, shows up in a relative 3-archy in ${\mathcal{P}}_{rel}(S)$, the section of the relative 3-archy containing $c$ has length $2$. Therefore, when referring to a curve along a geodesic within a relative 3-archy we will assume it is non-domain separating, since this type of curve adds the most length to the relative 3-archy. This also just makes the proof cleaner. **Case 1:** Let $v$ be a vertex where all three main geodesics intersect. If $v$ is a domain separating curve then each edge of the triangle $\alpha\beta\gamma$ contains the point $p_v$, so the triangle is $0$-centered. Now assume $v$ is not a domain separating curve. Let $v_{\alpha\beta}^{-1}$ and $v_{\alpha\beta}^{+1}$ be the curves that are directly before and after $v$ on $g_{\alpha\beta}$. Similarly define $v_{\alpha\gamma}^{-1}$, $v_{\alpha\gamma}^{+1}$, $v_{\beta\gamma}^{-1}$, and $v_{\beta\gamma}^{+1}$.
Consider the geodesics associated with $v$ in each relative 3-archy edge; in other words, all geodesics in the relative 3-archy that contribute to defining the path where $v$ is a part of every pants decomposition. Let $x_{\alpha\beta}$ be the curve in $[v_{\alpha\beta}^{-1}, v_{\alpha\beta}^{+1}]_{S \backslash v}$ that is adjacent to $v_{\alpha\beta}^{-1}$; similarly define $x_{\alpha\gamma}$. Now connect $\{v_{\alpha\beta}^{-1}, x_{\alpha\beta} \}$ to $\{v_{\alpha\gamma}^{-1}, x_{\alpha\gamma}\}$ with a hierarchy in ${\mathcal{P}}(S \backslash v)$. Note, to make our notation cleaner, we will refer to this as the hierarchy between $v_{\alpha\beta}^{-1}$ and $v_{\alpha\gamma}^{-1}$; similarly later on we won’t necessarily specify the second curve. By the Bounded Geodesic Image Theorem the geodesic connecting $v_{\alpha\beta}^{-1}$ and $v_{\alpha\gamma}^{-1}$ in ${\mathcal{C}}(S\backslash v)$ has length at most $2M$. Now consider any curve, $w$, in the geodesic $[v_{\alpha\beta}^{-1}, v_{\alpha\gamma}^{-1}]_{S \backslash v}$ contained in the hierarchy connecting $\{v_{\alpha\beta}^{-1}, x_{\alpha\beta} \}$ to $\{v_{\alpha\gamma}^{-1}, x_{\alpha\gamma}\}$. Assume $w$ is not a domain separating curve in $S$ and let $w^{-1}$ and $w^{+1}$ be the two curves before and after $w$ on $[v_{\alpha\beta}^{-1}, v_{\alpha\gamma}^{-1}]_{S \backslash v}$. Then the geodesic connecting $w^{-1}$ to $w^{+1}$ in ${\mathcal{C}}(S \backslash (v \cup w))$ has length at most $4M$ by using the Bounded Geodesic Image Theorem on $[w^{-1}, v_{\alpha\beta}^{-1}]_{S \backslash v} \cup [v_{\alpha\beta}^{-1}, \alpha_0]_S \cup [\alpha_0, v_{\alpha\gamma}^{-1}]_S \cup [v_{\alpha\gamma}^{-1}, w^{+1}]_{S \backslash v}$; note $w$ cannot be on this path because $w$ is distance $1$ from $v$, so if it was anywhere in the path it would be violating the assumption that we have geodesics. Therefore the hierarchy between $v_{\alpha\beta}^{-1}$ and $v_{\alpha\gamma}^{-1}$ has length at most $8M^2$. 
Similarly the hierarchies between $v_{\alpha\gamma}^{+1}$ and $v_{\beta\gamma}^{+1}$, and $v_{\alpha\beta}^{+1}$ and $v_{\beta\gamma}^{-1}$ have length less than $8M^2$. Now, make a hierarchy triangle $v_{\alpha\beta}^{+1}v_{\alpha\gamma}^{-1}v_{\beta\gamma}^{+1}$ in ${\mathcal{P}}(S \backslash v)$; see Figure \[links are thin\] for how this fits in with the above. By Theorem \[hierarchy k-centered\], $v_{\alpha\beta}^{+1}v_{\alpha\gamma}^{-1}v_{\beta\gamma}^{+1}$ in ${\mathcal{P}}(S \backslash v)$ is $8,900$-centered; call the point at the center $z$. Then by Theorem \[hierarchy k-centered\] and Lemma \[centered to thin\], the hierarchy triangles $v_{\alpha\beta}^{+1}v_{\alpha\beta}^{-1}v_{\alpha\gamma}^{-1}$, $v_{\beta\gamma}^{-1}v_{\beta\gamma}^{+1}v_{\alpha\gamma}^{+1}$, and $v_{\alpha\gamma}^{-1}v_{\alpha\gamma}^{+1}v_{\beta\gamma}^{+1}$ are $35,600$-thin. Therefore $z$ is at most $124,500$ away from each $[v_*^{+1}, v_{*}^{-1}]_{S \backslash v}$. This implies that the relative 3-archy triangle $\alpha\beta\gamma$ is $124,500$-centered at $\{z, v\}$. **Case 2:** For the same reasons as in Theorem \[hierarchy k-centered\] case 2, this case can be reduced to case 3. **Case 3:** This proceeds with the same strategy as in case 3 of Theorem \[hierarchy k-centered\]. By Theorem \[curve hyp\], we know the triangle of main geodesics $g_{\alpha\beta}g_{\beta\gamma}g_{\alpha\gamma}$ in ${\mathcal{C}}(S)$ is $17$-centered. Let $c$ be the curve that is at the center of this triangle. Connect $c$ to $g_{\alpha\beta}$, $g_{\beta\gamma}$, and $g_{\alpha\gamma}$ with a geodesic in ${\mathcal{C}}(S)$. Define $v_{\alpha \beta}$ to be the vertex in $g_{\alpha \beta}$ which is the least distance to $c$, and similarly define $v_{\alpha \gamma}$ and $v_{\beta \gamma}$. Let $c_0$ be the curve directly preceding $c$ in $[v_{\alpha\beta}, c]_S$ and let $c^{-1}$ be the curve directly preceding $c_0$.
Consider a geodesic in ${\mathcal{C}}(S \backslash c_0)$ which connects $c^{-1}$ to $c$, define $c_1$ to be the curve directly preceding $c$ in this geodesic. We will show $\{c, c_0, c_1\}$ is a center of our relative 3-archy triangle $\alpha\beta\gamma$. Let $v_{\beta\gamma}^{-1}$ be the curve before $v_{\beta\gamma}$ in $g_{\beta\gamma}$ and $v_{\beta\gamma}'$ be the curve adjacent to $v_{\beta\gamma}$ in the geodesic contained in the relative 3-archy connecting $\beta$ to $\gamma$ whose domain is ${\mathcal{C}}(S \backslash v_{\beta\gamma}^{-1})$. Now connect $\{v_{\beta\gamma}, v_{\beta\gamma}^{-1}, v_{\beta\gamma}'\}$ to $\{c, c_0, c_1 \}$ with a relative 3-archy, $H$. Our goal is to bound the length of $H$. Using the exact argument as in Theorem \[hierarchy k-centered\] case 3, for each $w \in [c, v_{\beta\gamma}]_S$ which is non-separating, the geodesic in $H$ whose domain is ${\mathcal{C}}(S\backslash w)$ has length no more than $5M$. Let $w^{-1}$ and $w^{+1}$ be the curves before and after $w$ in $[c, v_{\beta\gamma}]_S$ and let $[w^{-1}, w^{+1}]_{S \backslash w}$ be the geodesic coming from $H$. Take $z \in [w^{-1}, w^{+1}]_{S \backslash w}$ and consider the geodesic in $H$ with domain ${\mathcal{C}}(S \backslash (w \cup z))$. Define $z^{-1}$ and $z^{+1}$ to be the curves before and after $z$ on $[w^{-1}, w^{+1}]_{S \backslash w}$. We will show $[z^{-1}, z^{+1}]_{S \backslash (w \cup z)}$ has length at most $7M$. Assume towards a contradiction that the length of $[z^{-1}, z^{+1}]_{S \backslash (w \cup z)}$ is greater than $7M$. Then the path $[z^{+1}, w^{+1}]_{S \backslash w} \cup [w^{+1}, v_{\beta\gamma}]_{S} \cup [v_{\beta\gamma}, \gamma_0]_S \cup [\gamma_0, v_{\alpha\gamma}]_S \cup [v_{\alpha\gamma}, c]_S \cup [c, w^{-1}]_S \cup [w^{-1}, z^{-1}]_{S \backslash w}$ must contain $z$ or $w$ somewhere, otherwise by the Bounded Geodesic Image Theorem using this path we would get that the length of $[z^{-1}, z^{+1}]_{S \backslash (w \cup z)}$ is at most $7M$. 
Since $w$ and $z$ are distance $1$ apart, it doesn’t matter which one shows up in the path because we will eventually arrive at the same contradiction. Thus, without loss of generality we assume $z$ is in the path (and all other paths considered for this argument). Then $z$ must be in $[\gamma_0, v_{\alpha\gamma}]_S$ or $[v_{\alpha\gamma}, c]_S$, otherwise there would be a contradiction with the definition of a geodesic or the definition of $c$ or $v_{\beta\gamma}$. Without loss of generality assume $z \in [\gamma_0, v_{\alpha\gamma}]_S$. Similarly the path $[z^{+1}, w^{+1}]_{S \backslash w} \cup [w^{+1}, v_{\beta\gamma}]_{S} \cup [v_{\beta\gamma}, \beta_0]_S \cup [\beta_0, v_{\alpha\beta}]_S \cup [v_{\alpha\beta}, c]_S \cup [c, w^{-1}]_S \cup [w^{-1}, z^{-1}]_{S \backslash w}$ must contain $z$. Again, the only place $z$ could be, without yielding a contradiction, is in $[v_{\alpha\beta}, c]_S$. However, even here, since $z$ is adjacent to $w$, $w$ is strictly closer than $c$ to the three main geodesics of $\alpha\beta\gamma$, which contradicts our choice of $c$. Therefore, the length of $[z^{-1}, z^{+1}]_{S \backslash (w \cup z)}$ is at most $7M$. Now all that’s left to bound is the beginning and end geodesics, i.e. the ones associated to $c$ and $v_{\beta\gamma}$. Let $y$ be the curve adjacent to $v_{\beta\gamma}$ in $[c, v_{\beta\gamma}]_S$ and let $y'$ be the curve adjacent to $v_{\beta\gamma}$ in the geodesic contained in $H$ whose domain is ${\mathcal{C}}(S \backslash y)$. Then the very beginning part of $H$ is the hierarchy connecting $\{y, y' \}$ to $\{v_{\beta\gamma}^{-1}, v_{\beta\gamma}' \}$ in $S \backslash v_{\beta\gamma}$. We will first bound the length of the geodesic $[y, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}}$. Assume that the length is more than $5M$. Then the path $[v_{\beta\gamma}^{-1}, \beta_0]_S \cup [\beta_0, v_{\alpha\beta}]_S \cup [v_{\alpha\beta}, c]_S \cup [c, y]_S$ has to contain $v_{\beta\gamma}$.
By our assumption that the main geodesics on the triangle $\alpha\beta\gamma$ don’t intersect, the only part of the path that $v_{\beta\gamma}$ could be on without forming a contradiction would be $[v_{\alpha\beta}, c]_S$. The same is true of the path $[v_{\beta\gamma}^{-1}, \beta_0]_S \cup [\beta_0, \alpha_0]_S \cup [\alpha_0, v_{\alpha\gamma}]_S \cup [v_{\alpha\gamma}, c]_S \cup [c, y]_S$, where $v_{\beta\gamma}$ would have to be in $[v_{\alpha\gamma}, c]_S$. However, then we could take $v_{\beta\gamma}$ to be the center of the main geodesic triangle, which would give strictly smaller lengths to each of the sides, contradicting our choice of $c$. Therefore, $[y, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}}$ has length at most $5M$. Now take $w \in [y, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}}$ and let $w^{-1}$ and $w^{+1}$ be the curves that come directly before and after $w$ in $[y, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}}$. We want to bound the length of $[w^{-1}, w^{+1}]_{S \backslash (v_{\beta\gamma} \cup w)}$. Assume the length is greater than $7M$. Then the path $[w^{+1}, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}} \cup [v_{\beta\gamma}^{-1}, \beta_0]_S \cup [\beta_0, v_{\alpha\beta}]_S \cup [v_{\alpha\beta}, c]_S \cup[c, y]_S \cup [y, w^{-1}]_{S \backslash v_{\beta\gamma}}$ must contain $w$ or $v_{\beta\gamma}$. The only two places this could happen without raising a contradiction are in $[\beta_0, v_{\alpha\beta}]_S$ or $[v_{\alpha\beta}, c]_S$. Again, whether we assume $w$ or $v_{\beta\gamma}$ is in the path doesn’t matter since we will arrive at the same contradiction, hence we can assume without loss of generality $w$ is always on the path. Therefore, assume $w \in [v_{\alpha\beta}, c]_S$.
Similarly, $w$ is contained in the path $[w^{+1}, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}} \cup [v_{\beta\gamma}^{-1}, v_{\beta\gamma}^{+1}]_{S \backslash v_{\beta\gamma}} \cup [v_{\beta\gamma}^{+1}, \gamma_0]_S \cup [\gamma_0, v_{\alpha\gamma}]_S \cup [v_{\alpha\gamma}, c]_S \cup[c, y]_S \cup [y, w^{-1}]_{S \backslash v_{\beta\gamma}}$, where $w \in [\gamma_0, v_{\alpha\gamma}]_S$ since anywhere else in the path would lead to a contradiction as explained previously. Note that if $w \in [v_{\alpha\gamma}, c]_S$, then since $w$ is disjoint from $v_{\beta\gamma}$ and $w \in [v_{\alpha\beta}, c]_S$, we could make a shorter path to each of the three sides on the main geodesic triangle and then $v_{\beta\gamma}$ would be the center of the triangle, contradicting our choice of $c$. The path $[w^{+1}, v_{\beta\gamma}^{-1}]_{S \backslash v_{\beta\gamma}} \cup [v_{\beta\gamma}^{-1}, \beta_0]_S \cup [\beta_0, \alpha_0]_S \cup [\alpha_0, v_{\alpha\gamma}]_S \cup [v_{\alpha\gamma}, c]_S \cup[c, y]_S \cup [y, w^{-1}]_{S \backslash v_{\beta\gamma}}$ has to contain $w$ as well. No matter where $w$ is on this path, it creates a contradiction: either with the definition of $c$, with the fact that we have a geodesic, or with the assumption that the main geodesics do not share any curves. Consequently, $[w^{-1}, w^{+1}]_{S \backslash (v_{\beta\gamma} \cup w)}$ must have length at most $7M$. Note that this argument also works when $w = y$ or $w = v_{\beta\gamma}^{-1}$, which gives a length bound on the geodesic in $H$ whose domain is ${\mathcal{C}}(S \backslash (v_{\beta\gamma} \cup y))$ or ${\mathcal{C}}(S \backslash (v_{\beta\gamma} \cup v_{\beta\gamma}^{-1}))$, respectively. Let $x$ be the curve adjacent to $c$ in $[v_{\beta\gamma}, c]_S$ and $x'$ be the last curve adjacent to $c$ in the geodesic from the hierarchy whose domain is ${\mathcal{C}}(S \backslash x)$.
First, the geodesic $[c_0, x]_{S \backslash c}$ has length no more than $4M$ by the Bounded Geodesic Image Theorem applied to $[c_0, v_{\alpha\beta}]_S \cup [v_{\alpha\beta}, \beta_0]_S \cup [\beta_0, v_{\beta\gamma}]_S \cup [v_{\beta\gamma}, x]_S$, which doesn’t contain $c$ because if it did we would get a contradiction with the definition of $c$. Now take any curve $w \in [c_0, x]_{S \backslash c}$ and define $w^{-1}$ and $w^{+1}$ as before. Then the path $[w^{+1}, x]_{S \backslash c} \cup [x, v_{\beta \gamma}]_S \cup [v_{\beta\gamma}, \beta_0]_S \cup [\beta_0, v_{\alpha\beta}]_S \cup [v_{\alpha\beta}, c_0]_S \cup [c_0, w^{-1}]_{S \backslash c}$ cannot contain $w$ because $w$ is adjacent to $c$, so if any geodesic making up the path contained $w$ it would either contradict that it is a geodesic or that $c$ is at minimal distance from the main geodesics of the triangle $\alpha\beta\gamma$. Hence, applying the Bounded Geodesic Image Theorem to the path we get that $[w^{-1}, w^{+1}]_{S \backslash (c \cup w)}$ has length no more than $6M$. This leaves bounding the lengths of the geodesics connecting $c_1$ to the second vertex of $[c_0, x]_{S \backslash c}$ and $x'$ to the penultimate vertex of $[c_0, x]_{S \backslash c}$. By a similar argument using the Bounded Geodesic Image Theorem, each of these geodesics has length at most $6M$. Therefore, putting all the length bounds together we get that the relative 3-archy connecting $\{v_{\beta\gamma}, v_{\beta\gamma}^{-1}, v_{\beta\gamma}'\}$ to $\{c, c_0, c_1 \}$ has length at most $16*5M*7M + (4M-1)*6M + 12M + (5M+1)*7M = 6,191,300$. Similarly, $\{c, c_0, c_1\}$ is at distance at most $6,191,300$ from the other two sides of the triangle $\alpha\beta\gamma$. Therefore, the relative 3-archy triangle $\alpha\beta\gamma$ is $6,191,300$-centered. \[main thm 2\] For a surface $S$ such that $\xi(S) =3$, ${\mathcal{P}}_{rel}(S)$ is $1,607,425,314$-thin hyperbolic.
For $x, y \in {\mathcal{P}}_{rel}(S)$ define ${\mathcal{L}}(x,y)$ to be the collection of relative 3-archy paths between $x$ and $y$. These are connected because each relative 3-archy path is connected and all the relative 3-archies in ${\mathcal{L}}(x, y)$ contain $x$ and $y$. By Theorem \[relative hierarchy k-centered\] and Lemma \[centered to thin\] we have that for all $x, y, z \in {\mathcal{P}}_{rel}(S)$ $${\mathcal{L}}(x, y) \subset N_{4*6,191,300}({\mathcal{L}}(x,z) \cup {\mathcal{L}}(z,y)).$$ If $d(x,y) \leq 1$ then any relative 3-archy between $x$ and $y$ is just the edge $\{ xy\}$, so ${\mathcal{L}}(x,y) = \{x, y\}$. We now have both conditions of Proposition \[subset hyperbolic\] satisfied. Therefore by applying Proposition \[subset hyperbolic\] we get that ${\mathcal{P}}_{rel}(S)$ is $1,607,425,314$-thin hyperbolic. [*Email:*]{}\ aweber@math.brown.edu
--- abstract: 'The dynamical density fluctuations around the QCD critical point (CP) are analyzed using relativistic dissipative fluid dynamics, and we show that the sound mode around the QCD CP is strongly attenuated whereas the thermal fluctuation stands out there. We speculate that if possible suppression or disappearance of a Mach cone, which seems to be created by the partonic jets at RHIC, is observed as the incident energy of the heavy-ion collisions is decreased, it can be a signal of the existence of the QCD CP. We have presented the Israel-Stewart type fluid dynamic equations that are derived rigorously on the basis of the (dynamical) renormalization group method in the second part of the talk, which we omit here because of a lack of space.' address: - 'Department of Physics, Kyoto University, Kyoto 606-8502, Japan' - 'Analysis Technology Center, Fujifilm Corporation, Kanagawa 250-0193, Japan' author: - 'Teiji Kunihiro$^{(a)}$, Yuki Minami$^{(a)}$ and Kyosuke Tsumura$^{(b)}$' title: 'Critical Opalescence around the QCD Critical Point and Second-order Relativistic Hydrodynamic Equations Compatible with Boltzmann Equation ' --- Introduction ============ A unique feature of the QCD phase diagram is the existence of a critical point. At the QCD CP, the first order phase transition terminates and turns to a second order phase transition. Around a critical point of a second order transition, we can expect large fluctuations of various quantities, and more importantly there should exist a soft mode associated to the CP. 
The QCD CP belongs to the same universality class as the liquid-gas phase transition point, and, hence, the density fluctuating mode in the space-like region is a softening mode at the CP: The would-be soft mode of the chiral transition, the $\sigma$ mode, is coupled to the density fluctuation[@Kunihiro:1991qu] and becomes a slaving mode of the density variable[@fujii]; see [@Ohnishi:2005br] for another argument on the fate of the $\sigma$ mode around the CP. The density fluctuation depends on the transport as well as thermodynamic quantities that show an anomalous behavior around the critical point. In particular, we should note that the density-temperature coupling, which has not been explicitly taken into account before, can be important. In fact, the dynamical density fluctuations have been analyzed in the non-relativistic case with use of the Navier-Stokes equation, which shows that the Rayleigh peak due to the thermal fluctuation would overwhelm the Brillouin peak due to the sound modes[@reichl]. We apply for the first time relativistic fluid dynamic equations to analyze the spectral properties of density fluctuations, and examine possible critical phenomena. We shall show that even the so-called first-order relativistic fluid dynamic equations generically have no problem describing fluid dynamical phenomena with long wavelengths, contrary to naive expectation. In this report[@minami09], we shall show that the genuine and remaining soft mode at the QCD CP is not a sound mode but the diffusive thermal mode that is coupled to the sound mode, and that the possible divergent behavior of the viscosities might not be observed through the density fluctuations, because the sound modes are attenuated around the CP and would eventually almost die out at the CP.
Relativistic fluid dynamic equations for a viscous system ========================================================= The fluid dynamic equations are the balance equations for energy-momentum and particle number, $\partial_\mu T^{\mu \nu}=0$, $\partial_\mu N^\mu =0$, where $T^{\mu \nu}$ is the energy-momentum tensor and $N^\mu$ the particle current, respectively. They are expressed as $T^{\mu \nu}=(\epsilon+P)u^{\mu}u^{\nu}-Pg^{\mu\nu}+\tau^{\mu\nu}$ and $N^\mu = n u^\mu+\nu^\mu$, where $\epsilon$ is the energy density, $P$ the pressure, $u^\mu$ the flow velocity, and $n$ the particle density; the dissipative parts of the energy-momentum tensor and the particle current are denoted by $\tau^{\mu \nu}$ and $\nu^\mu$, respectively. The so-called first-order equations, such as the Landau[@landau] and Eckart[@eckart] equations, are parabolic and formally violate causality, and are hence called acausal. The causality problem is circumvented in the Israel-Stewart equation[@is], which is a second-order equation with relaxation times incorporated. One should, however, note that the problem of causality is only encountered when one tries to describe phenomena with small wavelengths beyond the valid region of fluid dynamics: the phenomena which fluid dynamics should describe are slowly varying ones with wavelengths much larger than the mean free path. Indeed, the results for fluid dynamical modes with long wavelengths are qualitatively the same irrespective of whether second-order or first-order equations are used[@minami09]. As for the instability seen in the Eckart equation[@hiscock], a new first-order equation in the particle frame constructed by Tsumura, Kunihiro and Ohnishi (TKO) [@tko] shows no such pathological behavior. We employ the Landau[@landau], Eckart[@eckart], Israel-Stewart (I-S)[@is], and TKO equations.
Spectral function of the dynamical density fluctuation ====================================================== By linearizing the fluid dynamic equation around the equilibrium, we can obtain the spectral function of the density fluctuation. The calculational procedure is an extension of the non-relativistic case described in the textbook [@reichl]. The spectral function derived from the Landau equation is found to be $$\begin{aligned} S_{n n}({\mbox{{\boldmath $k$}}},\omega ) &=& {\langle}(\delta n({\mbox{{\boldmath $k$}}},t=0))^2{\rangle}[\;(1-\frac{1}{\gamma}) \frac{2\Gamma_{\rm R} k^{2}}{\omega^{2}+\Gamma_{\rm R}^{2}k^{4}} \nonumber \\ &+& \frac{1}{\gamma} \{\frac{\Gamma_{\rm B} k^{2}}{(\omega -c_{s}k)^{2}+\Gamma_{\rm B}^{2}k^{4}} +\frac{\Gamma_{\rm B} k^{2}}{(\omega +c_{s}k)^{2}+\Gamma_{\rm B}^{2}k^{4}}\} \;]. \label{eq:landau}\end{aligned}$$ Here, the first factor represents the static spectral function, which would show a divergent behavior in the forward angle (${\mbox{{\boldmath $k$}}}=0$) at the CP; this is known as the critical opalescence. The first term in the square bracket represents the thermal (Rayleigh) mode, whereas the second and the third represent the sound (Brillouin) modes. The Eckart equation in the particle frame does not give a sensible result for the dynamical density fluctuation, in accord with its pathological property[@hiscock]. It is noteworthy that the newly proposed equation, the TKO equation[@tko], in the particle frame gives a sensible result even though it is a first-order equation. We have also applied the Israel-Stewart equation[@is] in the particle frame to obtain the spectral function for the dynamical density fluctuation. The result is the same as that of the Landau equation; this tells us that the modified part to circumvent the causality problem does not affect the dynamics in the proper fluid dynamic regime.
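To make the structure of eq.(\[eq:landau\]) concrete, the following sketch evaluates the bracketed part of $S_{nn}$ and compares the Rayleigh and Brillouin peak heights as $\gamma$ grows; all parameter values are illustrative placeholders, not results of this work:

```python
def s_nn(omega, k, gamma, G_R, G_B, c_s):
    """Bracketed part of the Landau/I-S spectral function: a Rayleigh
    Lorentzian of relative weight (1 - 1/gamma) at omega = 0 and two
    Brillouin Lorentzians of relative weight 1/(2*gamma) at
    omega = +/- c_s*k.  The static prefactor <(delta n)^2> is omitted."""
    rayleigh = (1 - 1/gamma) * 2*G_R*k**2 / (omega**2 + (G_R*k**2)**2)
    brillouin = (G_B*k**2 / ((omega - c_s*k)**2 + (G_B*k**2)**2)
                 + G_B*k**2 / ((omega + c_s*k)**2 + (G_B*k**2)**2)) / gamma
    return rayleigh + brillouin

# Illustrative parameters (arbitrary units); k in 1/fm as in Fig. 1.
k, G_R, G_B, c_s = 0.1, 1.0, 1.0, 0.3
for gamma in (1.5, 10.0, 100.0):   # gamma grows toward the CP
    ratio = (s_nn(0.0, k, gamma, G_R, G_B, c_s)
             / s_nn(c_s*k, k, gamma, G_R, G_B, c_s))
    print(f"gamma = {gamma:6.1f}: Rayleigh/Brillouin peak ratio = {ratio:.2f}")
```

As $\gamma \rightarrow \infty$ near the CP the printed ratio grows, i.e. the Rayleigh peak dominates, which is the attenuation of the Brillouin peaks discussed in the next section.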
Critical behavior of the dynamical density fluctuations ======================================================= We examine the critical behavior of the spectral function of the density fluctuations around the QCD CP. We introduce the static critical exponents $\tilde{\gamma}$ and $\tilde{\alpha}$, which are defined as follows: $\tilde{c}_n = c_0 t^{-\tilde{\alpha}}$, $K_T = K_0 t^{-\tilde{\gamma}}$, where $t=\vert (T - T_c) / T_c \vert$ is a reduced temperature, $c_0$ and $K_0$ are constants, and $K_T=(1/n_0)(\partial n/\partial P)_T$ is the isothermal compressibility. We also denote the exponent of the thermal conductivity by $a_{\kappa}$, i.e., $\kappa =\kappa_0 t^{-a_{\kappa}}$, where $\kappa_0$ is a constant. It is known that $a_{\kappa} \sim 0.6$ around the liquid-gas phase transition point. [cc]{} ![The spectral function at $t\equiv (T-T_c)/T_c=0.5$ (left panel) and at $t=0.1$ (right panel) for $k=0.1$\[1/fm\]. The solid line represents the Landau/Israel-Stewart case, while the dashed line the TKO case. The strength of the Brillouin peaks due to the mechanical sound mode becomes smaller as $T$ approaches $T_c$ due to the singularity of the ratio of specific heats; the Brillouin peaks eventually die out as seen from the right panel, where the difference between the Landau and TKO cases is not seen anymore. Note that the scale of the vertical line in the right panel is much bigger than that of the left panel.[]{data-label="fig:t5-1"}](t05.eps){width="45mm"} ![The spectral function at $t\equiv (T-T_c)/T_c=0.5$ (left panel) and at $t=0.1$ (right panel) for $k=0.1$\[1/fm\]. The solid line represents the Landau/Israel-Stewart case, while the dashed line the TKO case. The strength of the Brillouin peaks due to the mechanical sound mode becomes smaller as $T$ approaches $T_c$ due to the singularity of the ratio of specific heats; the Brillouin peaks eventually die out as seen from the right panel, where the difference between the Landau and TKO cases is not seen anymore.
Note that the vertical scale in the right panel is much larger than that in the left panel.[]{data-label="fig:t5-1"}](t01.eps){width="45mm"} Unfortunately or fortunately, these singular behaviors of the width of the Brillouin peaks around the QCD CP may not be observed. As seen from eq.(\[eq:landau\]), the strengths of the Rayleigh and Brillouin peaks are given in terms of the ratio of the specific heats $\gamma = \tilde{c}_p / \tilde{c}_n \sim t^{-\tilde{\gamma}+\tilde{\alpha}}$, which diverges in the critical region. The strength of the Brillouin peaks is therefore attenuated and only the Rayleigh peak stands out in the critical region, as shown in Fig. \[fig:t5-1\]. Let $\xi=\xi_0t^{-\nu}$ be the correlation length, which diverges as the critical point is approached. If we denote the wavelength of the sound mode by $\lambda_s$, the fluid dynamic regime is characterized by $\xi \ll \lambda_s$, under which condition the sound mode can develop. In the vicinity of the critical point, however, the correlation length $\xi$ grows without bound, so the above inequality cannot be satisfied and the sound mode cannot develop. From this argument, we can speculate about the fate of the possible Mach cone formation [@Torrieri:2009mv] by a particle passing through the medium with a speed larger than the sound velocity $c_s$. Such Mach-cone-like particle correlations have been observed in the RHIC experiments[@star]. The disappearance or suppression of the Mach cone upon lowering the incident energy at RHIC would then be a signal of the existence of the QCD critical point, provided that the incident energy remains large enough to produce parton jets[@minami09]. Concluding remarks ================== In this report, the density fluctuations are analyzed using the relativistic fluid dynamic equations[@minami09].
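The attenuation mechanism invoked above, a Brillouin weight $\sim 1/\gamma$ against a Rayleigh weight $(\gamma-1)/\gamma$, can be made concrete with a small numerical toy model. The sketch below (Python; the peak widths and parameter values are purely illustrative assumptions, not the actual spectral function computed in this work) encodes the classic Landau-Placzek partition of spectral weight:

```python
import math

def lorentzian(x, width):
    """Unit-area Lorentzian centered at zero."""
    return width / math.pi / (x * x + width * width)

def spectral_function(omega, gamma, c_s=0.5, k=0.1, width_r=0.01, width_b=0.01):
    """Toy dynamical structure factor: a Rayleigh peak of weight (gamma-1)/gamma
    plus two Brillouin peaks at omega = +/- c_s*k, each of weight 1/(2*gamma)
    (the Landau-Placzek ratio). All widths and parameters are illustrative only."""
    rayleigh = (gamma - 1.0) / gamma * lorentzian(omega, width_r)
    brillouin = (lorentzian(omega - c_s * k, width_b)
                 + lorentzian(omega + c_s * k, width_b)) / (2.0 * gamma)
    return rayleigh + brillouin
```

As $\gamma$ is increased, the sound peaks at $\omega = \pm c_s k$ are suppressed while the central peak grows, mimicking the qualitative behavior seen in Fig. \[fig:t5-1\].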
We have suggested that a suppression or disappearance of the Mach cone formation upon lowering the incident energy at RHIC can be a signal of the QCD CP. We have also derived Israel-Stewart-type fluid dynamic equations rigorously on the basis of the (dynamical) renormalization group method, but omit them here for lack of space. For the details, we refer to the submitted paper[@Tsumura:2009vm], where it is shown that the transport coefficients have no frame dependence while the relaxation times are generically frame-dependent in the derived equations. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by a Grant-in-Aid for Scientific Research by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan (No. 20540265), and by the Grant-in-Aid for the global COE program “The Next Generation of Physics, Spun from Universality and Emergence” from MEXT. [00]{} T. Kunihiro, Phys. Lett. B [**271**]{} (1991), 395. H. Fujii, Phys. Rev. D [**67**]{} (2003), 094018; H. Fujii and M. Ohtani, Phys. Rev. D [**70**]{} (2004), 014016; D. T. Son and M. A. Stephanov, Phys. Rev. D [**70**]{} (2004), 056001. K. Ohnishi and T. Kunihiro, Phys. Lett. B [**632**]{} (2006), 252. L. E. Reichl, [*A Modern Course in Statistical Physics*]{} (Wiley-Interscience, 1998). Y. Minami and T. Kunihiro, arXiv:0904.2270 \[hep-th\]. L. D. Landau and E. M. Lifshitz, [*Fluid Mechanics*]{} (Pergamon, New York, 1959). C. Eckart, Phys. Rev. [**58**]{} (1940), 919. W. Israel and J. M. Stewart, Ann. Phys. (N.Y.) [**118**]{} (1979), 341. W. A. Hiscock and L. Lindblom, Phys. Rev. D [**31**]{} (1985), 725. K. Tsumura, T. Kunihiro and K. Ohnishi, Phys. Lett. B [**646**]{} (2007), 134; K. Tsumura and T. Kunihiro, Phys. Lett. B [**668**]{} (2008), 425. J. Casalderrey-Solana et al., J. Phys. Conf. Ser. [**27**]{} (2005), 22. L. M. Satarov et al., Phys. Lett. B [**627**]{} (2005), 64. J.
Adams [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett.  [**95**]{} (2005), 152301; S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett.  [**97**]{} (2006), 052301; B. I. Abelev [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett.  [**102**]{} (2009), 052302. K. Tsumura and T. Kunihiro, arXiv:0906.0079 \[hep-ph\].
--- abstract: '[We find the symmetry algebras of cosets which are generalizations of the minimal-model cosets, of the specific form $\frac{SU(N)_{k} \times SU(N)_{\ell}}{SU(N)_{k+\ell}}$. We study this coset in its free field limit, with $k,\ell \rightarrow \infty$, where it reduces to a theory of free bosons. We show that, in this limit and at large $N$, the algebra $\W^e_\infty[1]$ emerges as a sub-algebra of the coset algebra. The full coset algebra is a larger algebra than conventional $\W$-algebras, with the number of generators rising exponentially with the spin, characteristic of a stringy growth of states. We compare the coset algebra to the symmetry algebra of the large $N$ symmetric product orbifold CFT, which is known to have a stringy symmetry algebra labelled the ‘higher spin square’. We propose that the higher spin square is a sub-algebra of the symmetry algebra of our stringy coset.]{}' author: - Dushyant Kumar - Menika Sharma title: Symmetry Algebras of Stringy Cosets --- Introduction ============ The papers [@Gaberdiel:2012] established a duality between the CFT of the coset model \[specialcoset\] $\frac{SU(N)_{\ell} \times SU(N)_{1}}{SU(N)_{\ell+1}}\,,$ and three-dimensional higher-spin Vasiliev theory [@vas1], in the large $N,\ell$ limit. This duality is characterized by a large symmetry algebra $\W_\infty[\mu]$ on the CFT side which is interpreted as the asymptotic symmetry algebra on the bulk side [@Campoleoni:2010]. The parameter $\mu=\frac{N}{\ell+N}$ is the ’t Hooft coupling on the CFT side, while on the bulk side it determines the mass of the scalar field. The $\W_\infty[\mu]$ algebra consists of generators of spin $2$ to $\infty$, each generator having multiplicity one, and the commutation relations of these generators depend on the parameter $\mu$. Our aim is to find the symmetry algebra for the coset \[gencoset\] $\frac{SU(N)_{k} \times SU(N)_{\ell}}{SU(N)_{k+\ell}}\,,$ which is a generalization of the coset in [Eq. (\[specialcoset\])]{}. The central charge for this coset is $c = (N^2-1)\left[\frac{k}{k+N} + \frac{\ell}{\ell+N} - \frac{k+\ell}{k+\ell+N}\right]$. The coset in [Eq.
(\[gencoset\])]{} has three independent parameters and two interesting limits. The first limit arises on taking $N$ and $\ell$ to infinity while holding $k$ and $\mu=\frac{N}{N+\ell}$ fixed. The central charge then reduces to $c \simeq k N (1- \mu^2)$. Since the central charge scales as $\sim N$, in this limit the coset in [Eq. (\[gencoset\])]{} is usually referred to as a vector coset model. This vector coset model was studied in Refs. [@Creutzig:2013], where a related, but different, coset model $SU(k+\ell)_N/SU(\ell)_N$ was proposed as the CFT dual to a bulk Vasiliev theory with a matrix extension. There exist other limiting procedures which result in a central charge of the coset in [Eq. (\[gencoset\])]{} which scales as $N^2$. One way is to take $N$ and $k, \ell$ to infinity while holding $k-\ell$ and $\mu=\frac{N}{k+\ell+N}$ fixed. In this case, the central charge scales as $\sim N^2(1- \frac{1}{\mu})$. A variation of this, which is the limiting procedure we use in this paper, is to take $N$ and $k, \ell$ to infinity with $N/\ell$ set to zero and \[lambdac\] $\lambda = \frac{N}{N+k}$ held fixed. The coset central charge is then $c \simeq N^2 \frac{k}{k+N} = N^2(1-\lambda)$. Since the central charge scales as in a matrix model, in this limit we expect the coset in [Eq. (\[gencoset\])]{} to have a string dual and we refer to it in the text as the stringy $SU(N)$ coset. In this paper, we will study the symmetry algebra of this stringy $SU(N)$ coset in the limit where $k$ and $\ell$ go to infinity, but keep $N$ finite. Thus, we are not determining the algebra explicitly at large $N$. However, from the general behavior of the algebra at finite $N$, we can infer many of its properties at infinite $N$, which we will elaborate on in the text. In particular, this method tells us the properties of the large $N$ coset algebra at $\lambda=0$, with $\lambda$ defined as in [Eq. (\[lambdac\])]{}. Historically, the algebra for the coset model in [Eq. (\[specialcoset\])]{} was first studied at finite $N$ before the infinite $N$ case was dealt with.
The finite $N$ algebra is called $\W_N$ [@Zamolodchikov:1985] and has generators of spin ranging from $2$ to $N$, each with multiplicity one. Indeed, it has taken many years to completely understand the large $N$ limit of the $\W_N$ algebra [@Bakas:1990; @Gaberdiel:2012t; @Linshaw:2017]. The algebra of the coset in [Eq. (\[gencoset\])]{} for small $N$ and large $k,\ell$ has been studied before in Refs. [@Bouwknegt:1992wg; @Bais:1987a; @Bais:1987b], although it has only attracted a fraction of the attention that the $\W_N$ algebra has, and perhaps rightly so. $\W_N$ algebras, which are extensions of the Virasoro algebra, have complicated commutation relations but a simple spectrum of fields. In contrast, the symmetry algebras of the coset theories in [Eq. (\[gencoset\])]{} have a spectrum of generators with the multiplicity climbing at an exponential rate with the spin (the algebra still has, of course, a finite number of generators at finite $N$). Unlike their $\W_N$ counterparts, these algebras belong to a class of algebras which are finitely non-freely generated [@deBoer:1993] and thus are less tractable. On the other hand, the fact that this coset model and its supersymmetric generalizations have generators whose multiplicity increases with spin makes them prime candidates to be dual to string theories in AdS. It is with this motivation that we study them in this paper. We exclusively work with the bosonic coset in [Eq. (\[gencoset\])]{}, so that we can study the symmetry algebra in its simplest form. An $\N=2$ supersymmetric generalization of the coset in [Eq. (\[gencoset\])]{} was studied in [@Gopakumar:2012]. Related work for a coset with $\N=1$ supersymmetry appears in Refs. [@Ahn:1990; @Ahn:2012]. However, a crucial distinction between our analysis and the supersymmetric cases studied is that we are working in the limit of zero coupling, with a free theory.
The coset theories that we study are similar to $SU(N)$ gauge theories in four dimensions which are known to have string duals on the $AdS_5$ background. However, string theories on $AdS_3$ are expected to be dual to a different family of CFTs: symmetric product orbifolds. In this paper, we explore the relation between the symmetry algebra of the bosonic symmetric product orbifold theory and the coset theory. To be able to do this, we explicitly write down the currents of the coset theory. For the coset in [Eq. (\[specialcoset\])]{} with level $k=1$, the currents of the $\W$-algebra correspond to Casimir operators of $SU(N)$. For the more general coset theory, currents of the $\W$-algebra can be generated from the Casimir operators by sprinkling additional derivatives on the constituent currents. We construct these currents in Section \[secCurrents\]. However, as we will see, the coset theory also has additional currents that cannot be constructed from the Casimir operators. Information about the generators of any coset theory resides in the vacuum character of the partition function of the theory. In Section \[secGrowth\], we write down the vacuum character of the coset in [Eq. (\[gencoset\])]{}. Unlike the case $k=1$, it is not possible to formulate this character in closed form for general $k$ and $N$ and it can only be expressed in terms of string functions. We therefore resort to numerical techniques to find the generators of the algebra from its vacuum character for low values of $N$, following Ref. [@Bouwknegt:1992wg]. Later in Section \[secCurrents\], when we explicitly construct the currents for finite $N$, the calculation in Section \[secGrowth\] serves as a touchstone for our results. This paper is organized as follows. In [Sec. \[secGrowth\]]{} we find the low lying spectrum of the symmetry algebra of the coset in [Eq. (\[gencoset\])]{} for small values of $N$ in the large $k,\ell$ limit. In [Sec. 
\[secCurrents\]]{} we construct the currents for this same coset for the special values $N=2$ and $N=3$. The $N=3$ case is especially important for understanding the structure of the currents at general $N$ and we present this case in some detail. In [Sec. \[secCurrents\]]{}, we also work out the relation of the coset algebra to the algebra $\W_\infty[\mu]$ and also its relation to the higher spin square. The algebra of the symmetric product orbifold at general values of $N$ is worked out in Appendix C. Perturbative growth of states {#secGrowth} ============================= In this section, we compute the vacuum character of the coset model in [Eq. (\[gencoset\])]{} at finite $N$, with $k,\ell \rightarrow \infty$. This computation will tell us at what rate the perturbative states of the current algebra grow with the spin. The density of states of a CFT partition function in the regime of large spin $s$ but $s< c$ determines the dual holographic theory. Since, in this paper, we are only interested in the symmetry algebra of the coset in [Eq. (\[gencoset\])]{}, we will focus on the vacuum character. We will not be able to determine the vacuum character exactly but will compute the low-lying spectrum of the symmetry algebra. We carry this out in Sec. \[growthN\] and our results appear in Table \[t:1\]. It is also of interest to compute the vacuum character at fixed $k$ at finite $N$, since this helps us to understand the nature of the symmetry algebra of our stringy coset. We do this in Sec. \[growthk\] and the results appear in Tables \[t:2\] and  \[t:3\]. In Sec. \[asymptoticN\], we determine the asymptotic growth of states of the vacuum character. In the following, we describe the method we use to compute the vacuum character. It is well known that the $\W$-algebra of the coset $\mathfrak{g}_k/\mathfrak{g}$ is the same as the $\W$-algebra of the coset $(\mathfrak{g}_k \oplus \mathfrak{g}_\ell)/\mathfrak{g}_{k+\ell}$ in the $\ell \rightarrow \infty$ limit. 
Therefore, to find the symmetry algebra for the coset as $\ell \rightarrow \infty$, we find the algebra of the coset model \[coset2\] $\frac{SU(N)_k}{SU(N)}\,.$ We will use the so-called “character technique”. To find the coset symmetry algebra we need to look at the vacuum character, so we work out the branching function $b_\l^\L(q) $ for the weights: $\L=(k,0,\cdots)$ and $\l=(0,0,\cdots)$. This will give us a series in the variable $q$. We can rearrange this series as \[fvac\] $b_\l^\L(q) = \frac{1- j\, q^n - \cdots}{\prod_{i=1}^{l} F_{s_i}}\,,$ where $F_{s} \equiv \prod_{k=s}^{\infty} (1-q^k)\,.$ [Eq. (\[fvac\])]{} is the general form of the vacuum character for an algebra with fields of spins $s_i$, where $i$ ranges from $1$ to $l$ and with $j$ null states starting at order $n$. Here, $l$ is the total number of generators of the algebra. Note that the character technique is not fool-proof. The actual algebra may have additional currents, since we can always add currents to the denominator of [Eq. (\[fvac\])]{}, while at the same time increasing the number of null states to keep the vacuum character unchanged. Nevertheless, studying the vacuum character gives us a good indication of the nature of the algebra. We also restrict our attention to the vacuum character and ignore any extensions of the coset algebra at specific values of the level $k$. The branching function for the coset in [Eq. (\[coset2\])]{} is given by \[branching1\] $b_\l^\L(q) \equiv \mathrm{Tr}_{L_{\L,\l}}\, q^{L_0 -c/24} = \sum_{w\in W} \epsilon(w)\, c^\L_{w(\l+\rho)-\rho+ k\L_0}(q)\; q^{\frac{1}{2k} |w(\l+\rho)-\rho|^2}\,,$ where the $c^\L_\l(q)$ are the Kac-Peterson string functions defined in [Eq. (\[stringfunction\])]{}. Algebra for small $N$ {#growthN} ----------------------- We now take $k$ to $\infty$. Then the branching function in [Eq. (\[branching1\])]{} reduces to \[branching2\] $b_\l^\L(q) = \sum_{w\in W} \epsilon(w)\, c^\L_{w(\l+\rho)-\rho+ k\L_0}(q)\,.$
The string functions are given by \[stringfunction\] $c^\L_\l(q) = \frac{q^{-c/24}}{\prod_{n\geq 1}(1-q^n)^{{\rm dim}\,\mathfrak{g}}} \sum_{w\in \hat{W}} \epsilon(w)\, q^{h_{w*\L,\l}} \sum_{ \{ n_\a \,|\, \sum_{\a\in\Delta_+} n_\a \a = w*\L-\l\} } \Big(\prod_{\a\in\Delta_+} \phi_{n_\a}(q) \Big)\,,$ where we have introduced $\phi_n(q) = \sum_{m\geq 0} (-1)^m q^{\frac{m(m+1)}{2} +nm}\,, \qquad \phi_{-n}(q) = q^n \phi_n(q)\,,$ $h_{\L,\l} = \frac{(\L,\L+2\rho)}{2(k+h^\vee)} - \frac{(\l,\l)}{2k}\,,$ and $w * \L = w(\L+ \rho) - \rho$. Here $\Delta_+$ denotes the set of positive roots of $\mathfrak{g}$. In the large $k$ limit, the sum in [Eq. (\[stringfunction\])]{} over the affine Weyl group elements will reduce to a sum over the finite Weyl group elements. Thus the expression in [Eq. (\[stringfunction\])]{} simplifies to \[stringfunction1\] $c^\L_\l(q) = \frac{q^{-c/24}}{\prod_{n\geq 1}(1-q^n)^{{\rm dim}\,\mathfrak{g}}} \sum_{w\in W} \epsilon(w) \sum_{ \{ n_\a \,|\, \sum_{\a\in\Delta_+} n_\a \a = w*\L-\l\} } \Big(\prod_{\a\in\Delta_+} \phi_{n_\a}(q) \Big)\,.$ ### Algebra for stringy $SU(2)$ coset In this case, there are two Weyl group elements: $1$ and $w_{\alpha_1}$. Thus, the branching function in [Eq. (\[branching2\])]{} becomes $b_{(0)}^{(k,0)}(q) = c^{(k,0)}_{(k,0)}(q) - c^{(k,0)}_{(k-2,2)}(q)\,.$ In the limit $k\rightarrow \infty$, the string functions appearing in the RHS of the above expression are given by $c^{(k,0)}_{(k,0)}(q) = \frac{q^{-c/24}}{\prod_{n\geq 1}(1-q^n)^{3}} \left\{\phi_0(q) - \phi_{-1}(q) \right\}$ and $c^{(k,0)}_{(k-2,2)}(q) = \frac{q^{-c/24}}{\prod_{n\geq 1}(1-q^n)^{3}} \left\{\phi_{-1}(q) - \phi_{-2}(q) \right\}\,.$ These expressions can be read off from [Eq. (\[stringfunction1\])]{}. Working out (and rearranging) the branching function we get $b_{(0)}^{(k,0)}(q) = \frac{1-q^{13} -3 q^{14} -7 q^{15}- \cdots}{F_2 F_4 F_6^2 F_8^2 F_9 F_{10}^2 F_{12}}\,.$ Thus the coset $\frac{SU(2)_k}{SU(2)}$ has a symmetry algebra with generators of spin \[SU2currents\] $2,4,6^2,8^2,9,10^2, 12$ in the large $k$ limit. ### Algebra for stringy $SU(3)$ coset The branching function in [Eq.
(\[branching2\])]{} for the case of the coset $\frac{SU(3)_k}{SU(3)}$ becomes $b_{(0,0)}^{(k,0,0)}(q) = c^{(k,0,0)}_{(k,0,0)}(q) - 2 c^{(k,0,0)}_{(k-2,1,1)}(q) + c^{(k,0,0)}_{(k-3,3,0)}(q) + c^{(k,0,0)}_{(k-3,0,3)}(q) - c^{(k,0,0)}_{(k-4,2,2)}(q)\,.$ Defining $\zeta(q) = \frac{q^{-c/24}}{\prod_{n\geq 1}(1-q^n)^{8}}\,,$ and calculating the string functions in the limit $k\rightarrow \infty$ we find $$\begin{aligned} c^{(k,0,0)}_{(k,0,0)}(q) &=\zeta(q) \sum_{n\in\mathbb{Z}}\phi_{-n}(q) \big\{ 2\phi_{n-1}(q)\phi_{n-2}(q)-2\phi_{n-1}(q)\phi_{n}(q)-\phi_{n-2}(q)^2+\phi_{n}(q)^2 \big\}\,,\nonumber \\ c^{(k,0,0)}_{(k-2,1,1)}(q) &=\zeta(q) \sum_{n\in\mathbb{Z}}\phi_{-n}(q) \big\{ 2\phi_{n-2}(q)\phi_{n-3}(q)-2\phi_{n-1}(q)\phi_{n-2}(q)-\phi_{n-3}(q)^2+\phi_{n-1}(q)^2 \big\}\,,\nonumber\\ c^{(k,0,0)}_{(k-3,3,0)}(q) & = \zeta(q)\sum_{n\in\mathbb{Z}}\phi_{-n}(q) \big\{\phi_{n-4}(q)\phi_{n-2}(q)-\phi_{n-4}(q)\phi_{n-3}(q)-\phi_{n-3}(q)\phi_{n-1}(q) \nonumber\\[-3\jot] &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\phi_{n-3}(q)^2+ \phi_{n-2}(q)\phi_{n-1}(q)-\phi_{n-2}(q)^2 \big\}\,,\nonumber\\ c^{(k,0,0)}_{(k-4,2,2)}(q) &= \zeta(q) \sum_{n\in\mathbb{Z}}\phi_{-n}(q) \big\{2\phi_{n-3}(q)\phi_{n-4}(q)-2\phi_{n-2}(q)\phi_{n-3}(q)-\phi_{n-4}(q)^2+\phi_{n-2}(q)^2 \big\}\,.\end{aligned}$$ In addition, $c^{(k,0,0)}_{(k-3,0,3)}(q) = c^{(k,0,0)}_{(k-3,3,0)}(q)\,.$
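As a numerical cross-check of the string-function manipulations in this section, the $SU(2)$ branching function can be evaluated directly from truncated $\phi_n$ series. The sketch below (Python; the helper names and the truncation at $q^8$ are our own choices, and the overall $q^{-c/24}$ prefactor is dropped) reproduces the $N=2$ vacuum character listed in Table \[t:1\]:

```python
N_MAX = 8  # truncate every series at q^8

def phi(n, n_max=N_MAX):
    """phi_n(q) = sum_{m>=0} (-1)^m q^{m(m+1)/2 + n m} as a coefficient list;
    for n < 0 we use the relation phi_{-n}(q) = q^n phi_n(q)."""
    if n < 0:
        return ([0] * (-n) + phi(-n, n_max))[:n_max + 1]
    c = [0] * (n_max + 1)
    m = 0
    while m * (m + 1) // 2 + n * m <= n_max:
        c[m * (m + 1) // 2 + n * m] += (-1) ** m
        m += 1
    return c

def mul(a, b, n_max=N_MAX):
    """Product of two truncated power series."""
    c = [0] * (n_max + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j <= n_max:
                c[i + j] += ai * bj
    return c

def inv_eta_cubed(n_max=N_MAX):
    """1 / prod_{n>=1} (1-q^n)^3, i.e. 3-colored partition numbers."""
    c = [1] + [0] * n_max
    for k in range(1, n_max + 1):
        for _ in range(3):
            for n in range(k, n_max + 1):
                c[n] += c[n - k]
    return c

# b_{(0)}^{(k,0)} ~ (phi_0 - 2 phi_{-1} + phi_{-2}) / prod_{n>=1} (1-q^n)^3
numerator = [x - 2 * y + z for x, y, z in zip(phi(0), phi(-1), phi(-2))]
vacuum = mul(numerator, inv_eta_cubed())
# -> [1, 0, 1, 1, 3, 3, 8, 9, 19], matching the N=2 row of Table 1
```

The same routine, with the appropriate $\phi$-combinations and $(1-q^n)^{8}$ prefactor, handles the $SU(3)$ expressions above.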
$N$ Vacuum Character Algebra ----- ----------------------------------------------------------------------- --------------------------------------------- $2$ $1 + q^2 + q^3 + 3 q^4 + 3 q^5 + 8 q^6 + 9 q^7 + 19 q^8 + \cdots$ $2, 4, 6^2,8^2,\cdots\,.$ $3$ $1 + q^2 + 2 q^3 + 4 q^4 + 6 q^5 + 15 q^6 + 22 q^7 + 46 q^8 + \cdots$ $2, 3, 4, 5, 6^4, 7^2, 8^7,\cdots$ $4$ $1 + q^2 + 2 q^3 + 5 q^4 + 7 q^5 + 18 q^6 + 29 q^7 + 64 q^8 + \cdots$ $2, 3, 4^2, 5, 6^5, 7^4 , 8^{12},\cdots $ $5$ $1 + q^2 + 2 q^3 + 5 q^4 + 8 q^5 + 19 q^6 + 32 q^7 + 71 q^8 + \cdots$ $2, 3, 4^2, 5^2, 6^5, 7^5, 8^{14},\cdots $ $6$ $1 + q^2 + 2 q^3 + 5 q^4 + 8 q^5 + 20 q^6 + 33 q^7 + 74 q^8+ \cdots$ $2, 3, 4^2, 5^2, 6^6, 7^5, 8^{15},\cdots $ $7$ $1 + q^2 + 2 q^3 + 5 q^4 + 8 q^5 + 20 q^6 + 34 q^7 + 75 q^8+ \cdots$ $2, 3, 4^2, 5^2, 6^6, 7^6, 8^{15},\cdots $ : \[t:1\]The vacuum character for the stringy coset model for small values of $N$. The corresponding algebra appears in the third column. The central charge of the coset is related to $N$ by $c=N^2-1$. Note that the vacuum character (and hence the algebra) stabilizes up to order $q^N$: the generators up to spin $N$ do not change on further increasing $N$. Working out the branching function we get \[n3char\] $b_{(0,0)}^{(k,0,0)}(q) = \frac{1 - 24 q^{17} -137 q^{18}- 404 q^{19}-\cdots}{\prod_i F_{s_i}}\,.$ The spin of the generators of the algebra and their multiplicity can now be read off from the denominator. Our results for the $N=2$ and $N=3$ cases agree with those in Ref. [@Bouwknegt:1992wg]. We can work out the vacuum character for the coset $\frac{SU(4)_k}{SU(4)}$, in a similar fashion, and we find that the algebra has generators of spin: \[VCSU4\] $2, 3, 4^2, 5, 6^5, 7^4, 8^{12}, 9^{15}, 10^{28}, 11^{41}, 12^{75}, 13^{103}, 14^{166}, 15^{235}, 16^{313}, 17^{362}, 18^{310}\,.$
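The step from a vacuum character to a list of generators is easy to automate. The sketch below (Python; function names are our own) strips one factor $F_s$ per generator by multiplying the truncated character by $\prod_{k\geq s}(1-q^k)$, reading the multiplicity of spin-$s$ generators from the residual coefficient of $q^s$; it is only reliable below the order where null states first appear, where residual coefficients would turn negative:

```python
def mul(a, b, n_max):
    """Product of two power series truncated at q^n_max."""
    c = [0] * (n_max + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j <= n_max:
                c[i + j] += ai * bj
    return c

def euler_factor(s, n_max):
    """prod_{k=s}^{n_max} (1 - q^k), truncated at q^n_max."""
    p = [1] + [0] * n_max
    for k in range(s, n_max + 1):
        for i in range(n_max, k - 1, -1):
            p[i] -= p[i - k]
    return p

def peel_generators(vacuum, n_max):
    """Read off generator multiplicities spin by spin: once all factors for
    spins < s are removed, the coefficient of q^s counts the new spin-s
    generators (a negative residual would signal a null state)."""
    gens, residual = {}, list(vacuum)
    for s in range(1, n_max + 1):
        m = residual[s]
        if m > 0:
            gens[s] = m
            factor = euler_factor(s, n_max)
            for _ in range(m):
                residual = mul(residual, factor, n_max)
    return gens

# N=2 row of Table 1: 1 + q^2 + q^3 + 3q^4 + 3q^5 + 8q^6 + 9q^7 + 19q^8
content = peel_generators([1, 0, 1, 1, 3, 3, 8, 9, 19], 8)
# -> {2: 1, 4: 1, 6: 2, 8: 2}, i.e. the generator content 2, 4, 6^2, 8^2
```

This reproduces, from the series alone, the third column of Table \[t:1\] for $N=2$.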
Using Mathematica, we have worked out the symmetry algebra for the coset $\frac{SU(N)_k}{SU(N)}$ in the large $k$ limit up to $N=7$ and up to generators of spin $8$. The results appear in Table \[t:1\]. As we can see from the table, the currents up to spin $N$ stop changing as $N$ is further increased. We, therefore, expect the algebra at $N=\infty$ to have the following low-lying spectrum of generators: \[VCSUN\] $2,3,4^2,5^2,6^6,7^6,\cdots\,.$ Algebra for cosets with finite $k$ {#growthk} ----------------------------------- To better understand the coset algebra in the infinite $k, \ell$ limit, it is instructive to find the algebra of the coset when $\ell$ is large but $k$ is fixed to a given value. The vacuum character for such a coset is given by [Eq. (\[branching1\])]{} and the string functions continue to be given by [Eq. (\[stringfunction\])]{}. The string functions for a fixed level can be calculated in Mathematica using the package affine.m [@Nazarov:2011mv]. In Table \[t:2\] we list the algebra for various $N$ for $k=3$. As can be seen, even for this low value of the level, the number of generators of the algebra grows quickly with the spin. In fact, for a fixed value of $N$, the coset algebra stabilizes for a small value of the level $k$ — that is, the coset generators do not change after a certain level. This fact was earlier reported in [@Blumenhagen:1991]. As we show in Table \[t:3\], for $N=3$ the algebra has stabilized at level $8$. The field content at this level is identical to the field content at level $k=\infty$ calculated in [Eq. (\[n3char\])]{}. Note that the number of null states continues to change, and specifically decreases, as we increase the level to infinity. The null states, however, never disappear from the spectrum and are present even in the infinite level limit. Note that the growth rate of currents at finite $k$ is sharper than one might expect from the $T$-dual coset $SU(k+\ell)_N/(SU(k)_N \times SU(\ell)_N )$.
The symmetry algebra of this dual coset is expected to be (a truncation of) the matrix extension of $\W_N$ in the large $\ell, N$ limit for fixed $k$ — so the multiplicity for a given spin would be at most $k^2$. However, it is possible for dual cosets to have different symmetry algebras [@Bowcock:1988]. Asymptotic growth of vacuum character {#asymptoticN} -------------------------------------- The asymptotic growth of the vacuum character can be determined from the general formula for the asymptotic behaviour of branching functions [@Kac:1988]. Let us write the general branching function as $b_\l^\L(q) = \sum_s a_s q^s\,.$ Then, asymptotically as $s\rightarrow \infty$, $a_s \sim (c/6)^{1/4}\, b(\L,\l)\, s^{-3/4} \exp\big({\pi \sqrt{2/3\, c\, s}}\big)\,,$ where $ b(\L,\l)$ is a positive real number and $c$ is the central charge. Thus, in our case: $a_s \sim s^{-3/4} \exp\big( \pi\sqrt{{\tfrac{2}{3}N^2 s}}\big)\,,$ where we have dropped the constants. Although there is an exponential increase in the number of currents at small spin $s$, a large number of null states occur once $s$ becomes greater than $N^2$ for the vacuum branching function, as we saw in the previous section. Hence, the vacuum character has Cardy growth at large $s$. Note that this asymptotic behaviour holds only for finite $N$ and may change for infinite $N$. Generalized Casimir Current Algebra {#secCurrents} =================================== In this section, we write down the explicit form of the currents for the stringy coset in [Eq. (\[coset2\])]{}. This coset and the associated current algebra have been studied in Refs. [@Bouwknegt:1992wg; @Bais:1987a]. We review the known facts about the current algebra for this coset at arbitrary level $k$ in [Sec. \[sec:generalk\]]{}. We are interested in the current algebra in the limit of large level $k$. We show in [Sec. \[sec:largek\]]{} that in this limit, the coset theory reduces to a theory of free bosons.
We also demonstrate that the number of currents grows with spin as expected from the vacuum character calculation in the previous section. We do this for the $N=2$ and $N=3$ cases in Secs. \[sec:su2\] and \[sec:su3\] respectively. Extrapolating from these results, we write down the general form of the current algebra generators in the large $N$ limit in [Sec. \[sec:largeN\]]{}. We will identify the simplest of these generators with the free field realization of the $\W^e_\infty[1]$ algebra. The algebra $\W^e_\infty[\mu]$ is an infinite-dimensional sub-algebra of $\W_\infty[\mu]$, which consists of fields of even spin only [@Candu]. In [Sec. \[sec:Relation\]]{}, we will show that a subset of the generators of our stringy coset can be arranged in representations of $\W^e_\infty[\mu]$. This is evocative of the higher spin square, which is the symmetry algebra of the large $N$ symmetric product orbifold theory [@Gaberdiel:2015]. We will remark on this correspondence between the generators of the higher spin square and the coset theory in [Sec. \[sec:Relation\]]{}. Here, we first establish background facts that we need to determine the currents for the coset $SU(N)_k/SU(N)$. For any coset algebra $G/H$, the generators are the currents of $G$ that commute with those of $H$. The generators of our coset algebra are composed from the $SU(N)_k$ generators: $J^a$, where $a$ varies from $1$ to $N^2-1$. These affine algebra generators, which transform in the adjoint representation of $SU(N)$, satisfy the following operator product expansion: \[Jope\] $J^a(z)J^b(w) = \frac{k\,\delta^{ab}}{(z-w)^2} + \frac{i f^{abc}\, J^c(w)}{z-w} + \cdots\,.$ Repeated indices will always imply summation, regardless of the placement of the indices. The generators of the coset algebra are those operators of $SU(N)_k$ that commute with the $SU(N)$ currents, which are given by the zero modes of the affine currents. Thus, if $Q(z)$ is a generator of the coset algebra, then $\big[J^0_{a}, Q(z)\big]=0$. As is shown in Sec. 7.2.1 of Ref.
[@Bouwknegt:1992wg], this implies that $Q(z)$ must be a differential polynomial invariant in the $SU(N)$ currents. The first such invariant is the stress-energy tensor $T(z)\sim {:\mathrel{\mkern2mu J^a J^a \mkern2mu}:}\,,$ which is the quadratic Casimir of $SU(N)$ defined up to an overall normalization; the $:\cdots:$ symbol denotes normal ordering. It is a quasi-primary field of conformal dimension two. We will find that the coset algebra currents, in general, take a simple form in the quasi-primary basis. A quasi-primary field is defined as having the following commutator with the Virasoro modes of the stress-energy tensor \[qpdef\] $\big[L_m, Q_n\big] = \{ n- (d-1)m\}\,Q_{n+m}\,,$ where $m\in\{-1,0,1\}$. Here, $d$ is the conformal dimension of the field. A primary field on the other hand obeys [Eq. (\[qpdef\])]{} for all mode numbers $m$. As is well-known, the Casimir invariants are independent symmetric polynomial invariants of $SU(N)$. In general, the number of polynomial [*differential*]{} invariants for a group, which is the set of possible currents for the coset CFT, is much larger. General $k$ and $k=1$ {#sec:generalk} --------------------- At general level $k$ the stress energy tensor is given by \[Tdef\] $T=\frac{1}{2(k+N)}\,{:\mathrel{\mkern2mu J^a J^a \mkern2mu}:}\,.$ The coset currents for spin $3$ and $4$ for general level $k$ have been written down in Ref. [[@Bais:1987a]]{}, which we now review. At any level $k$ and for any $N$ there is always a single spin $3$ current of the form \[Q3def\] $Q_3 = \alpha\, d_{abc}\, {:\mathrel{\mkern2mu J^a J^b J^c \mkern2mu}:}\,.$ Here, $\alpha$ is a normalization factor that depends on $k$ and $N$, and $d_{abc}$ is the third-order invariant symmetric tensor for $SU(N)$. The above operator is, therefore, proportional to the third order Casimir of $SU(N)$. The normal ordering for three fields is defined as ${:\mathrel{\mkern2mu J^a J^b J^c \mkern2mu}:} = {:\mathrel{\mkern2mu J^a ({:\mathrel{\mkern2mu J^b J^c \mkern2mu}:}) \mkern2mu}:}$ and in a similar manner for operators consisting of more fields. Two primary spin $4$ currents were found for general $k$.
The first field is \[Q41\] {[::]{} - 3 [::]{}} + [::]{} +  [::]{}, where $\beta$ and $\gamma$ are numerical factors dependent on $N$ and $k$. The second field is given by \[Q4\] $Q_4 = (k+N)\, d_{abcd}\, {:\mathrel{\mkern2mu J^a J^b J^c J^d \mkern2mu}:}\,,$ where $d_{abcd}$ is the fourth-order invariant symmetric traceless tensor of $SU(N)$. In general, for $SU(N)$, there are $N-1$ primitive $d$-tensors of order $2,\cdots,N$. Each of these corresponds to a current. As we will show below, however, there are other $SU(N)$ tensor invariants relevant to constructing coset currents. The spin $4$ field in [Eq. (\[Q4\])]{} occurs in the OPE of $Q_3(z)$ and $Q_3(w)$: $$\begin{aligned} \label{Q3ope} Q_3(z)Q_3(w) =& \frac{c/3}{(z-w)^6} + \frac{2T(w)}{(z-w)^4} + \frac{\partial T(w)}{(z-w)^3} \nn \\ & + \frac{1}{(z-w)^2} \bigg\{\frac{32}{22+5c} {:\mathrel{\mkern2mu T(w)T(w) \mkern2mu}:} + \frac{3(c-2)}{10c+44}\partial^2 T(w) + Q_4(w) \bigg\} \nn\\ &+ \frac{1}{z-w} \bigg\{\frac{16}{22+5c} {:\mathrel{\mkern2mu \partial(T(w)T(w)) \mkern2mu}:} + \frac{c-6}{15c+66}\partial^3 T(w) + \frac{1}{2} \partial Q_4(w) \bigg\} + \cdots\end{aligned}$$ A well-known result is that for the coset $SU(N)_1/SU(N)$, primary fields with spin higher than $N$ are either null or vanish and that there is only a single field at a given spin, so that the algebra becomes identical to the $\W_N$ algebra. Let us show this for $N=3$, for the spin $4$ operators. The field in [Eq. (\[Q4\])]{} vanishes for $SU(3)$ as the tensor $d_{abcd}$ collapses to zero for $N<4$. The second spin $4$ field in [Eq. (\[Q41\])]{} can be written as \[Q34\] Q\^[N=3]{}\_[4]{} =  [::]{}-  [::]{}-   \^2 T . While in Ref. [[@Bais:1987a]]{} it was shown that the field in [Eq. (\[Q34\])]{} vanishes upon using an explicit realization of the Kac-Moody algebra in terms of free boson vertex operators (so that the Sugawara construction for $\W_N$ maps to the free field Miura realization), we can also directly compute the two-point function for $Q^{N=3}_{4}(z)$.
This is given, omitting overall numerical coefficients, by a function of $k$; the only non-zero integer value of $k$ for which it vanishes, so that the primary field $Q^{N=3}_{4}(z)$ becomes null, is $k=1$. The limit $k\rightarrow \infty$ {#sec:largek} ------------------------------- In this section, we will construct the coset currents in the large $k$ limit, which is the main objective of this paper. We will work at finite $N$ and then extrapolate our results to large $N$. The coset currents for finite $k$ explicitly depend on $k$. To remove this dependence, we will redefine the $SU(N)_k$ generators as follows: $J^a \rightarrow J^a/\sqrt{k}\,.$ As a result of this redefinition the stress-energy tensor in [Eq. (\[Tdef\])]{} becomes \[Tinfdef\] $T=\tfrac{1}{2}\,{:\mathrel{\mkern2mu J^a J^a \mkern2mu}:}$ in terms of the new generators. Similarly the spin $3$ current in [Eq. (\[Q3def\])]{} (and other higher spin currents) become $k$-independent under this redefinition. The OPE in [Eq. (\[Jope\])]{} between the Kac-Moody currents becomes $J^a(z)J^b(w) = \frac{\delta^{ab}}{(z-w)^2}\,,$ since the single-pole term is now suppressed by $\sqrt{k}$. The currents therefore become essentially free in the large $k$ limit and the theory behaves like a theory of $N^2-1$ free bosons. To maintain continuity with finite $k$, we will continue to take the coset currents to be $SU(N)$ invariants. The behavior of the theory in the large $k$ limit can also be motivated as follows. The $SU(N)_k$ current algebra can be written in terms of $N-1$ bosons and so-called “generating” parafermions [@Gepner:1987]. The bosonic fields are denoted by $\phi_i$, where $1\leq i \leq N-1$ and expressed as a vector $\boldsymbol{\phi}$. The parafermions $\psi_{\a}$ are fractional spin fields associated with the root lattice of $SU(N)$. Thus, here $\a$ labels the roots of $SU(N)$. The conformal dimension of these fields is given by $\Delta(\psi_{\a}) = 1- \frac{\a^2}{2k} = 1- \frac{1}{k}\,,$ where we have used the normalization $\a^2=2$.
In terms of these parafermions $\psi_{\a}$ and the bosonic field $\boldsymbol{\phi}$ the $SU(N)_k$ generators take the form \[ve1\] $$J_i(z) \sim i\,\sqrt{k}\; \partial_z \phi_i$$ when $J_i(z)$ belongs to the Cartan sub-algebra and the form \[ve2\] $$J_{\a}(z) \sim \sqrt{k}\; \psi_{\a}\, \exp\!\big( i\, \a\cdot\boldsymbol{\phi}/\sqrt{k} \big)$$ for the rest of the generators corresponding to the $N^2-N$ roots of $SU(N)$. For level $k=1$, the parafermions $\psi_{\a}$ have vanishing dimension and the generators reduce to the usual vertex operator representation of the current algebra in terms of $N-1$ bosonic fields. As $k\rightarrow \infty$, the parafermions are promoted to bosons (of spin one). The $\exp$ term in [Eq. (\[ve2\])]{} reduces to one and the form of the generators $J_{\a}(z)$ becomes similar to $J_i(z)$. This is often referred to in the literature [@Bakas:1990] as flattening of the $SU(N)_k$ algebra in the large level limit to a $U(1)^{N^2-1}$ algebra. In the following, we will write down the generators of the coset algebra in the infinite level limit for the cases $N=2$ and $N=3$. We will denote a quasi-primary field of spin $s$ by $Q_s$ and a primary field by $P_s$. The associated $N$ value should be clear from context. We will always define the fields up to an overall normalization. To find the quasi-primary and primary currents, we have used the Mathematica package OPEdefs [@Thielemans:1991]. ### $N=2$ {#sec:su2} The $N=2$ case was studied in Ref. [@deBoer:1993]. A set of classical currents for the stringy $SU(2)$ coset can be obtained by acting on the Casimir invariant $\Tr(J J)$ by derivatives. These currents are of the form \[bilinear\] $$\Tr(\partial^\mu J\, \partial^\nu J)\,.$$ In addition to the Casimir invariant, $SU(2)$ has cubic invariants given by \[trilinear\] $$\Tr\big( [\partial^\mu J, \partial^\nu J]\, \partial^\gamma J \big)\,.$$ We can count the number of these invariants. The number of bilinear terms is the number of ways one can divide an integer into exactly two parts. The number of trilinear terms is given by the generating series for the number of ways to divide an integer into three [*distinct*]{} parts. 
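These two counts are easy to verify by brute force. The following sketch (our illustration, not code from the paper) enumerates partitions of the spin into exactly two parts and into three distinct parts:

```python
# Brute-force check of the counting argument: at spin s, bilinear invariants
# Tr(d^mu J d^nu J) modulo reordering correspond to partitions of s into
# exactly two parts, and trilinear invariants to partitions of s into three
# distinct parts. Illustrative sketch only.

def partitions_exact(n, p, distinct=False, smallest=1):
    """Count partitions of n into exactly p parts (optionally all distinct)."""
    if p == 0:
        return 1 if n == 0 else 0
    count = 0
    k = smallest
    while k * p <= n:  # smallest part is k; remaining p-1 parts are >= k (or > k)
        count += partitions_exact(n - k, p - 1, distinct, k + 1 if distinct else k)
        k += 1
    return count

bilinear = [partitions_exact(s, 2) for s in range(2, 13)]
trilinear = [partitions_exact(s, 3, distinct=True) for s in range(2, 13)]
print("spin:      ", list(range(2, 13)))
print("bilinear:  ", bilinear)   # [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6]
print("trilinear: ", trilinear)  # [0, 0, 0, 0, 1, 1, 2, 3, 4, 5, 7]
```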
The number of ways to divide an integer into $p$ distinct parts is given by the generating function: \[kdistinct\] $$\frac{q^{p(p+1)/2}}{\prod_{n=1}^{p}(1-q^{n})}\,.$$ To get the independent terms we remove the total derivatives. Then the generating function for the classical currents is given by $$q^2+ q^4+ 2 q^6 + 2 q^8 + q^9 + 2 q^{10} + q^{11} + 2 q^{12} + \cdots.$$ To find the quantum algebra of the coset, we have to find the primary completion of the classical currents. Relations between the invariant tensors of $SU(2)$ will make some of these currents vanish. This together with the presence of null states will truncate the infinite set of currents to the finite set listed in [Eq. (\[SU2currents\])]{}, derived from the vacuum character. In fact, it can be shown that the first vanishing state arises at the order at which a syzygy (a relation between the invariants) is present. For more details regarding this, the reader is referred to [@deBoer:1993]. We now write down the explicit form of the currents up to an overall normalization. The stress-energy tensor of the coset is given by [Eq. (\[Tinfdef\])]{}. There is no spin $3$ current. We have a single primary field of dimension $4$ which is of the form \[n2p4\] P\_4 = (:\^2 J\^a J\^a: -  :J\^a J\^a: )- :T T: + \^2 T . Note that the first term in brackets is a quasi-primary field of dimension $4$ while the rest are correction terms to make the field primary. Next we have two primary fields of spin $6$. The first is of the form in [Eq. 
(\[bilinear\])]{} and is given by $$\begin{aligned} \label{n2p61} P_{6,1} = {:\mathrel{\mkern2mu \partial^3 J^a \partial J^a \mkern2mu}:} &-{:\mathrel{\mkern2mu \partial^2 J^a \partial^2 J^a \mkern2mu}:} - \tfrac{1}{10}{:\mathrel{\mkern2mu \partial^4 J^a J^a \mkern2mu}:}\ \nn \\ &+ ~\alpha {:\mathrel{\mkern2mu TTT \mkern2mu}:} + ~\beta{:\mathrel{\mkern2mu \partial^2T T \mkern2mu}:} +~ \gamma {:\mathrel{\mkern2mu \partial T \partial T \mkern2mu}:} +~ \delta {:\mathrel{\mkern2mu Q_4 T \mkern2mu}:} +~\epsilon \,\partial^2 Q_4+ \zeta\, \partial^4 T \,.\end{aligned}$$ The term in the first line of the RHS is the associated quasi-primary field. The coefficients $\a,\b,\cdots$ are given in Appendix B. The second field is of the form [Eq. (\[trilinear\])]{} and is given by \[prim3\] $$P_{6,2} = \epsilon_{abc}\, {:\mathrel{\mkern2mu \partial^2 J^a \partial J^b J^c \mkern2mu}:}\,,$$ where $\epsilon_{abc}$ is the Levi-Civita tensor. The expression as written above is already a primary field of dimension $6$ and does not need any correction terms. Continuing in this manner, one can write down all operators of the algebra. As stated above, the primary fields start becoming null from spin $11$ and the algebra thus consists of a finite number of fields. ### $N=3$ {#sec:su3} For the stringy $SU(3)$ coset, the classical currents related to the Casimir invariants are of the form $$\begin{aligned} \label{su3casimirs} &\Tr(\partial^\mu J\partial^\nu J) \equiv \partial^\mu J^a \partial^\nu J^a \,,\nn\\ &\Tr(\{\partial^\mu J, \partial^\nu J\} \partial^\gamma J) \equiv d_{abc} \partial^\mu J^a \partial^\nu J^b \partial^\gamma J^c \,.\end{aligned}$$ Below we list all the currents up to spin $6$. The lowest spin operators having the structure in [Eq. (\[su3casimirs\])]{} are the stress-energy tensor and the spin $3$ Casimir current: $$P_3 = d_{abc}\, {:\mathrel{\mkern2mu J^a J^b J^c \mkern2mu}:}\,.$$ The lowest spin currents with additional derivatives are the spin $4$ current of the form \[n3q4\] Q\_4= [::]{} - [::]{} and the spin $5$ current of the form Q\_5 = d\_[abc]{} ( [::]{} - [::]{} ) . 
At the next level there are two additional spin $6$ currents of the form \[b6\] Q\_[6,1]{} =[::]{} -[::]{} - [::]{} and \[s6\] Q\_[6,2]{} = d\_[abc]{} ( [::]{} -6 [::]{} + 6[::]{} ) . The currents as written are all quasi-primary, save for the spin $3$ current $P_3$, which is primary. The primary completion of these fields appears in Appendix B. Apart from the Casimir invariants, there are other invariants as well for $SU(3)$. The tensor $f_{abc}$ is a skew-symmetric invariant and it leads to currents of the form $$f_{abc}\, \partial^\mu J^a\, \partial^\nu J^b\, \partial^\gamma J^c\,.$$ Indeed the first such current is \[as6\] $$P_{6,3} = f_{abc}\, {:\mathrel{\mkern2mu \partial^2 J^a \partial J^b J^c \mkern2mu}:}\,,$$ which is a primary field of dimension $6$. This accounts for three of the four spin $6$ currents predicted by the vacuum character. To write the final spin $6$ current it is useful to look at the primary fields in the theory. Primary fields of the coset CFT can be divided into two categories: those that are $SU(3)$ singlets and those that are not. The vacuum character contains information about primary fields that are also $SU(3)$ singlets. Taking the vacuum character in [Eq. (\[n3char\])]{} and expanding in Virasoro characters gives \[VirExp\] $$(1-q)V_0(q) + V_2(q) + V_3(q) + V_4(q) + V_5(q) + 5 V_6(q) + 3 V_7(q) + 11 V_8(q) + \cdots\,,$$ where $V_h(q)$ is the character of the Virasoro algebra Verma module $$V_h(q) = q^h \prod_{j=1}^{\infty} \frac{1}{1-q^{j}}\,,$$ and we have assumed that the Virasoro characters are irreducible. From [Eq. (\[VirExp\])]{}, we can see that there should be a total of five primary fields of spin $6$ that are also $SU(3)$ singlets. These include the fields that are composite operators. Composite operators can also be divided into two categories: those that are composed of $SU(3)$ singlets and those that are composed of fields transforming non-trivially under $SU(3)$. An example of the first kind of operator is the following composite spin $6$ operator: P\_[6,4]{}=[::]{}+ [::]{} + [::]{}+ [::]{} + [::]{} + \^2 Q\_4 + \^4 T. 
This is a primary field if the coefficients $\a,\b,\gamma,\delta,\epsilon,\zeta$ take the values = ,  = - ,  =,  =,  =,  =-. To find the other $SU(3)$ invariants we look for primary fields that are not $SU(3)$ singlets. A spin $2$ field of this nature was introduced in Ref. [@Bais:1987a], and takes the form $$P^a_2 = d^{a}{}_{bc}\, {:\mathrel{\mkern2mu J^b J^c \mkern2mu}:}\,.$$ This is not the only possible primary field transforming in the $SU(3)$ adjoint rep. Fields of the schematic form $d_{abc} {:\mathrel{\mkern2mu \partial^\mu J^a \partial^\nu J^b \mkern2mu}:}$ can be potential primaries. The operator P\^a\_4 = $d^{a}{}_{bc}\big( {:\mathrel{\mkern2mu \partial^2 J^b J^c \mkern2mu}:} - \tfrac{3}{2} {:\mathrel{\mkern2mu \partial J^b \partial J^c \mkern2mu}:}\big)$ + [::]{} - [::]{} . is a (non-null) spin $4$ primary. In fact, we can generate primaries from the skew-symmetric tensor invariant $f_{abc}$ in the same manner. The field $$P^a_3 = f^{a}{}_{bc}\, {:\mathrel{\mkern2mu \partial J^b J^c \mkern2mu}:}$$ is a primary operator of dimension three. We can construct new $SU(3)$ singlets from such primary fields. It will not, however, always be the case that the operators generated in this way are distinct from the singlets already constructed or are not null. The composite spin $4$ primary [::]{} - \^2 T - [::]{} is the same as the primary completion of the spin $4$ field in [Eq. (\[n3q4\])]{}. However, the composite field \[p65\] P\_[6,5]{} =[::]{} +[::]{} - [::]{} + [::]{} + [::]{} + \^4 T - \^2 Q\_4 is a new spin $6$ primary field. It can be easily verified, using Mathematica, that the operators $P_{6,1}, P_{6,2}, P_{6,3}, P_{6,4}$ and $P_{6,5}$ are linearly independent. Note that we can construct another spin $6$ singlet of the form ${:\mathrel{\mkern2mu P_3^a P_3^a \mkern2mu}:}$, but this operator is a linear combination of $P_{6,1}, P_{6,2}$ and $P_{6,5}$. The primary field in [Eq. (\[p65\])]{} is a composite field as far as the CFT is concerned, but is an independent $SU(N)$ invariant. 
It will, therefore, be counted by the vacuum character of the coset CFT. The spin $6$ fields that contribute to the vacuum character are thus: $P_{6,1}, P_{6,2}, P_{6,3}$ and $P_{6,5}$. Note that the field $P_{6,4}$ is a “double-trace” $SU(N)$ invariant and its contribution to the vacuum character has already been accounted for by $P_3$. The number of independent currents at spin $6$ is thus four as predicted by the vacuum character. We see that unlike for the $N=2$ case, the $N=3$ stringy coset also needs composite currents to generate the full set of currents. Of course, the currents $T, P_4$, etc., are also composites of the primary field $J^a$, but we use the word composite here to mean operators that are composites of primary fields that are themselves composite in $J^a$. It is not difficult to estimate the number of such composite currents, although it is hard to discern which of them are non-redundant [@Dittner:1972]. For $SU(3)$, invariant tensors that can be formed out of the composite operators, denoted here by $A^a,B^a,C^a$, take the form $$d_{ab}\, A^a B^b\,, \qquad d_{abc}\, A^a B^b C^c\,.$$ The growth rate for such composite operators (with increasing spin) exceeds the growth rate for generators predicted by the vacuum character in [Eq. (\[n3char\])]{}. However, it has to be checked on a case-by-case basis which of these fields are independent and contribute to the vacuum character. All the generators that we have constructed for infinite $k$ are also present at finite $k$ (for $k\geq3$). It is worthwhile to write down the exact form of some of these generators for finite $k$. At finite $k$, the spin $4$ primary field takes the form P\_4(k)= Q\_4 - T + [::]{}. 
The bilinear spin $6$ primary current takes the form $$\begin{aligned} P_{6,1}(k)= Q_{6,1} &- \tfrac{21 \a ^2(-129+106k +55k^2)}{5 \b\g\delta} {:\mathrel{\mkern2mu \partial^2T T \mkern2mu}:} - \tfrac{42 \a^2(1+k)(6+k)}{ \b\g\delta} {:\mathrel{\mkern2mu \partial T \partial T \mkern2mu}:} + \tfrac{21 \a }{5\b} {:\mathrel{\mkern2mu Q_4 T \mkern2mu}:} + \nn \\ &+ \tfrac{2 \a^2 (-219+ 1126 k +790 k^2) }{5 \b\g\delta}\partial^4 T - \tfrac{49 \a }{10\b} \partial^2 Q_4 - \tfrac{42 \a^3 (3-7k)}{\b\g\delta}{:\mathrel{\mkern2mu T T T \mkern2mu}:}\,\end{aligned}$$ where $\a= 3+k, \b= 9+4k, \g= 1+5k, \delta= 51+31 k$. In the above equations, the stress-energy tensor is defined as in [Eq. (\[Tdef\])]{} and the currents $J^a$ are rescaled as $\sqrt{k}J^a$. The form of the quasi-primary operators is thus independent of $k$ for the bilinear currents. This is not true for the generator $P_{6,3}$. This current is not primary at finite $k$ and has to be modified in the following way to stay primary $$\begin{aligned} P_{6,3}(k)=& f_{abc} {:\mathrel{\mkern2mu \partial^2 J^a \partial J^b J^c \mkern2mu}:} -\tfrac{21 \a^2(1353+6518k +2225k^2)}{5 \b\g\delta} {:\mathrel{\mkern2mu \partial^2T T \mkern2mu}:} - \tfrac{42 \a^2 (159+160k+61k^2)}{5 \b\g\delta} {:\mathrel{\mkern2mu \partial T \partial T \mkern2mu}:} + \nn \\& + \tfrac{23 \a }{5\b} {:\mathrel{\mkern2mu Q_4 T \mkern2mu}:} + \tfrac{2 \a (-2529 +8379k +9668 k^2 +2150 k^3) }{5 \b\g\delta}\partial^4 T + \tfrac{474 +157k}{30\b} \partial^2 Q_4 - \tfrac{2 \a^3 (3+193k)}{\b\g\delta}{:\mathrel{\mkern2mu T T T \mkern2mu}:}\,.\end{aligned}$$ Note that as $k \rightarrow \infty$ the fields $P_4(k)$ and $P_{6,1}(k)$ reduce to their corresponding counterparts in [Eq. (\[An3p4\])]{} and [Eq. (\[An3p61\])]{}, while $P_{6,3}(k)$ becomes identical to [Eq. (\[as6\])]{}. ### Large $N$ {#sec:largeN} ![\[rectfig\] The operators in the top-most row correspond to the $SU(N)$ Casimir invariants. 
A subset of generators for the stringy coset algebra is generated by acting by derivatives on the constituent terms of each Casimir operator. The cross-hatched area denotes the operators that are null and not part of the algebra for a given level $k$ and $N$. For $k=1$, only the top row of operators is not null and corresponds to the $W_N$ algebra. Increasing the level $k$ reduces the cross-hatched area but does not eliminate it completely even as $k$ tends to $\infty$. Increasing $N$ corresponds to adding more columns and also reducing the cross-hatched area. As we take $N$ to $\infty$, with $k$ already taken to $\infty$, null states disappear from the first column and the algebra becomes $\W_\infty^e[1]$. []{data-label="figrect"}](rectangle.pdf) At large $N$, the currents that follow from the symmetric $d$-tensor invariants are given by \[genericC\] $$d_{abc\cdots}\, \partial^\mu J^a\, \partial^\nu J^b\, \partial^\gamma J^c \cdots\,.$$ For $SU(N)$ there is a single Casimir invariant at orders $2,\cdots,N$. Hence for infinite $N$, the generating function for the generalized Casimir currents is \[SCF\] $$\sum_{p=2}^{\infty} \frac{q^{p}}{\prod_{n=1}^{p}(1-q^{n})} = \prod_{n=1}^{\infty} \frac{1}{1-q^{n}} - \frac{1}{1-q}\,.$$ Since the tensors $d_{abc\cdots}$ are totally symmetric, the generating function for an order $p$ current of the form in [Eq. (\[genericC\])]{} is given by the number of ways to divide an integer into $p$ parts. Thus, classically, the number of independent currents grows at least as fast as $\exp(\sqrt{n})$. If one assumes that at large $N$, there are no null fields in this set of currents, then the growth of this set of currents matches that of the higher spin square algebra. Clearly, as we saw from the above examples of $SU(2)$ and $SU(3)$, these are not the only possible currents. For the group $SU(N)$, there are skew-symmetric invariant tensors $f_{abc\cdots}$ of order $3,5,\cdots,2N-1$ which lead to antisymmetric currents. In the literature, these skew-symmetric tensors are also known as $\Omega$ tensors [@deAzcarraga:1997; @deAzcarraga:2000]. 
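The resummation of the column counts into the full partition generating function is governed by Euler's identity $\sum_{p\geq 0} q^{p}\big/\prod_{n=1}^{p}(1-q^{n}) = \prod_{n\geq 1}(1-q^{n})^{-1}$. The truncated-power-series sketch below (ours, not the paper's code) checks it numerically:

```python
# Numerical check (truncated power series) of Euler's identity
#   sum_{p>=0} q^p / prod_{n=1..p}(1-q^n) = prod_{n>=1} 1/(1-q^n),
# which underlies the counting of generalized Casimir currents by partitions
# into exactly p parts. Illustrative sketch only.

ORDER = 20  # truncate all series at q^ORDER

def mul(a, b):
    c = [0] * (ORDER + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= ORDER:
                c[i + j] += ai * bj
    return c

def inv_one_minus_qn(n):
    # series of 1/(1-q^n) = 1 + q^n + q^{2n} + ...
    s = [0] * (ORDER + 1)
    for m in range(0, ORDER + 1, n):
        s[m] = 1
    return s

# RHS: prod_{n>=1} 1/(1-q^n), the generating function of all partitions
rhs = [1] + [0] * ORDER
for n in range(1, ORDER + 1):
    rhs = mul(rhs, inv_one_minus_qn(n))

# LHS: sum over p of q^p * prod_{n=1..p} 1/(1-q^n)
lhs = [0] * (ORDER + 1)
lhs[0] += 1  # p = 0 term
term = [1] + [0] * ORDER
for p in range(1, ORDER + 1):
    term = mul(term, inv_one_minus_qn(p))
    qp = [0] * (ORDER + 1)
    qp[p] = 1
    lhs = [x + y for x, y in zip(lhs, mul(qp, term))]

print(lhs == rhs)  # True
```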
The total number of such states as $N \rightarrow \infty$, assuming no null states, is given by \[ASCF\] $$\frac{1}{2}\left\{ \prod_{n=1}^{\infty} (1+q^{n}) - \prod_{n=1}^{\infty} (1-q^{n}) \right\} - \frac{q}{1-q}\,,$$ which counts partitions into an odd number (at least three) of distinct parts. As we saw for finite $N$, composite currents are also present for the coset CFT. In general, it is hard to count these currents. Invariants composed of primary fields that transform non-trivially under $SU(N)$ do not always lead to new or non-null currents. This happens because of the identities that exist between $SU(N)$ tensors (see, for example, Appendix B of [@Bais:1987a]). Relation with $\W_\infty^e[1]$ and the higher spin square {#sec:Relation} --------------------------------------------------------- We now focus on the currents bilinear in the $J^i$’s. As we saw for finite $N$, the independent bilinear currents at any value of the level $k$ have even spin only with multiplicity one at each spin. In the quasi-primary basis, these currents do not change with increasing $k$ or $N$, since only the overall normalization changes. (In the primary basis, this is no longer true). In the large level limit and at finite $N$, the coset theory reduces to that of $N^2-1$ free bosons, hence the bilinear currents can be identified with a realization of the $\W_\infty^e[1]$ algebra at central charge $N^2-1$. As is well known, such a realization is a finite truncation of the $\W_\infty^e[1]$ algebra [@Blumenhagen:1994]. As we take $N\rightarrow \infty$, we recover the full $\W_\infty^e[1]$ algebra. It is natural to ask whether the higher-order generators of the stringy coset algebra as arranged in Fig. (\[figrect\]) can be identified with representations of the $\W_\infty^e[1]$ algebra. The OPE of any $\W_\infty^e[1]$ operator (these are the bilinear operators) with a generator of order $p$ gives rise to operators of the same order. 
For example, the OPE of any bilinear term of the form ${:\mathrel{\mkern2mu \partial^\mu J^a \partial^\nu J^a \mkern2mu}:}$ with a generic trilinear term is $$\begin{aligned} &{:\mathrel{\mkern2mu \partial^\mu J^a \partial^\nu J^a (z) \mkern2mu}:}{:\mathrel{\mkern2mu d_{bcd} \partial^\a J^b \partial^\b J^c \partial^\gamma J^d(w) \mkern2mu}:} \nn\\ \sim\, &\frac{\delta^{ab} \delta^{ac} d_{bcd} \partial^\gamma J^d(w)}{(z-w)^{\mu+\nu+\a+\b+4}}~+~\frac{\delta^a_{b} d_{bcd} {:\mathrel{\mkern2mu \partial^\nu J^a(z) \partial^\b J^c(w) \partial^\gamma J^d(w) \mkern2mu}:}}{(z-w)^{\mu+\a+2}} \nn\\ \sim\, &\frac{ d_{bcd} {:\mathrel{\mkern2mu \partial^\nu J^b(w) \partial^\b J^c(w) \partial^\gamma J^d(w) \mkern2mu}:}}{(z-w)^{\mu+\a+2}} + \frac{ d_{bcd} {:\mathrel{\mkern2mu \partial^{\nu+1} J^b(w) \partial^\b J^c(w) \partial^\gamma J^d(w) \mkern2mu}:}}{(z-w)^{\mu+\a+1}} +\cdots\end{aligned}$$ We have written the OPE schematically omitting numerical coefficients in all terms and writing only a [*single*]{} representative term for different possible ways of contracting. The first term in the second line vanishes because the tensor $d_{bcd}$ is traceless. Thus we are only left with trilinear operators in the OPE. The same logic applies to the OPE of any $p$-th order operator with a bilinear operator. The operators in each column of Fig. (\[figrect\]) thus fall into a representation of $\W_\infty^e[1]$. Let us identify the $\W_\infty^e[1]$ representation that corresponds to each column of operators in Fig. (\[figrect\]). We use the standard coset notation for representations of the $\W_\infty^e[1]$ algebra. Conventional $\W$-algebras that we deal with in this paper are the symmetry algebras of cosets of the form $\frac{\mathfrak{g}_{k} \otimes \mathfrak{g}_{1}}{\mathfrak{g}_{k+1}}$. 
The notation $(\Lambda_+; \Lambda_-)$ is used to denote a representation of a $\W$-algebra where $\Lambda_+$ is a representation of $\mathfrak{g}_{k}$ and $\Lambda_-$ is a representation of $\mathfrak{g}_{k+1}$. Then, an order-$p$ column corresponds to the representation ${([0^{p-1},1,0,\ldots,0];0)}$ of $\W_\infty^e[1]$. The wedge character of a representation $R$ of $\W_\infty^e[\mu]$ is given by $$q^{\frac{1}{2} B(R^T)}\, \chi^{U(\infty)}_{R^T}\,,$$ where $R^T$ is the transpose of the representation $R$, $B(R^T)$ is the number of boxes in the Young tableaux of $R^T$ and $\chi^{U(\infty)}$ is the associated Schur function (See [@Gaberdiel:2011; @Gaberdiel:2015] for details). Thus the wedge character of the representation ${([0^{p-1},1,0,\ldots,0];0)}$ is given by \[bwedge1\] $$b^{({\rm wedge})\, [\lambda=1]}_{([0^{p-1},1,0,\ldots,0];0)} = \frac{q^{p}}{\prod_{n=1}^{p}(1-q^{n})}\,,$$ which is the same as the generating function of a column of operators with order $p$. Note that these are the same representations that constitute the higher spin square, the algebra of the large $N$ symmetric product orbifold. The operators of the coset algebra look very different from corresponding operators of the higher spin square algebra. Nevertheless, the number of operators (and hence the character of the representation) is the same in a column of the coset algebra whose highest weight state is an operator of the form $d_{ab\cdots}\,J^a J^b \cdots$ of order $p$ and in a column of the higher spin square (See Fig. \[squafig\]) whose highest weight state is of the form $J^a J^a\cdots$ of the same order $p$. Identical characters imply identical representations, since a representation of an algebra has a unique character. Thus, the subset of generators of the coset algebra that are present in Fig. (\[figrect\]) must be isomorphic to the higher spin square. Generators of the higher spin square can also be organized in terms of a $\W_{\infty}[0]$ algebra which is called the horizontal sub-algebra in Ref. [@Gaberdiel:2015]. 
In the coset case, this means that there should exist a change of basis for the $SU(N)$ currents, such that in the new basis the generators in the top row of Fig. (\[figrect\]) close at infinite $k$. It is possible that such a basis exists, since the coset theory also has a free fermion formulation at $c=N^2-1$. As is well known, the $\W_{\infty}[0]$ algebra can be expressed in terms of free fermions [@Pope:1991]. Next we look at operators of the generic form ${:\mathrel{\mkern2mu f_{bcd\cdots} \partial^\a J^b \partial^\b J^c \partial^\gamma J^d(w)\cdots \mkern2mu}:}$. The generating function for this set of operators is given by [Eq. (\[kdistinct\])]{}. Interpreting this as a wedge character of $\W_\infty^e[1]$, we find $$\frac{q^{p(p+1)/2}}{\prod_{n=1}^{p}(1-q^{n})} = b^{({\rm wedge})\, [\lambda=1]}_{([p,0,\ldots,0];0)}\,,$$ so that it corresponds to the representation ${([p,0,\ldots,0];0)}$. All the “elementary operators” of the stringy coset algebra can thus be organized into representations of $\W_\infty^e[1]$. Discussion ========== In this paper, we have examined the coset in [Eq. (\[gencoset\])]{} in the free field limit, which is equivalent to its zero coupling limit. On the basis that the central charge of this coset scales as $N^2$, it has generally been expected that this coset is dual to a string theory in the bulk. We computed the vacuum character for the coset at finite $N$ and found that the currents of the symmetry algebra indeed exhibit an exponential growth with the spin. The $\W$-algebra of the coset in [Eq. (\[gencoset\])]{} in the free field limit is the same as the $\W$-algebra of the coset in [Eq. (\[coset2\])]{} in the large $k$ limit. We have written down the explicit form of the low-dimension currents for the second coset model. As we saw in the text, the currents of this coset are simply $SU(N)$ invariants composed out of differential operators of the form $\partial^\mu J^a$, where the $J^a$ are $SU(N)_k$ currents. 
As such, this coset theory is an exact analog in two dimensions of $SU(N)$ Yang-Mills theories in higher dimensions, with the addition of Virasoro symmetry. The zero coupling limit of supersymmetric Yang-Mills theory in four dimensions is expected to be dual to tensionless string theory on the $AdS_5$ background [@Sundborg:2000]. It is, therefore, of interest to ask what the bulk dual of our coset model in the free field limit is. We know, from considerations of the D1-D5 system in Type IIB string theory, that the theory expected to be dual to string theory on the $AdS_3$ background is a symmetric product orbifold. Indeed, there has been much recent work in this direction clarifying aspects of this duality between tensionless strings and the free symmetric product orbifold theory [@Eberhardt:2018]. It is not clear where the coset theories we have considered in this paper fit into this picture. A more detailed understanding of the moduli space of $AdS_3$ string theories would be useful [@OhlssonSax:2018]. Further hints may come from integrability [@Sax:2014]. How does the more general $\W$-algebra of the stringy coset models relate to the $\W_\infty[\mu]$ symmetry of the vector coset models? We found that in the free field limit, the algebra $\W^e_\infty[1]$ is a sub-algebra of the full coset algebra. Further, there is a distinguished set of generators of the coset algebra that can be arranged into representations of $\W^e_\infty[1]$. Operators that are directly derived from the symmetric tensor invariants of $SU(N)$ can be arranged in the ${([0^{p-1},1,0,\ldots,0];0)}$ representations of $\W^e_\infty[1]$. Operators that are related to antisymmetric invariants of $SU(N)$ can be arranged in the ${([p,0,\ldots,0];0)}$ representations of $\W^e_\infty[1]$. 
Since the first set of operators can be organized in the same set of representations of $\W^e_\infty[1]$ as the operators of the higher spin square, we propose that this set of operators is identical to the higher spin square. The higher spin square also has a $\W_{\infty}[0]$ horizontal algebra, in addition to the vertical $\W^e_\infty[1]$ algebra. We have not explicitly identified this horizontal algebra in the coset case. It is important to do so, in order to cement the identification of the coset sub-algebra with the higher spin square. In addition to the “elementary” generators, the coset theory also has a large number of composite operators at general values of $N$. In this paper, we have not attempted to classify them in representations of $\W^e_\infty[1]$. It is obviously of interest to understand the nature of these generators to comprehend the full symmetry algebra of the stringy coset theory. The coset in [Eq. (\[gencoset\])]{} is $T$-dual to a coset which is holographically dual to Vasiliev theory with a matrix extension. It would be interesting to explore the exact relation between this stringy coset and the matrix cosets [@Creutzig:2018]. In this paper, we have worked in the limit of zero coupling. However, in general the coset algebra depends on the parameter $\lambda$. It would be interesting to find how the algebra changes once the coupling is switched on. At certain values of non-zero $\lambda$, for example at $\lambda=1$, the coset theory has a formulation in terms of free bosons/fermions. This is also the point where the symmetry algebra of the coset enhances to an $\N=1$ supersymmetric algebra. It would be nice to carry out an analysis similar to the one in this paper for the $\lambda=1$ theory. More generally, Wolf space coset generalizations of [Eq. (\[gencoset\])]{} can be studied in a similar manner. 
Acknowledgements {#acknowledgements .unnumbered} ================ We thank Rajesh Gopakumar, Yang-Hui He, Bogdan Stefanski and Alessandro Torrielli for discussions. We thank Biswajit Ransingh for running a computer program for us at HRI, Allahabad. The $SU(N)$ tensor invariants ============================= The tensors $d_{abcd\cdots}$ are totally symmetric tensors which we have also chosen to be traceless. Suitable traceless symmetric tensors are defined in [@deAzcarraga:1997], where they are referred to as $t$-tensors. We use the notation $\d$ for the standard symmetric invariant tensors of $SU(N)$. They are defined recursively [@Sudbery:1990], starting from the standard third-order symmetric tensor $\d_{ijk}$. One can construct the tensor \[recursion\] $$\d^{(r+1)}_{i_1 \cdots i_{r+1}} = \d^{(r)}_{i_1 \cdots i_{r-1}\, j}\; \d^{(3)}_{j\, i_r\, i_{r+1}}\,, \qquad r=3,4,\ldots.$$ For $r \geq 3$, the above construction does not define totally symmetric tensors. The $\d$-family of symmetric tensors is obtained by symmetrising over all free indices in (\[recursion\]). The $SU(n)$ $d$-tensors are related to members of the $\d$-family in the following way $$\begin{aligned} \label{tdefs} d{}_{ij} & \sim \delta_{ij} \,, \nn \\ d{}_{ijk} & \sim \d_{ijk} \,, \nn \\ d{}_{ijkl} & \sim n(n^2+1) \d^{(4)}{}_{(ijkl)} -2(n^2-4) \delta_{(ij}\delta_{kl)} \,, \nn \\ d{}_{ijklm} & \sim n(n^2+5) \d^{(5)}{}_{(ijklm)}-2(3n^2-20) \d_{(ijk}\delta_{lm)} \,, \cdots\end{aligned}$$ up to numerical coefficients dependent on $n$. The $d$-tensors vanish when their order is larger than $n$. The $d$-tensors are totally symmetric and are orthogonal to all other $d$-tensors of different order. For instance, for the fourth-order tensor this means $$d_{ijkl}\, \delta_{ij} = 0\,, \qquad d_{ijkl}\, d_{ijk} = 0\,.$$ Thus, the maximal contraction of the indices of two $d$-tensors of [*different*]{} order is zero. 
Using trace formulas for $\d$-tensors, such as $$\d^{(4)}{}_{(ijkl)}\; \d_{ijm} \sim \d_{klm}\,,$$ we can find the contraction of two indices for the third-order and fourth-order $d$-tensor $$d_{ijkl}\; d_{ijm} \sim d_{klm}\,.$$ Combinations of $d$-tensors provide a basis for the vector space of symmetric invariant polynomials of $SU(n)$. For $N=3$, the tensor $d_{abc}$ takes the following values $$\begin{array}{llll} d_{118}=\tfrac{1}{\sqrt{3}}\,, & d_{228}=\tfrac{1}{\sqrt{3}}\,, & d_{338}=\tfrac{1}{\sqrt{3}}\,, & d_{888}=-\tfrac{1}{\sqrt{3}}\,,\\ d_{448}=-\tfrac{1}{2\sqrt{3}}\,, & d_{558}=-\tfrac{1}{2\sqrt{3}}\,, & d_{668}=-\tfrac{1}{2\sqrt{3}}\,, & d_{778}=-\tfrac{1}{2\sqrt{3}}\,,\\ d_{146}=\tfrac{1}{2}\,, & d_{157}=\tfrac{1}{2}\,, & d_{247}=-\tfrac{1}{2}\,, & d_{256}=\tfrac{1}{2}\,,\\ d_{344}=\tfrac{1}{2}\,, & d_{355}=\tfrac{1}{2}\,, & d_{366}=-\tfrac{1}{2}\,, & d_{377}=-\tfrac{1}{2}\,. \end{array}$$ The anti-symmetric tensor $f_{abc}$ takes the following values $$\begin{array}{lll} f_{123} = 1\,, & f_{147} = \tfrac{1}{2}\,, & f_{156} = -\tfrac{1}{2}\,,\\ f_{246} = \tfrac{1}{2}\,, & f_{257} = \tfrac{1}{2}\,, & f_{345} = \tfrac{1}{2}\,,\\ f_{367} = -\tfrac{1}{2}\,, & f_{458} = \tfrac{\sqrt{3}}{2}\,, & f_{678} = \tfrac{\sqrt{3}}{2}\,. \end{array}$$ Primary fields ============== Here, we write down the full primary operators corresponding to the quasi-primary operators in [Sec. \[sec:su2\]]{} and [Sec. \[sec:su3\]]{}. Note that all fields are defined only up to an overall normalization factor. $N=2$ primaries {#n2-primaries .unnumbered} --------------- The primary field of [Eq. (\[n2p61\])]{} is $$\begin{aligned} P_{6,1}= \big({:\mathrel{\mkern2mu \partial^3 J^a \partial J^a \mkern2mu}:} &-{:\mathrel{\mkern2mu \partial^2 J^a \partial^2 J^a \mkern2mu}:} - \tfrac{1}{10}{:\mathrel{\mkern2mu \partial^4 J^a J^a \mkern2mu}:}\big)+ \tfrac{15288}{16465} {:\mathrel{\mkern2mu \partial^2 T T \mkern2mu}:} + \tfrac{5838}{16465} {:\mathrel{\mkern2mu \partial T \partial T \mkern2mu}:}+ \nn \\ & + \tfrac{ 653}{16465} \partial^4 T- \tfrac{56}{135} \partial^2 Q_4 + \tfrac{112}{45} {:\mathrel{\mkern2mu Q_4 T \mkern2mu}:} - \tfrac{22176}{16465} {:\mathrel{\mkern2mu T T T \mkern2mu}:}\,. \end{aligned}$$ $N=3$ primaries {#n3-primaries .unnumbered} --------------- The primary completion of the quasi-primary field $Q_4$ is \[An3p4\] P\_4 = Q\_4 - \^2 T+ [::]{}. 
The spin $5$ primary field is \[An3p5\] P\_5 = Q\_5 + \^2 Q\_3 - [::]{} . The bilinear spin $6$ primary is \[An3p61\] P\_[6,1]{} = Q\_[6,1]{} - [::]{} - [::]{} + [::]{} + \^4 T - \^2 Q\_4 + [::]{} . The trilinear spin $6$ primary is \[An3p62\] P\_[6,2]{} = Q\_[6,2]{} - \^3 P\_3 + [::]{} - [::]{} . Algebra of symmetric product orbifold CFT ========================================= The most straightforward way to find the spin and multiplicity of the generators of the symmetry algebra for the symmetric product orbifold theory is by looking at its vacuum character. Let us denote the chiral vacuum character of a seed theory by \[seed\] $$\chi_1(q) = \sum_{m=0}^{\infty} a_m\, q^m\,.$$ Then the vacuum character of the $N$’th symmetric product orbifold can be read off from the following plethystic exponential [@elliptic] $$\chi(\nu,q) = \prod_{m=0}^{\infty} \frac{1}{(1-\nu\, q^{m})^{a_{m}}} = \exp\bigg( \sum_{k=1}^{\infty} \frac{1}{k}\, \chi_1(q^{k})\, \nu^{k} \bigg)\,.$$ Expanding the exponential on the RHS as a series in powers of $\nu$, we get: $$\begin{aligned} \label{symmProdN} \chi(\nu,q) &= \sum_{N=0}^{\infty} \chi_N(q) \nu^N \nn \\ &= 1 + \chi_1(q) \, \nu + \frac{{\chi_1(q)}^2 + \chi_1(q^2)} {2} \, \nu^2 + \frac{\chi_1(q)^3 + 3 \chi_1(q) \chi_1(q^2) + 2 \chi_1(q^3)}{6} \, \nu^3 + \cdots\,.\end{aligned}$$ We can find the vacuum character for the $N$’th symmetric orbifold CFT by reading off the coefficients of $\nu^N$. In our case, the seed theory is the single boson theory whose chiral character is given by \[seedboson\] $$\chi_1(q) = \sum_{m=0}^{\infty} a_m\, q^m = \prod_{n=1}^{\infty} \frac{1}{1-q^{n}}\,.$$ Using this expression for $\chi_1$, we can compute the characters and the corresponding symmetry algebra of the symmetric product orbifold CFT using [Eq. (\[symmProdN\])]{}. These chiral characters agree with standard results for $S_N$ orbifolds [@Bantay:1999]. From these characters, we can compute the spectrum of the algebra for small values of $N$: $$\begin{aligned} N=2: \qquad& 1,2,4\,.\nn\\ N=3: \qquad&1, 2, 3, 4, 5, 6^2 \,.\nn\\ N=4: \qquad&1,2, 3, 4^2, 5, 6^3, 7^2, 8^3, 9 \,. 
\nn\\ N=5: \qquad& 1, 2, 3, 4^2, 5^2, 6^3, 7^3, 8^5,9^4,10^5,11\,.\nn \\ N=6: \qquad &1, 2, 3, 4^2, 5^2, 6^4, 7^3, 8^6, 9^6, 10^8, 11^7, 12^8, 13\,.\nn\end{aligned}$$ As for the coset case, there can be more generators present. Despite the initial exponential growth in the number of operators with spin, at finite $N$, null states start appearing in the spectrum at some finite value of the spin and thus the algebra truncates. This is reflected in the asymptotic growth of the coefficients of the vacuum character, which behave, up to a power-law prefactor in $n$, as $e^{\pi \sqrt{\tfrac{2}{3} n N}}$, exhibiting Cardy growth as $n\rightarrow \infty$. As $N\rightarrow \infty$, the vacuum character is given by $$\lim_{\nu \rightarrow 1}\, (1-\nu)\, \chi(\nu,q) = \prod_{m=1}^{\infty} \frac{1}{(1-q^{m})^{a_{m}}}\,,$$ which is again the plethystic exponential of [Eq. (\[seedboson\])]{}. This can be rewritten as \_[n = 1]{}\^\_\^ \_[n = m]{}\^. The generators of the infinite $N$ algebra are thus enumerated by the generating function \[bosonpart\] $$\prod_{n=2}^{\infty} \frac{1}{1-q^{n}}\,,$$ along with a spin one field. ![\[squafig\] The higher spin square is generated by acting by derivatives on the operators in the top row. The second column corresponds to the $\W^e_\infty[1]$ algebra while subsequent columns correspond to its representations.](square.pdf) We now write down the exact form of the currents. In the large $N$ limit, the “single-particle” generators for the symmetric product orbifold are symmetrized products of the form \[spgen\] $$\sum_{i=1}^{N}\, (\partial^{m_1} \phi_i) \cdots (\partial^{m_p} \phi_i)\,, \qquad m_1,\ldots, m_p \geq 1\,.$$ Because of the symmetrization over $N$, this set of generators is in one-to-one correspondence with the chiral sector of a single boson. Removing the terms that are total derivatives, and in the $N\rightarrow \infty$ limit, they also constitute a set of linearly independent operators. Out of these generators, the subset of generators of [Eq. 
(\[spgen\])]{} of the form \[bgen\] \_[i=1]{}\^[N]{} (\^[m\_1]{} \_i) (\^[m\_2]{} \_i)  , m\_1,m\_21  , define quasiprimary generators of spin $s=m_1+m_2$, in specific linear combinations and when $s$ is even. In fact, only one independent current can be constructed at each even spin, meaning that it is not a linear combination of derivatives of lower-spin currents, and there are no independent odd-spin currents. This set of independent currents generate the even spin $\W$-algebra ${\cal W}^{e}_\infty[1]$. The generators in [Eq. (\[bgen\])]{} are of order two, i.e., they are bilinear in the $\phi$s. The currents in [Eq. (\[spgen\])]{} are of arbitrary order $p\geq 1$. However, it turns out that the currents of a fixed order $p$, suitably corrected by lower-order terms, form a representation of the wedge algebra of ${\cal W}^{e}_\infty[1]$. This is captured in Fig. (\[squafig\]) where currents of a given order correspond to columns. The operators of the symmetric product orbifold algebra are, therefore, organized into representations of ${\cal W}^{e}_\infty[1]$. The statement that operators of the higher spin square can be organized in representations of ${\cal W}^{e}_\infty[1]$ is captured by the following identity: \[VIfull\] \_[n=1]{}\^ = 1+\_[p=1]{}\^ b\^[([wedge]{}) \[=1\]]{}\_[(\[0\^[p-1]{},1,0,…,0\];0)]{} , where $b^{({\rm wedge}) [\lambda=1]}_{([0^{p-1},1,0,\ldots,0];0)} $ denotes the wedge character of the $([0^{p-1},1,0,\ldots,0];0)$ representation of ${\cal W}^{e}_\infty[1]$. The $b^{({\rm wedge}) [\lambda=1]}_{([0^{p-1},1,0,\ldots,0];0)} $ character is defined in [Eq. (\[bwedge1\])]{}. The LHS of [Eq. (\[VIfull\])]{} is the normalized partition function for a single boson. Combinatorially, the LHS is just the generating function for the number of ways one can partition an integer varying from $1$ to $\infty$ into an arbitrary number of parts. Each term in the sum on the RHS is number of ways one can partition an integer into exactly $p$ parts. 
In terms of the operators in [Eq. (\[spgen\])]{}, this is the spin $s$ of the operator, varying from $p$ to $\infty$, being partitioned into $m_1,m_2,\ldots,m_p$ at fixed $p$. An alternate way to organize the operators of the higher spin square is in terms of representations of ${\cal W}_{1+\infty}[0]$. [99]{} M. R. Gaberdiel and R. Gopakumar, J. Phys. A [**46**]{}, 214002 (2013) doi:10.1088/1751-8113/46/21/214002 \[arXiv:1207.6697 \[hep-th\]\].\ M. R. Gaberdiel and R. Gopakumar, Phys. Rev. D [**83**]{}, 066007 (2011) doi:10.1103/PhysRevD.83.066007 \[arXiv:1011.2986 \[hep-th\]\].\ M. R. Gaberdiel, R. Gopakumar and A. Saha, JHEP [**1102**]{}, 004 (2011) doi:10.1007/JHEP02(2011)004 \[arXiv:1009.6087 \[hep-th\]\]. E. S. Fradkin and M. A. Vasiliev, Annals Phys.  [**177**]{}, 63 (1987). doi:10.1016/S0003-4916(87)80025-8\ M. P. Blencowe, Class. Quant. Grav.  [**6**]{}, 443 (1989). doi:10.1088/0264-9381/6/4/005\ S. F. Prokushkin and M. A. Vasiliev, Nucl. Phys. B [**545**]{}, 385 (1999) doi:10.1016/S0550-3213(98)00839-6 \[hep-th/9806236\]. M. Henneaux and S. J. Rey, JHEP [**1012**]{}, 007 (2010) doi:10.1007/JHEP12(2010)007 \[arXiv:1008.4579 \[hep-th\]\]. A. Campoleoni, S. Fredenhagen, S. Pfenninger and S. Theisen, JHEP [**1011**]{}, 007 (2010) doi:10.1007/JHEP11(2010)007 \[arXiv:1008.4744 \[hep-th\]\].\ A. Campoleoni, S. Fredenhagen and S. Pfenninger, JHEP [**1109**]{}, 113 (2011) doi:10.1007/JHEP09(2011)113 \[arXiv:1107.0290 \[hep-th\]\]. T. Creutzig, Y. Hikida and P. B. Ronne, JHEP [**1311**]{}, 038 (2013) doi:10.1007/JHEP11(2013)038 \[arXiv:1306.0466 \[hep-th\]\].\ C. Candu and C. Vollenweider, JHEP [**1404**]{}, 145 (2014) doi:10.1007/JHEP04(2014)145 \[arXiv:1312.5240 \[hep-th\]\]. A. B. Zamolodchikov, Theor. Math. Phys.  [**65**]{}, 1205 (1985) \[Teor. Mat. Fiz.  [**65**]{}, 347 (1985)\]. doi:10.1007/BF01036128 I. Bakas and E. Kiritsis, Nucl. Phys. B [**343**]{}, 185 (1990) Erratum: \[Nucl. Phys. B [**350**]{}, 512 (1991)\]. 
doi:10.1016/0550-3213(90)90600-I, 10.1016/0550-3213(91)90269-4 M. R. Gaberdiel and R. Gopakumar, JHEP [**1207**]{}, 127 (2012) doi:10.1007/JHEP07(2012)127 \[arXiv:1205.2472 \[hep-th\]\]. A. R. Linshaw, arXiv:1710.02275 \[math.RT\]. P. Bouwknegt and K. Schoutens, Phys. Rept.  [**223**]{}, 183 (1993) doi:10.1016/0370-1573(93)90111-P \[hep-th/9210010\]. F. A. Bais, P. Bouwknegt, M. Surridge and K. Schoutens, Nucl. Phys. B [**304**]{}, 348 (1988). doi:10.1016/0550-3213(88)90631-1 F. A. Bais, P. Bouwknegt, M. Surridge and K. Schoutens, Nucl. Phys. B [**304**]{}, 371 (1988). doi:10.1016/0550-3213(88)90632-3 J. de Boer, L. Feher and A. Honecker, Nucl. Phys. B [**420**]{}, 409 (1994) doi:10.1016/0550-3213(94)90388-3 \[hep-th/9312049\]. R. Gopakumar, A. Hashimoto, I. R. Klebanov, S. Sachdev and K. Schoutens, Phys. Rev. D [**86**]{}, 066003 (2012) doi:10.1103/PhysRevD.86.066003 \[arXiv:1206.4719 \[hep-th\]\]. C. h. Ahn, K. Schoutens and A. Sevrin, Int. J. Mod. Phys. A [**6**]{}, 3467 (1991). doi:10.1142/S0217751X91001684 C. Ahn, JHEP [**1304**]{}, 033 (2013) doi:10.1007/JHEP04(2013)033 \[arXiv:1211.2589 \[hep-th\]\].\ C. Ahn, Phys. Rev. D [**94**]{}, no. 12, 126014 (2016) doi:10.1103/PhysRevD.94.126014 \[arXiv:1604.00756 \[hep-th\]\].\ C. Ahn, JHEP [**1307**]{}, 141 (2013) doi:10.1007/JHEP07(2013)141 \[arXiv:1305.5892 \[hep-th\]\]. M. R. Gaberdiel and R. Gopakumar, J. Phys. A [**48**]{}, no. 18, 185402 (2015) doi:10.1088/1751-8113/48/18/185402 \[arXiv:1501.07236 \[hep-th\]\]. A. Nazarov, Comput. Phys. Commun.  [**183**]{}, 2480 (2012) doi:10.1016/j.cpc.2012.06.014 \[arXiv:1107.4681 \[math.RT\]\]. R. Blumenhagen, “W algebras in conformal quantum theory,” BONN-IR-91-06. P. Bowcock and P. Goddard, Nucl. Phys. B [**305**]{}, 685 (1988). doi:10.1016/0550-3213(88)90122-8 V. G. Kac and M. Wakimoto, Adv. Math.  [**70**]{}, 156 (1988). doi:10.1016/0001-8708(88)90055-2 M. R. Gaberdiel and C. 
Vollenweider, JHEP [**1108**]{}, 104 (2011) doi:10.1007/JHEP08(2011)104 \[arXiv:1106.2634 \[hep-th\]\].\ C. Candu, M. R. Gaberdiel, M. Kelm and C. Vollenweider, JHEP [**1301**]{}, 185 (2013) doi:10.1007/JHEP01(2013)185 \[arXiv:1211.3113 \[hep-th\]\].\ C. Candu and C. Vollenweider, JHEP [**1311**]{}, 032 (2013) doi:10.1007/JHEP11(2013)032 \[arXiv:1305.0013 \[hep-th\]\].\ D. Kumar and M. Sharma, Phys. Rev. D [**95**]{}, no. 6, 066015 (2017) doi:10.1103/PhysRevD.95.066015 \[arXiv:1606.00791 \[hep-th\]\]. K. Thielemans, Int. J. Mod. Phys. C [**2**]{}, 787 (1991). doi:10.1142/S0129183191001001 J. A. de Azcarraga, A. J. Macfarlane, A. J. Mountain and J. C. Perez Bueno, Nucl. Phys. B [**510**]{}, 657 (1998) doi:10.1016/S0550-3213(97)00609-3 \[physics/9706006\]. J. A. de Azcarraga and A. J. Macfarlane, Int. J. Mod. Phys. A [**16**]{}, 1377 (2001) doi:10.1142/S0217751X01003111, 10.1142/S0217751X0100311X \[math-ph/0006026\]. D. Gepner, Nucl. Phys. B [**290**]{}, 10 (1987). doi:10.1016/0550-3213(87)90176-3 T. Procházka, JHEP [**1509**]{}, 116 (2015) doi:10.1007/JHEP09(2015)116 \[arXiv:1411.7697 \[hep-th\]\]. P. Dittner, Commun. Math. Phys.  [**27**]{}, 44 (1972). doi:10.1007/BF01649658 R. Blumenhagen, W. Eholzer, A. Honecker, K. Hornfeck and R. Hubel, Int. J. Mod. Phys. A [**10**]{}, 2367 (1995) doi:10.1142/S0217751X95001157 \[hep-th/9406203\].\ R. Blumenhagen, W. Eholzer, A. Honecker, K. Hornfeck and R. Hubel, Phys. Lett. B [**332**]{}, 51 (1994) doi:10.1016/0370-2693(94)90857-5 \[hep-th/9404113\]. M. R. Gaberdiel, R. Gopakumar, T. Hartman and S. Raju, JHEP [**1108**]{}, 077 (2011) doi:10.1007/JHEP08(2011)077 \[arXiv:1106.1897 \[hep-th\]\]. C. N. Pope, hep-th/9112076. B. Sundborg, Nucl. Phys. Proc. Suppl.  [**102**]{}, 113 (2001) doi:10.1016/S0920-5632(01)01545-6 \[hep-th/0103247\].\ P. Haggi-Mani and B. Sundborg, JHEP [**0004**]{}, 031 (2000) doi:10.1088/1126-6708/2000/04/031 \[hep-th/0002189\]. L. Eberhardt, M. R. Gaberdiel and R. 
Gopakumar, arXiv:1812.01007 \[hep-th\].\ M. R. Gaberdiel and R. Gopakumar, JHEP [**1805**]{}, 085 (2018) doi:10.1007/JHEP05(2018)085 \[arXiv:1803.04423 \[hep-th\]\].\ G. Giribet, C. Hull, M. Kleban, M. Porrati and E. Rabinovici, JHEP [**1808**]{}, 204 (2018) doi:10.1007/JHEP08(2018)204 \[arXiv:1803.04420 \[hep-th\]\].\ M. R. Gaberdiel, R. Gopakumar and C. Hull, JHEP [**1707**]{}, 090 (2017) doi:10.1007/JHEP07(2017)090 \[arXiv:1704.08665 \[hep-th\]\]. O. Ohlsson Sax and B. Stefanski, JHEP [**1805**]{}, 101 (2018) doi:10.1007/JHEP05(2018)101 \[arXiv:1804.02023 \[hep-th\]\]. O. Ohlsson Sax, A. Sfondrini and B. Stefanski, JHEP [**1506**]{}, 103 (2015) doi:10.1007/JHEP06(2015)103 \[arXiv:1411.3676 \[hep-th\]\].\ M. Baggio, O. Ohlsson Sax, A. Sfondrini, B. Stefanski and A. Torrielli, JHEP [**1704**]{}, 091 (2017) doi:10.1007/JHEP04(2017)091 \[arXiv:1701.03501 \[hep-th\]\]. T. Creutzig and Y. Hikida, arXiv:1812.07149 \[hep-th\]. A. Sudbery, “Computer-friendly d-tensor identities for SU(n),” PRINT-90-0325 (YORK), Journal of Physics A: Mathematical and General [**23**]{} [15]{} [L705]{} (1990) R. Dijkgraaf, G. W. Moore, E. P. Verlinde and H. L. Verlinde, Commun. Math. Phys.  [**185**]{}, 197 (1997) doi:10.1007/s002200050087 \[hep-th/9608096\].\ S. Benvenuti, B. Feng, A. Hanany and Y. H. He, JHEP [**0711**]{}, 050 (2007) doi:10.1088/1126-6708/2007/11/050 \[hep-th/0608050\].\ B. Feng, A. Hanany and Y. H. He, JHEP [**0703**]{}, 090 (2007) doi:10.1088/1126-6708/2007/03/090 \[hep-th/0701063\].\ P. Bantay, Nucl. Phys. B [**633**]{}, 365 (2002) doi:10.1016/S0550-3213(02)00198-0 \[hep-th/9910079\].\ A. Jevicki and J. Yoon, J. Phys. A [**49**]{}, no. 20, 205401 (2016) doi:10.1088/1751-8113/49/20/205401 \[arXiv:1511.07878 \[hep-th\]\].
{ "pile_set_name": "ArXiv" }
--- abstract: 'Using archival data from the Chandra X-ray telescope, we have measured the spatial extent of the hot interstellar gas in a sample of 49 nearby interacting galaxy pairs, mergers, and merger remnants. For systems with SFR $>$ 1 M$_{\sun}$ yr$^{-1}$, the volume and mass of hot gas are strongly and linearly correlated with the star formation rate (SFR). This supports the idea that stellar/supernovae feedback dominates the production of hot gas in these galaxies. We compared the mass of X-ray-emitting hot gas M$_{\rm X}$(gas) with the molecular and atomic hydrogen interstellar gas masses in these galaxies (M$_{\rm H_2}$ and M$_{\rm HI}$, respectively), using published carbon monoxide and 21 cm HI measurements. Systems with higher SFRs have larger M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) ratios on average, in agreement with recent numerical simulations of star formation and feedback in merging galaxies. The M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) ratio also increases with dust temperature on average. The ratio M$_{\rm X}$(gas)/SFR is anti-correlated with the IRAS 60 $\mu$m to 100 $\mu$m flux ratio and with the Spitzer 3.6 $\mu$m to 24 $\mu$m color. These trends may be due to variations in the spatial density of young stars, the stellar age, the ratio of young to old stars, the initial mass function, and/or the efficiency of stellar feedback. Galaxies with low SFR ($<$1 M$_{\sun}$ yr$^{-1}$) and high K band luminosities may have an excess of hot gas relative to the relation for higher SFR galaxies, while galaxies with low K band luminosities (and therefore low stellar masses) may have a deficiency in hot gas, but our sample is not large enough for strong statistical significance.' author: - 'Beverly J. Smith' - Peter Wagstaff - Curtis Struck - Roberto Soria - Brianne Dunn - Douglas Swartz - 'Mark L. 
Giroux' title: | The Hot Gas Exhaust of Starburst Engines in Mergers:\ Testing Models of Stellar Feedback and Star Formation Regulation --- Introduction ============= Feedback from stellar winds, radiation pressure, and supernovae play a major role in regulating star formation, by heating, ionizing, and accelerating the interstellar gas and adding turbulence. However, the details of these processes are not well-understood. Computer simulations are frequently used to study stellar feedback and star formation, using various prescriptions to model the feedback. These processes are complicated to model because different feedback mechanisms help regulate star formation in different ways, multiple mechanisms operate simultaneously, and the different mechanisms affect each other. Radiation pressure from young stars disrupts molecular clouds, decreasing the amount of dense gas and preventing too-rapid gravitational collapse of clouds [@hopkins11; @hopkins13b; @hopkins14], while shock heating by supernovae and stellar winds is responsible for most of the hot X-ray-emitting gas in galaxies [@hopkins12b]. Before supernovae begin in a young star-forming region, radiation pressure and stellar winds clear out dense gas in star forming regions, heating and stirring the gas; later supernovae thus occur in lower density gas, causing the hot gas to survive longer and inhibiting subsequent star formation [@hopkins12a; @agertz13]. The more efficient the feedback, the lower the efficiency of subsequent star formation [@cox06c; @hopkins13b]. Supernovae provide both thermal energy, heating the gas, as well as kinetic feedback, which increases turbulence and thus affects later star formation [@springel00; @hopkins14]. Another way feedback regulates star formation is by removing gas from the main disk of the galaxy, either temporarily or permanently (e.g., [@muratov15]). 
Supernova-driven winds may drive gas out into the halo; this hot halo material may then cool and fall back in on the galaxy, triggering delayed star formation [@hopkins13a]. Winds due to supernovae may remove gas from the galaxy entirely; in some simulations the mass loss rate from supernovae-driven winds is greater than the star formation rate (SFR) [@hopkins12a; @hopkins13a]. The latest generation of simulations include multi-phase interstellar gas, to follow both the dense cores of molecular clouds where star formation occurs, the warmer atomic gas, and the hot intracloud medium [@hopkins13b; @hopkins14; @renaud13; @renaud14; @renaud15; @renaud19; @sparre16; @fensch17; @moreno19]. The results of such simulations sometimes depend upon the resolution of the simulation and the details of the calculations, with higher resolution models producing more efficient star formation [@teyssier10; @hopkins13a; @hayward14; @sparre16], and the duration and intensity of a starburst depending upon the prescription for feedback assumed in the model [@hopkins12a; @fensch17]. How stellar feedback is implemented in these codes has profound cosmological consequences. Stellar feedback is needed in cosmological simulations of galaxy formation and evolution to explain the observed galaxy mass function [@keres9], the galaxy stellar mass-halo mass relation [@hopkins14; @agertz15; @agertz16; @trujillo15] and the galaxian mass-metallicity relation [@finlator08; @ma16; @torrey19]. For cosmological models to reproduce the so-called galaxy main sequence (a correlation between stellar mass and star formation; [@brinchmann04; @noeske2007; @salim07]) or the Kennicutt-Schmidt Law (a relation between SFR and molecular gas mass; [@schmidt59; @kennicutt98; @kennicutt12]), stellar feedback is necessary [@hopkins14; @orr18]. To test these feedback models, X-ray observations are required. 
With high resolution X-ray imaging, the distribution, temperature, and mass of the hot gas within galaxies can be studied, and compared to other properties of the galaxies. In star-forming galaxies, the bulk of the hot gas is attributed to feedback from Type II supernova and young stars [@strickland00; @strickland04a; @strickland04b; @grimes05; @owen09; @li13; @mineo12; @smith18]. @hopkins12a model the X-ray production due to stellar feedback in different types of galaxies, and conclude that for normal spirals and dwarf galaxies, supernovae and stellar winds dominate, but in intense starbursts radiation pressure dominates. The soft X-rays from galactic winds originate from a small fraction of the total hot gas; the bulk of the hot gas is such low density it is difficult to observe directly [@strickland_stevens00]. Freely-flowing hot gas produces little X-ray emission, in contrast to hot gas confined by surrounding cooler gas [@hopkins12a]. Observational studies show that for star-forming galaxies, the X-ray luminosity from hot gas, L$_{\rm X}$(gas), is proportional to the SFR [@strickland04b; @grimes05; @mineo12; @smith18]. This is in contrast to some theoretical estimates, which predict that L$_{\rm X}$(gas) should be proportional to SFR$^2$ [@chevalier85; @zhang14]. More modern theoretical calculations including gravitational forces and improved radiative cooling are able to reproduce the observed L$_{\rm X}$(gas) $\propto$ SFR relation for star-forming galaxies if the mass-loading factor (mass outflow rate/SFR) decreases as SFR increases [@bustard16; @meiksin16]. The recent cosmological hydrodynamical simulations of @vandevoort16 including feedback find a constant L$_{\rm X}$(gas)/SFR ratio for galaxies with halo masses between 10$^{10.5}$ $-$ 10$^{12}$ M$_{\sun}$, where the Milky Way has a halo mass of $\sim$ 10$^{12}$ M$_{\sun}$. 
Over timescales of many gigayears, virialization of gas provided by stellar mass loss from older stars can contribute to the X-ray-emitting hot gas in galaxies, particularly in massive galaxies with low SFRs (e.g., [@ciotti91; @pellegrini98; @mathews03]). In quiescent early-type galaxies, this contribution dominates, as L$_{\rm X}$(gas) increases with mass rather than with SFR (e.g., [@osullivan01; @kim13; @su15; @goulding16]). The possible existence of this additional source of hot gas may need to be taken into account in interpreting X-ray data in terms of stellar feedback, particularly in galaxies with low SFRs and high masses. In the current study, our goal is to track the evolution of the hot gas in galaxies compared to the other components of the galaxies, particularly the molecular and atomic gas, and compare with expectations from theoretical models. This study is a follow-up to our earlier archival Chandra study of 49 nearby major mergers in a range of merger stages ([@smith18], hereafter Paper I). In the earlier study, we removed the resolved point sources and extracted the spectrum of the diffuse X-ray emission. We then separated this spectrum into a power law and a thermal component and corrected for internal absorption. Assuming the thermal component was due to hot gas, we compared the thermal luminosity L$_{\rm X}$(gas) with the global SFR as derived from UV/optical data. Although there is considerable system-to-system variation in the L$_{\rm X}$(gas)/SFR ratio, we did not see any trends of L$_{\rm X}$(gas)/SFR with merger stage, active galactic nuclei (AGN) activity, or SFR for galaxies with SFR $>$ 1 M$_{\sun}$ yr$^{-1}$. These results suggest that in star-forming galaxies, stellar feedback reaches an approximately steady-stage condition. 
In Paper I, we concluded that, for star forming galaxies, about 2% of the total energy output from supernovae and stellar winds is converted into X-ray flux; this result is in agreement with earlier results from smaller samples of galaxies [@grimes05; @mineo12]. In the current study, we revisit the same sample of mergers, and use the Chandra data to derive the spatial extent of the hot gas in these galaxies and therefore the mass of hot X-ray-emitting gas M$_{\rm X}$(gas). We compare M$_{\rm X}$(gas) with the amount of cold molecular and atomic hydrogen gas in these galaxies, as obtained from published carbon monoxide and 21 cm HI observations. Our goal is to better understand how interstellar gas cycles between hot and cold phases due to star formation and stellar feedback, and how this cycle affects the efficiency of star formation (SFE). In Section 2 of this paper, we review the selection of the sample and the available ultraviolet, infrared, and optical data. In Section 3, we explain the molecular and atomic hydrogen gas data. In Section 4, we determine the spatial extent of the hot gas in the galaxies. We obtain the volume and mass of hot X-ray-emitting gas and the electron density in Section 5. These values are then compared with other parameters of the systems in Section 6. The results are discussed in Section 7, and conclusions are provided in Section 8. Sample Selection and UV/IR/Optical Data ======================================= The sample selection is described in detail in Paper I. Briefly, the sample includes 49 pre-merger interacting pairs, post-merger remnants, and mid-merger systems in the nearby Universe (distance $<$ 180 Mpc). Initially, galaxies were chosen based on their morphologies from the @arp66 Atlas of Peculiar Galaxies, or from other published surveys of mergers and merger remnants, selecting approximately equal-mass interacting pairs or the remnants of the merger of such pairs. 
The final sample was then selected based on the availability of suitable Chandra data. See Paper I for details. The sample of galaxies is given in Table 1. Table 1 also provides basic data on these systems from Paper I, including distances assuming a Hubble constant of 73 km s$^{-1}$ Mpc$^{-1}$, correcting for peculiar velocities due to the Virgo Cluster, the Great Attractor, and the Shapley Supercluster. The median distance for our sample galaxies is 51.5 Mpc. Table 1 also provides the far-infrared luminosity L$_{\rm FIR}$ and the near-infrared K band luminosity L$_{\rm K}$, obtained from Infrared Astronomical Satellite (IRAS) and 2-micron All-Sky Survey (2MASS) data, respectively, as described in Paper I. In addition, Table 1 includes SFRs, derived from a combination of Spitzer infrared and GALEX UV photometry as described in Paper I. When available, the far-UV (FUV) is used; otherwise, near-UV (NUV) photometry is used. These SFRs correspond to the SFR averaged over a time period of $\sim$100 Myrs [@kennicutt12]. Table 1 also identifies the 13 galaxies in the sample that are classified in the NASA Extragalactic Database (NED[^1]) as Seyfert 1, Seyfert 2, or Low Ionization Nuclear Emission Region (Liner) galaxies. Detailed descriptions of the individual galaxies in the sample are provided in the Appendix of Paper I. Based on their morphologies, in Paper I we classified the systems into seven merger stages. These stages are: (1) separated but interacting pair with small or no tails, (2) separated pair with moderate or long tails, (3) pair with disks in contact, (4) common envelope, two nuclei, and tails, (5) single nucleus and two strong tails, (6), single nucleus but weak tails, and (7) disturbed elliptical with little or no tails. The staging is approximate, with an uncertainty of $\pm$1 stage. 
In Figure 1, various properties of the galaxies (distance, L$_{\rm FIR}$, the L$_{\rm FIR}$/L$_{\rm K}$ ratio, and the IRAS 60 $\mu$m to 100 $\mu$m flux ratio F$_{60}$/F$_{100}$) are plotted against the merger stage. The black open squares in Figure 1 are the data for the individual galaxies; the blue filled diamonds that are offset slightly to the left of the stage show the median value for that stage. The errorbars on the blue diamonds show the semi-interquartile range, equal to half the difference between the 75th percentile and the 25th percentile. As discussed in Paper I, this sample is inhomogeneous because it was selected based on the availability of archival Chandra data. As illustrated in Figure 1, the sample has some biases. The galaxies in the middle of the merger sequence tend to be more distant and so tend to have higher FIR luminosities. This means they have higher SFRs, since L$_{\rm FIR}$ is an approximate measure of the SFR for galaxies with high SFRs (e.g., [@kennicutt98; @kennicutt12]). The late-stage mergers tend to be closer and have lower L$_{\rm FIR}$. Late-stage mergers are difficult to identify at large distances, thus confirmed examples tend to be nearby. The late-stage mergers also tend to have lower L$_{\rm FIR}$/L$_{\rm K}$ ratios. This ratio is an approximate measure of the specific SFR (sSFR), defined as the SFR/stellar mass, since the K band luminosity L$_{\rm K}$ is an approximate measure of the stellar mass [@maraston98; @bell00; @into2013; @andreani2018], although it is affected by age and possible AGN contributions. The mid-merger systems also tend to have higher dust temperatures, as traced by the IRAS F$_{60}$/F$_{100}$ ratio (last panel Figure 1). The uncertainty in the staging, the biases in the sample, and the small number of systems in each stage means trends with merger stage are uncertain. As seen in Figure 1, the AGN tend to be mid-merger systems with high L$_{\rm FIR}$ and F$_{60}$/F$_{100}$. 
Although AGN can contribute to the heating of interstellar dust in galaxies, for most of our AGNs published studies of the IR spectra of the galaxies conclude that dust heating is dominated by star formation rather than the AGN (see the detailed discussions on the individual galaxies in the Appendix of Paper I). Figure 2 displays some well-known correlations between these basic parameters. The observed correlation between SFR and L$_{\rm K}$ (top left panel) or its equivalent has been seen many times before for star forming galaxies (e.g., [@smith98; @andreani2018]). This relation is a consequence of the correlation between SFR and stellar mass, which is known as the ‘galaxy main sequence’ for star forming galaxies (e.g., [@brinchmann04; @salim07]). For our sample, this correlation is only a weak correlation, because of the inclusion of some systems with low SFRs compared to L$_{\rm K}$. Galaxies with low SFR compared to the best-fit ‘galaxy main sequence’ relation are considered quenched, quenching, or post-starburst galaxies. In our sample, our post-starburst galaxies are all late-stage mergers, and have low L$_{\rm FIR}$/L$_{\rm K}$ ratios. Figure 2 shows that the SFR is correlated with both L$_{\rm FIR}$/L$_{\rm K}$ and the Spitzer \[3.6 $\mu$m\] $-$ \[24 $\mu$m\] color for our sample galaxies[^2]. The majority of our galaxies fall in a narrow range of L$_{\rm FIR}$/L$_{\rm K}$, $-$1 $\le$ log L$_{\rm FIR}$/L$_{\rm K}$ $\le$ 0, but a handful have lower L$_{\rm FIR}$/L$_{\rm K}$ ratios (the post-starburst systems with low SFRs) and a few have higher L$_{\rm FIR}$/L$_{\rm K}$ ratios. The \[3.6\] $-$ \[24\] color is an approximate measure of the ratio of the number of young-to-old stars (e.g., [@smith07]), increasing with increasing proportions of young stars. This means that \[3.6\] $-$ \[24\] is another approximate measure of the sSFR. Figure 2 also shows that F$_{60}$/F$_{100}$ is weakly correlated with SFR, with considerable scatter. 
This relation or its equivalent has been noted before (e.g., [@soifer1987; @smith1987]). Higher F$_{60}$/F$_{100}$ ratios imply hotter dust on average and more intense UV interstellar radiation fields (ISRF) (e.g., [@desert90]), which are correlated but not perfectly with the overall SFR of the galaxy. Atomic and Molecular Interstellar Gas ===================================== In the current study, we compare the hot X-ray-emitting gas mass in these galaxies with the interstellar molecular and atomic hydrogen gas masses. We obtained published measurements of the 2.6 mm CO (1 $-$ 0) fluxes of the sample galaxies from the literature, and used these to derive molecular gas masses. Since there is some uncertainty as to the relation between the CO luminosity and the molecular gas mass, we converted the CO fluxes into molecular gas masses M$_{\rm H_2}$ by two methods. First, we calculated M$_{\rm H_2}$ for all galaxies assuming a constant conversion equal to the Galactic conversion factor between H$_2$ column density N(H$_2$)(cm$^{-2}$) and CO intensity I(CO) of N(H$_2$)(cm$^{-2}$) = 2.0 $\times$ 10$^{20}$ I(CO)(K km s$^{-1}$) [@dame01; @bolatto13]. The Galactic conversion is thought to be appropriate for most galaxies, however, low metallicity systems may be deficient in CO relative to H$_2$, while extreme starburst galaxies may have enhanced CO/H$_2$ ratios (e.g., [@downes98; @bolatto13]). Thus for comparison we made a second estimate of M$_{\rm H_2}$ using a variable CO/H$_2$ ratio. For galaxies with L$_{\rm FIR}$ $>$ 10$^{11}$ L$_{\sun}$ (e.g., extreme starbursts), we used a lower conversion factor of 4 $\times$ 10$^{19}$ cm$^{-2}$/(K km s$^{-1}$) (e.g., [@Ueda2014]). For galaxies with low K band luminosities (e.g., possible low metallicity systems), L$_{\rm K}$ $<$ 10$^{10}$ L$_{\sun}$, we used an enhanced ratio of 5 $\times$ 10$^{20}$ cm$^{-2}$/(K km s$^{-1}$) (e.g., [@bolatto13]). For all other galaxies we used the standard Galactic value given above. 
Since accurate metallicities are not available for all of the galaxies in our sample and because there is some uncertainty as to how the CO/H$_2$ ratio varies with metallicity, we do not use a more complicated metallicity-dependent conversion in this study. In Section 6 of this paper, we compare various properties of the galaxies. We do the correlation analysis with both CO/H$_2$ ratios, to test whether our conclusions are influenced by our choice of CO/H$_2$ conversion factors. Molecular gas masses calculated with a constant CO/H$_2$ ratio equal to the Galactic value are provided in column 2 of Table 2. Molecular gas masses calculated with the variable CO/H$_2$ ratio are given in column 3 of Table 2. The reference for the original CO measurement is given in column 4 of Table 2. Note that molecular masses are not available for all of the galaxies in the sample. In some cases, no CO observations have been published. In other cases, only measurements of the central region have been made, where the beamsize is significantly smaller than the optical extent of the galaxy. In those cases, we are not able to get reliable upper limits to the global molecular gas content so no molecular gas mass is listed in Table 2. Followup CO observations would be useful to complete the molecular gas census of the sample galaxies. In the bottom row of Figure 3, the star formation efficiency, which we define as the global SFR/M$_{\rm H_2}$ ratio for the galaxy[^3], is plotted against the merger stage. The left panel of Figure 3 has SFE calculated with a constant CO/H$_{2}$ ratio and the right with the variable CO/H$_{2}$ ratio. These two determinations of the SFE are included in Table 2 in columns 5 and 6, respectively. As in Figure 1, the black open squares in Figure 3 are the data for the individual galaxies; the blue filled diamonds that are offset slightly to the left of the stage show the median value for that stage. 
The errorbars on the blue diamonds show the semi-interquartile range, equal to half the difference between the 75th percentile and the 25th percentile. Systems in the middle merger stages tend to have higher SFEs than those in the early stages. This is consistent with earlier surveys that found that L$_{\rm FIR}$/M$_{\rm H_2}$ is enhanced near nuclear coalescence [@casoli91; @georgakakis00]. Given the small numbers of galaxies per merger stage in our sample and the spread in the data per merger stage, however, this result is uncertain for our sample, especially if one also takes into account the uncertainties in the CO/H$_2$ ratio, and the selection effects. Because of these factors any trends with merger stage are uncertain for our sample. We also scoured the literature for measurements of the global HI masses of our galaxy sample. These values are tabulated in column 7 of Table 2, and the reference for the HI data is given in column 8. In Figure 3, we provide plots of merger stage vs. quantities derived from the CO and HI data. The top row of Figure 3 compares M$_{\rm HI}$/M$_{\rm H_2}$ with the merger stage. In the left panel, we assume a constant CO/H$_{2}$ ratio in calculating M$_{\rm H_2}$, while in the right panel we use the variable CO/H$_{2}$ ratio. An apparent increase in the HI gas fraction in the late stages of the merger sequence (left panel) is weakened when a variable CO/H$_2$ ratio is used (right panel). The SFE is plotted against dust temperature as measured by the IRAS 60 $\mu$m to 100 $\mu$m flux ratio in the two top panels of Figure 4, for the two CO/H$_2$ conversion factors. A trend is clearly visible, in that hotter dust is correlated with more efficient star formation. This relation is well-known (e.g., [@young86; @Sanders1991]). Note that the scatter is larger with the variable CO/H$_2$ ratio than for the constant conversion factor. In the bottom two panels of Figure 4, we compare the SFE with the SFR for the two conversion factors. 
There is a trend in that systems with the highest SFRs have high SFEs; however, there is a lot of scatter, and there are some low SFR systems with high SFEs. A spread in the SFE for a given SFR has been observed before (e.g., [@young86; @Sanders1991; @Young1996; @sanders96; @daddi10]). The scatter in the SFE vs. SFR correlation may be due to variations in the fraction of the CO-emitting gas involved in star formation. This would lead to variations in the SFE according to our definition, SFR/M$_{\rm H_2}$, where H$_2$ is derived from CO observations. Larger SFE may mean that a larger fraction of the CO-emitting cold molecular gas is in a dense state, an idea that is supported by both observations [@solomon92; @gao04; @juneau09; @wu10] and simulations [@teyssier10; @renaud14; @renaud19; @sparre16]. These simulations show that an increase in turbulent compression during an interaction can cause the gas probability density function to shift to higher densities, producing an increase in the amount of very high density gas. Thus the variations in SFE from galaxy to galaxy may be caused by differences in the dynamical state of the galaxies.

X-Ray Spatial Extent
====================

All of the sample galaxies were observed with the Chandra ACIS-S array, and all of the galaxies fit well within the 8.3 $\times$ 8.3 arcminute field of view of the S3 chip of this array. Details of the individual observations, including exposure times and ObsID numbers, are provided in Paper I. In Paper I, we identified point sources in the field using the [*ciao*]{} software tool [*wavdetect*]{}. The point sources themselves and their statistics were the subject of another paper [@smith12]. After removing the point sources, in Paper I we used the Chandra Interactive Analysis of Observations ([*ciao*]{}) software routine [*specextract*]{} to extract the diffuse X-ray spectrum within the optical B band 25 mag arcsec$^{-2}$ isophote. 
This optical isophote was measured on Sloan Digital Sky Survey (SDSS) g images using standard g-to-B conversion factors, or, if SDSS images weren’t available, equivalent levels on GALEX near-UV images were used (see Smith et al. 2018 for details). In Paper I, we used the [*xspec*]{}[^4] software to fit the 0.3 $-$ 8.0 keV background-subtracted point-source-removed spectrum within the $\mu$$_{\rm B}$ = 25 mag arcsec$^{-2}$ isophote (i.e., D25) to a combination power law plus thermal (MEKAL) spectrum, assuming a power law photon index of 1.8 and correcting for both Galactic and internal absorption. The power law component is assumed to be caused by faint unresolved point sources. The absorption-corrected 0.3 $-$ 8.0 keV luminosities for the MEKAL component are provided in Table 1 of the current paper; we assume that the MEKAL component is from hot gas. These X-ray luminosities have been corrected for absorption within the galaxies as described in Paper I. In the current study, we measured the spatial extent of the diffuse soft X-ray flux in these galaxies, and use these estimates to calculate the electron densities and masses of the hot X-ray-emitting gas. Our procedure is as follows. After initial processing and deflaring of the data as described in Paper I, we constructed 0.3 $-$ 1.0 keV maps of each galaxy. We then made an initial estimate of the spatial extent of the low energy diffuse X-ray emission by eye from the 0.3 $-$ 1.0 keV maps assuming an elliptical distribution, estimating the centroid of the emission, the radial extent, the ellipticity, and the position angle of the emission. For some of the pre-merger systems, two distinct regions of diffuse light are seen, associated with the two galaxies in the pair, so two elliptical regions were marked and the two regions were treated separately. 
We then divided these ellipses into a set of concentric elliptical annuli, and determined background-subtracted 0.3 $-$ 1.0 keV counts and photon flux surface brightness in each annulus using the [*ciao*]{} routine [*dmextract*]{}, excluding the point sources detected by the [*wavdetect*]{} software. For background subtraction, we used large areas outside of the optical extent of the galaxies excluding bright point sources. All of our target galaxies have small enough angular size(s) such that we can obtain sufficient background regions on the same chip. The flux calibration was done using a 0.8 keV monoenergetic exposure map. When multiple datasets were available, each set was calibrated individually and the results combined. We then produced radial profiles for each galaxy. The derivation of the radial profiles was done iteratively, modifying the initial region on the sky and the annuli widths until good radial profiles were produced. We started by dividing the initial preliminary ellipse into 10 radial annuli, adding three more annuli outside of the initial radius for a total of 13 annuli. If there were too few counts to get a good radial profile with 13 annuli, we divided the initial ellipse into only five annuli, and added two outside the initial region for a total of seven annuli. In some low S/N cases, to get sufficient counts it was necessary to divide the initial ellipse into only three annuli, plus two additional annuli outside, for a total of five annuli. In total, we were able to derive radial profiles for 28 systems by this method, with 16 using 13 annuli, six using seven annuli, and six using five annuli. The final background-subtracted radial profiles of the diffuse emission as obtained from [*dmextract*]{} are displayed in Figures 5 - 7, after conversion into 0.3 $-$ 1.0 keV surface brightness in units of photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. 
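The annulus schemes described above (10+3, 5+2, or 3+2) amount to dividing the initial by-eye ellipse into equal-width annuli and appending extra annuli of the same width beyond it; a minimal sketch, with the initial semi-major axis chosen for illustration:

```python
def annulus_edges(a_initial, n_inner, n_outer):
    """Outer semi-major axes of concentric elliptical annuli: n_inner
    equal-width annuli inside a_initial, plus n_outer beyond it."""
    step = a_initial / n_inner
    return [step * k for k in range(1, n_inner + n_outer + 1)]

# 13-annulus configuration for a hypothetical 30-arcsec initial ellipse;
# the outermost annulus edge lands at 39 arcsec.
edges = annulus_edges(30.0, 10, 3)
```

The minor axis is scaled by the same factors, so each annulus keeps the ellipticity and position angle of the initial region.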
In most cases, these radial profiles are centrally-peaked, but there are a few exceptions, most notably Arp 244 and Arp 299 (see Figure 6). As a check on these results, we also obtained radial profiles using a different method. Instead of [*dmextract*]{}, we used the [*ciao*]{} routine [*specextract*]{} to extract the soft (0.3 $-$ 1.0 keV) X-ray spectra for each annulus. When multiple datasets are available, the “combine = yes” option was used, which calibrates each dataset individually and then coadds the weighted spectra. The [*ISIS*]{} (Interactive Spectral Interpretation System) software[^5] [@houck00] was then used to derive background-subtracted 0.3 $-$ 1.0 keV counts and photon fluxes in each annulus, taking into account the calibrated response function of the detector. These two procedures give reasonably consistent radial profiles, with the [*dmextract*]{} method giving lower fluxes by about a factor of 1.2 and somewhat smaller uncertainties. In the subsequent determination of the radial extent of the X-ray emission in the galaxies and the following analysis, we used the [*dmextract*]{}-determined radial profiles. Our goal in this paper is to obtain the physical size of the hot gas distribution within the galaxies, to derive electron densities and hot gas masses. An issue, however, is that how far out in the galaxy the X-ray emission can be detected depends upon the exposure time for the observations and the width of the annulus that is used. For the same annulus width, more sensitive observations can detect gas further out in the galaxy. This would give larger radii for the hot gas extent, although the gas in the outskirts may contribute little to the overall X-ray luminosity of the galaxy. This could lead to a bias in the analysis, producing larger volumes of hot gas for longer observations, which will affect the derivation of the electron densities and therefore the masses of hot gas. 
Because this is an archival sample, there is a large galaxy-to-galaxy variation in the observing times used. To get around this issue, it is desirable to use a consistent definition for the radius from galaxy to galaxy. In past studies of the hot gas distribution of galaxies, a number of different methods have been used to determine the volume of hot gas. For example, @boroson11 and @goulding16 measured the extent out to where the diffuse emission equals the background. @mcquinn18 used a similar method, measuring the extent out to where the diffuse emission is detected at a 2$\sigma$ level. Other groups measured the emission within the optical D25 isophote or the optical effective radius, and used this extent as the X-ray size in deriving electron densities [@mineo12; @su15; @gaspari2019]. A third method was used by @strickland04a and @grimes05, who used the radius which encloses a given fraction of the total 0.3 $-$ 1.0 keV flux. They find that the 90% enclosed-light fraction corresponds to a 0.3 $-$ 1.0 keV surface brightness between $\sim$10$^{-9}$ and 10$^{-8}$ photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ for their sample galaxies. Because our dataset is so heterogeneous, after some experimentation we chose to measure the radial extent out to a consistent 0.3 $-$ 1.0 keV surface brightness level for all of the sample galaxies. To decide on this level, we explored how the enclosed-light fraction varies with different surface brightness cutoffs, assuming that the counts within the optical B band 25 mag arcsec$^{-2}$ isophote are the ‘total’ flux (this issue is discussed further below). Upon experimentation, we found that for most galaxies a 0.3 $-$ 1.0 keV surface brightness cutoff of 3 $\times$ 10$^{-9}$ photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ produced counts that agreed with the total counts within 10%. This is consistent with the @grimes05 and @strickland04a results for their 90% enclosed-light fractions. 
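Applying the adopted surface-brightness cutoff to a radial profile reduces to finding the outermost annulus still above the cutoff and checking the enclosed-count fraction; a minimal sketch, with invented profile values:

```python
CUTOFF = 3e-9  # photons s^-1 cm^-2 arcsec^-2, the adopted level

def extent_and_fraction(radii, surf_bright, counts, total_counts):
    """Outermost radius with surface brightness >= CUTOFF, plus the
    fraction of total_counts enclosed within that radius."""
    above = [i for i, s in enumerate(surf_bright) if s >= CUTOFF]
    if not above:
        return None, 0.0
    last = above[-1]
    return radii[last], sum(counts[:last + 1]) / total_counts

# Hypothetical 4-annulus profile (radii in arcsec):
r, frac = extent_and_fraction([10, 20, 30, 40],
                              [1e-8, 5e-9, 2e-9, 1e-9],
                              [50, 30, 10, 5], 95)
```

In this toy case the extent is 20 arcsec and the enclosed fraction is about 84%, mimicking the behavior described above where the cutoff recovers most of the D25 counts.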
There were 18 systems that were detected in the MEKAL component in Paper I but had too few counts for us to derive an acceptable radial profile. For these galaxies, we derived approximate sizes by starting with the initial by-eye elliptical regions, then iteratively increasing the size of the ellipse by 30% until the galaxy is detected at the $\ge$2$\sigma$ level and the 0.3 $-$ 1.0 keV counts in the expanded ellipse equal those in the $\mu$$_{\rm B}$ = 25 mag arcsec$^{-2}$ isophote within the uncertainties. For the widely separated pre-merger pairs with two distinct regions of hot gas within the two optical galaxies, the two galaxies in the pair were treated separately in this procedure. Four of the galaxies for which we could not find a radial profile (Arp 163, Arp 235, Arp 243, and Arp 263) were undetected in the MEKAL component in Paper I. These four galaxies are not included in any of the subsequent plots in this paper which involve quantities derived from the spatial size of the X-ray emission. Another galaxy, UGC 02238, was nominally detected in the MEKAL component at the 2.6$\sigma$ level in the spectral decomposition in Paper I, however, in the 0.3 $-$ 1.0 keV map it was not detected within the optical extent of the galaxy at the 2$\sigma$ level. It is also omitted from the subsequent analysis in the current paper. Another system, UGC 05189, was undetected in Paper I in the MEKAL component, however, we detected the inner disks of both galaxies in the pair at the 5$\sigma$ level in the 0.3 $-$ 1.0 keV map. The area covered by the diffuse gas is considerably smaller than the optical extent, which might explain the non-detection in the spectral decomposition. Except for the five systems for which we could not derive radial profiles, the Chandra 0.3 $-$ 1.0 keV maps of the sample galaxies are displayed in the Appendix of this paper (Figures 20 - 27). 
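The grow-until-consistent procedure for the low-count systems can be sketched as follows; `count_in_ellipse` stands in for a hypothetical measurement callback returning background-subtracted counts and their uncertainty within given semi-axes:

```python
def grow_ellipse(a, b, d25_counts, d25_err, count_in_ellipse, max_iter=20):
    """Inflate the by-eye ellipse by 30% per step until the source is
    detected at >= 2 sigma and its counts match the D25 counts within
    the combined uncertainty.  Returns the final semi-axes."""
    for _ in range(max_iter):
        c, err = count_in_ellipse(a, b)
        combined = (err ** 2 + d25_err ** 2) ** 0.5
        if c >= 2 * err and abs(c - d25_counts) <= combined:
            return a, b
        a, b = 1.3 * a, 1.3 * b
    return a, b
```

The 30% growth factor and the two stopping conditions follow the text; the convergence tolerance (quadrature-combined uncertainties) is our reading of "within the uncertainties".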
In Table 3, we provide the central coordinates, major and minor axis radii, and position angles of the final ellipses derived using the methods described above, with the [*dmextract*]{}-derived sizes at 3 $\times$ 10$^{-9}$ photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ level. Table 3 also gives the number of annuli used in the radial profile (13, 7, 5, or 1). For systems with two distinct regions of diffuse emission, two ellipses are given in Table 3. In those cases, the name of the specific galaxy in the pair associated with the particular region is identified in the second column of Table 3. When the X-ray flux only comes from one galaxy in a pair, the name of that individual galaxy is listed in Table 3. If both galaxies in a pair are covered by a single region of diffuse emission, both names are given in the second column of Table 3. If there is only one galaxy in the system, the second column gives an alternative name for the galaxy. Table 3 also provides the point-source subtracted, background-subtracted 0.3 $-$ 1.0 keV counts in the final ellipse. Table 3 does not include UGC 02238 or the four systems without radial profiles that are undetected in the thermal component in Paper I. The final ellipses are superimposed on images of the galaxies in the Appendix of the paper. In Figure 8, we compare the background-subtracted 0.3 $-$ 1.0 keV counts obtained within the $\mu$$_{\rm B}$ = 25 mag arcsec$^{-2}$ isophote with those extracted within the Table 3 radial extents. The solid line on this plot is the one-to-one relation, and the dashed lines mark $\pm$ 10% differences. The systems marked by green hexagons in Figure 8 are those for which we were not able to find a radius using a set of concentric annuli. For most of the galaxies in the sample, the two measurements of the X-ray counts agree within the uncertainties with the range marked by the dotted lines. 
For only one system, IRAS 17208-0014, do the total counts in the 3 $\times$ 10$^{-9}$ photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ isophote exceed those in the optical isophotes by 10% or more, taking into account the uncertainties (i.e., only one system lies below the bottom dotted line). For IRAS 17208-0014, the X-ray radial extent in Table 3 exceeds the optical D25 size by a factor of 1.5, and the 0.3 $-$ 1.0 keV counts within the Table 3 ellipse are about 2.2 times those within the optical isophotes. Taking into account the uncertainties, four systems in our sample have counts within the ‘best’ radii that are 60% $-$ 80% of the counts within the optical extent, and one (UGC 05189) has counts within the ‘best’ radii that are 50% of the counts in the optical isophotes. These systems lie above the top dotted line in Figure 8. Most of these systems are galaxy pairs which have two distinct regions of X-ray emission within the $\mu$$_{\rm B}$ = 25 mag arcsec$^{-2}$ isophote. Very faint diffuse emission outside of these regions may contribute to the total counts in the optical extent. This faint emission likely doesn’t contribute much to the overall mass of hot gas in the system. For all but one of our systems, in our [*dmextract*]{} radial profiles we can measure X-ray emission beyond the 3 $\times$ 10$^{-9}$ photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ isophote. For completeness, for these systems we provide the full (2$\sigma$) extent of the X-ray emission in another table, Table 4. In all but 11 of these cases, the 2$\sigma$ extent of the diffuse X-ray emission is 20% or more larger than the optical D25 size. The most extreme case is Markarian 231 for which the ratio of the 2$\sigma$ X-ray radius divided by the maximum $\mu$$_{\rm B}$ = 25 mag arcsec$^{-2}$ radius is 2.6. For Markarian 231, the counts within the 2$\sigma$ extent are about 1.5 times those in the D25 radius. 
Although the measured 2$\sigma$ X-ray sizes are often larger than the D25 extent, the 0.3 $-$ 1.0 keV counts within the 2$\sigma$ radius are generally less than or consistent with the counts within the D25 radius. This means that the emission outside of D25 does not contribute significantly to the total flux.

Volume and Mass of Hot Gas, Electron Densities, and Filling Factor
==================================================================

We calculated the volume of hot gas for each system in the sample, assuming that the hot gas distribution has an ellipsoidal structure with the third dimension equal to the average of the other two. For these calculations, we use the X-ray sizes at the 3 $\times$ 10$^{-9}$ photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ isophote as discussed above (Table 3). For the pre-merger systems for which we could measure two distinct regions of hot gas, we calculated the sum of the two volumes. The uncertainty in the geometry of the hot gas likely contributes scatter to the relationships shown below. Although the true 3-dimensional distribution of the hot gas in a particular galaxy is unknown, assuming random orientations in space we can use the statistics of the observed ellipticities of the diffuse X-ray emission on the sky (Table 3) to make a rough estimate of the average uncertainty in the volume. The average major/minor axial ratio of the diffuse X-ray emission on the sky is 1.50, with an rms of 0.39. We therefore assume that the line-of-sight dimension on average will range from 1.5 times bigger than the average of the other two dimensions, to 1.5 times smaller. Thus we assume that our estimates of the volume are uncertain by a factor of 1.5. Using the derived volumes of hot gas, we estimated electron densities in the hot gas as a function of filling factor. 
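The ellipsoidal volume assumption (line-of-sight semi-axis set to the mean of the two sky-plane semi-axes) and the adopted factor-of-1.5 uncertainty translate directly to:

```python
import math

def hot_gas_volume(a_cm, b_cm):
    """Ellipsoid volume with third semi-axis = mean of the other two."""
    c_cm = 0.5 * (a_cm + b_cm)
    return (4.0 / 3.0) * math.pi * a_cm * b_cm * c_cm

# Hypothetical semi-axes of roughly 10 and 6.5 kpc, in cm:
v = hot_gas_volume(3.0e22, 2.0e22)
v_lo, v_hi = v / 1.5, v * 1.5   # adopted factor-of-1.5 uncertainty range
```

For paired systems with two distinct emission regions, the two volumes computed this way are summed, as described above.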
For this calculation, we used the relation L$_{\rm X}$(gas) = $\Lambda$n$_{\rm e}$$^2$fV, where L$_{\rm X}$(gas) is the absorption-corrected 0.3 $-$ 8.0 keV X-ray luminosity of the hot gas from Paper I (the MEKAL component), $\Lambda$ is the cooling function [@mckee77; @mccray87], V is the volume of gas, n$_{\rm e}$ is the electron density, and f is the filling factor. In this calculation, we assumed that the number of hydrogen atoms $\sim$ n$_{\rm e}$. The derived gas masses depend upon the temperature of the X-ray-emitting gas. Unfortunately, for only 15 systems were we able to obtain a fit for the gas temperature in Paper I (see Table 5 in that paper). For the remaining systems, we assumed a temperature of 0.3 keV. In Section 6.5 of this paper, we investigate how this assumption affects our results. In calculating n$_{\rm e}$, we neglect X-ray emission outside of the 0.3 $-$ 8.0 keV Chandra bandpass, however, emission outside of this range may also contribute to cooling the gas. In Section 6.5, we discuss this approximation and how it depends upon temperature. From the X-ray luminosity, the volume, and the temperature we derive n$_{\rm e}$$\sqrt{f}$ for our sample galaxies; we are not able to independently determine n$_{\rm e}$ and f. We find that n$_{\rm e}$$\sqrt{f}$ ranges from 1.1 $\times$ 10$^{-3}$ $-$ 2.2 $\times$ 10$^{-2}$ cm$^{-3}$, similar to the values found by @mineo12 for their spirals. Accounting for the uncertainty in volume and conservatively assuming a factor of two uncertainty in L$_{\rm X}$(gas) (due in part to uncertainties in separating the thermal and non-thermal emission; see Section 5.3 in Paper I), propagation of errors implies that our estimates of n$_{\rm e}$$\sqrt{f}$ are uncertain by a factor of 1.8 on average. The radiative cooling times for the hot gas (i.e., total thermal energy divided by L$_{\rm X}$(gas)) in these galaxies range from 16 to 700 Myrs, with a median time of about 60 Myrs. 
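The density and cooling-time estimates above follow from $L_{\rm X} = \Lambda n_{\rm e}^2 f V$. In the sketch below, the cooling-function value $\Lambda$ is a hypothetical constant appropriate to $\sim$0.3 keV gas; the paper interpolates published cooling functions, so the numbers here are only indicative:

```python
import math

K_B_ERG = 1.380649e-16   # Boltzmann constant, erg K^-1
K_B_KEV = 8.617e-8       # Boltzmann constant, keV K^-1
LAMBDA = 3.0e-23         # erg cm^3 s^-1, assumed cooling-function value

def ne_sqrt_f(l_x, volume, lam=LAMBDA):
    """n_e * sqrt(f) from L_X = Lambda * n_e^2 * f * V."""
    return math.sqrt(l_x / (lam * volume))

def cooling_time(l_x, volume, kt_kev, lam=LAMBDA, f=1.0):
    """Thermal energy (~3 n_e k T per unit volume) over L_X, in seconds."""
    n_e = ne_sqrt_f(l_x, volume, lam) / math.sqrt(f)
    t_kelvin = kt_kev / K_B_KEV
    return 3.0 * K_B_ERG * t_kelvin / (lam * n_e)
```

With L$_{\rm X}$ = 10$^{40}$ erg s$^{-1}$, V = 10$^{66}$ cm$^3$, kT = 0.3 keV and f = 1, this gives n$_{\rm e}$$\sqrt{f}$ $\approx$ 0.02 cm$^{-3}$ and a cooling time of order 10$^{15}$ s ($\sim$80 Myr), in the range quoted above.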
These are similar to the @mineo12 estimates for disk galaxies. We then calculated the mass of the hot X-ray-emitting gas M$_{\rm X}$(gas) = m$_{\rm p}$n$_{\rm e}$V, where m$_{\rm p}$ is the mass of a proton. Accounting for the uncertainties in V and n$_{\rm e}$, we adopt an uncertainty in our estimates of M$_{\rm X}$(gas) of a factor of two, not including the uncertainty in the filling factor.

Correlation Analysis
====================

From the Chandra data we derived a set of parameters for our sample galaxies, including the volume of hot gas, n$_{\rm e}$$\sqrt{f}$, L$_{\rm X}$(gas), and M$_{\rm X}$(gas). From data at other wavelengths, we have another set of values for our galaxies, including SFR, SFE, L$_{\rm FIR}$, L$_{\rm K}$, L$_{\rm FIR}$/L$_{\rm K}$, \[3.6\] $-$ \[24\], F$_{60}$/F$_{100}$, M$_{\rm HI}$/M$_{\rm H_2}$, and the merger stage. Combining these two sets, we derive additional parameters, including L$_{\rm X}$(gas)/SFR, M$_{\rm X}$(gas)/SFR, and M$_{\rm X}$(gas)/(M$_{\rm H_2}$+M$_{\rm HI}$). In this section, we correlate these parameters against each other, and calculate the best-fit linear log vs. log relations for various combinations of these parameters. In Paper I, we found that some trends change at low SFRs, so we did these fits for two cases: the full range of SFRs and the subset of systems with SFR $>$ 1 M$_{\sun}$ yr$^{-1}$. For each relation, we calculated the root mean square (rms) deviation from the best-fit line and the Spearman rank order coefficient. These values are compiled in Table 5, along with the best-fit parameters. For comparison with the Spearman coefficients, Table 5 also provides Pearson correlation coefficients, which assume a linear relationship between the two parameters. The two types of correlation coefficients agree fairly well for our sample (see Table 5). In Table 5, we classified the relations into “strong correlation”, “weak correlation”, or “no correlation”. 
We defined a “strong correlation" as one in which the Spearman coefficient is greater than 0.55 (i.e., $\le$0.1% likelihood of happening by chance), and a “strong anti-correlation" is one in which the Spearman coefficient is less than $-$0.55. A “weak correlation" is one in which the Spearman coefficient is between 0.35 and 0.55, where 0.35 corresponds to a $\sim$5% probability of happening by chance. A “weak anti-correlation" implies a Spearman coefficient between $-$0.35 and $-$0.55, and “no correlation" means a Spearman coefficient between $-$0.35 and 0.35. The most important of the correlations are plotted in Figures 9 $-$ 19. For convenience, when a plot is shown, the number of the figure which displays each correlation is provided in Table 5 along with the best-fit parameters and the Spearman/Pearson coefficients. For clarity of presentation, we do not include errorbars on the plots. As discussed above, we estimate that our values of M$_{\rm X}$(gas) are uncertain by about a factor of two. This means that the uncertainty in log(M$_{\rm X}$(gas)) is about 0.3 dex. The rms uncertainties on some of the fits involving log(M$_{\rm X}$(gas)) are close to or slightly larger than this estimate (see Table 5), so the uncertainty in M$_{\rm X}$(gas) may be a limiting factor in this analysis. Because the uncertainty in the CO/H$_2$ ratio is potentially an even larger factor, we do the correlations for both CO/H$_2$ ratios. This provides a test of whether the results are biased by the choice of CO/H$_2$ ratios. As another test, we also ran the correlation analysis using radii and volumes determined from the [*specextract/ISIS*]{} radial profiles rather than [*dmextract*]{}. The best-fit relations and correlation coefficients changed slightly but the basic conclusions of the paper were unchanged. The relations given below were derived from the [*dmextract*]{} results. 
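The correlation-strength thresholds defined above map onto a simple decision function; the behavior exactly at the $\pm$0.55 and $\pm$0.35 boundaries is our choice of convention, since the text does not specify it:

```python
def classify_spearman(rho):
    """Label a Spearman coefficient using the Table 5 thresholds."""
    if rho > 0.55:
        return "strong correlation"
    if rho < -0.55:
        return "strong anti-correlation"
    if rho >= 0.35:
        return "weak correlation"
    if rho <= -0.35:
        return "weak anti-correlation"
    return "no correlation"
```

The thresholds correspond, for this sample size, to roughly 0.1% and 5% chance probabilities, as noted above.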
To see if our results are affected by the inclusion of Seyfert galaxies in the sample, we also calculated the correlations excluding the AGNs. The correlation coefficients tend to be somewhat smaller with the smaller sample; however, the basic results do not change and the important correlations discussed below still hold.

Relations with Volume and with n$_{\rm e}$
------------------------------------------

In the top row of Figure 9, we plot the volume of hot gas as a function of merger stage, L$_{\rm X}$(gas)/SFR, and SFR. The bottom row of panels in this figure displays the ratio of the maximum radial extent of the X-ray emission to the maximum optical size as measured by the B band 25 mag arcsec$^{-2}$ isophote against merger stage, L$_{\rm X}$(gas)/SFR, and SFR. The first column of Figure 9 shows that stage 3 and stage 4 mergers tend to have large X-ray sizes and large X-ray/optical size ratios. This is a consequence of the bias towards higher SFRs in the mid-merger stages. The second column shows little correlation between the volume of hot gas and L$_{\rm X}$(gas)/SFR, or between the X-ray to optical size and L$_{\rm X}$(gas)/SFR. Systems in which the X-ray extent exceeds that in the optical tend to have high SFRs (Figure 9; bottom right panel), but with a lot of scatter. In Figure 9, the strongest correlation is seen in the top right panel: larger volumes of hot X-ray-emitting gas are found in higher SFR systems. For the full set of galaxies, the best-fit slope is less than one. However, once low SFR systems are excluded the slope is consistent with one. Thus low SFR systems tend to have larger volumes than expected based on their SFRs. We have marked the location of the late-stage merger NGC 1700 on the two upper right panels in Figure 9. It stands out in the sample for having a large volume of hot gas, compared to its SFR. This galaxy has a high L$_{\rm K}$ and a low SFR, with an elliptical-like appearance and tidal debris. 
It may be a system for which virialized hot gas in the gravitational potential contributes significantly to the observed diffuse X-ray-emitting gas. This system is discussed further in Section 7.4. In Figure 10, we plot the volume of hot gas (top row) and the X-ray/optical size ratio (bottom row) against the SFE (first and second columns) and the F$_{60}$/F$_{100}$ ratio (last column). The first column utilizes a constant CO/H$_2$ ratio, while the second uses the variable CO/H$_2$ ratio. This Figure shows that the systems with large volumes tend to have large SFEs (when a variable CO/H$_2$ ratio is used). However, this is a very weak correlation with a lot of scatter; some systems with large SFEs have only moderate volumes and size ratios. No significant correlation is seen between volume and F$_{60}$/F$_{100}$ (Figure 10, upper right), in spite of the fact that volume is correlated with SFR (Figure 9), and F$_{60}$/F$_{100}$ is correlated with SFR (Figure 2). However, the correlation between F$_{60}$/F$_{100}$ and SFR is weak, with considerable scatter. The sample galaxies only cover a small range in log F$_{60}$/F$_{100}$ (0.3 dex) while the SFR varies over two orders of magnitude. This means that uncertainties in the IRAS fluxes can make it difficult to detect a correlation. Instead of being directly dependent on SFR itself, theoretical models (e.g., [@desert90]) suggest that the F$_{60}$/F$_{100}$ ratio depends on the intensity of the ISRF. The average ISRF in a galaxy can vary greatly for a given global SFR of the galaxy, depending upon the spatial density of young stars. In contrast, the volume of hot gas in these galaxies depends directly upon the total number of young stars rather than on the spatial density of those stars. The upper left panel of Figure 11 shows that the volume of hot gas is well-correlated with L$_{\rm X}$(gas). The slope in this log-log plot is close to one. 
In the right panel of Figure 11, the volume is plotted against the derived n$_{\rm e}$$\sqrt{f}$. There is a weak trend of decreasing n$_{\rm e}$$\sqrt{f}$ with increasing volume. In the lower left panel of Figure 11, we show that the volume is also correlated with L$_{\rm K}$. However, the correlation is weaker than for volume vs. SFR, and the scatter is larger (Table 5). This supports the idea that in this sample of galaxies the volume of hot gas is largely determined by the number of young stars, with the correlation of volume with L$_{\rm K}$ being a by-product of the SFR $-$ L$_{\rm K}$ correlation. The volume of hot gas is weakly correlated with L$_{\rm FIR}$/L$_{\rm K}$ (bottom right panel of Figure 11) and with \[3.6\] $-$ \[24\] (Table 5). As noted earlier, both \[3.6\] $-$ \[24\] and L$_{\rm FIR}$/L$_{\rm K}$ are approximate measures of the sSFR. This correlation may be a consequence of the correlation between volume and SFR, since sSFR tends to increase with increasing SFR for this sample (see Figure 2 and Table 5). Notice that NGC 1700 is particularly discrepant in these plots compared to the other galaxies, with a low sSFR (i.e., low \[3.6\] $-$ \[24\] and low L$_{\rm FIR}$/L$_{\rm K}$) and a large volume of hot gas. In Figure 12, SFR is plotted against n$_{\rm e}$$\sqrt{f}$ (upper left panel), F$_{60}$/F$_{100}$ against n$_{\rm e}$$\sqrt{f}$ (upper right panel), SFE with constant CO/H$_2$ ratio vs.  n$_{\rm e}$$\sqrt{f}$ (middle left), SFE with a variable CO/H$_2$ ratio vs.  n$_{\rm e}$$\sqrt{f}$ (middle right), and \[3.6\] $-$ \[24\] vs.  n$_{\rm e}$$\sqrt{f}$. No trends are seen in these five panels. In the lower right panel of Figure 12, we plot the derived ratio M$_{\rm X}$(gas)/L$_{\rm X}$(gas) for the 15 galaxies with temperature measurements. The blue solid line on this plot is the relation assuming a constant temperature of 0.3 keV. 
From the equations given in Section 5, M$_{\rm X}$(gas)/L$_{\rm X}$(gas) = m$_{\rm p}$/($\Lambda$n$_{\rm e}$f), so the conversion from L$_{\rm X}$(gas) to M$_{\rm X}$(gas) is a function of n$_{\rm e}$, with M$_{\rm X}$(gas)/L$_{\rm X}$(gas) $\propto$ 1/n$_{\rm e}$ if the temperature and filling factor are constant. The 15 data points lie above the blue line in this plot because they have temperatures higher than 0.3 keV (see Section 6.5), and $\Lambda$ decreases as temperature increases. The question of the assumed electron temperature is discussed further in Section 6.5.

M$_{\rm X}$(gas)/SFR vs. Other Properties
-----------------------------------------

The mass of hot gas is strongly correlated with SFR (Figure 13, upper left). For SFR $>$ 1 M$_{\sun}$ yr$^{-1}$, the slope of log M$_{\rm X}$(gas) vs. log SFR is consistent with one. However, the relationship flattens when lower SFR systems are included, suggesting an excess of hot gas in low SFR systems. Even when NGC 1700 is excluded, this flattening is seen. We note that the other two galaxies with low SFR and high M$_{\rm X}$(gas)/SFR in this figure, NGC 2865 and NGC 5018, both have moderately high K band luminosities (Table 1). As with NGC 1700, virialized gas in the gravitational potential may be contributing to the observed hot gas in these galaxies (see Paper I for detailed discussions of these galaxies). Unfortunately, our sample only has a few low SFR, high L$_{\rm K}$ systems, so separating out this additional component to the hot gas is uncertain. M$_{\rm X}$(gas) is also correlated with L$_{\rm K}$ (Figure 13, upper right). However, this relation has a lower correlation coefficient than M$_{\rm X}$(gas) vs. SFR. This suggests that the M$_{\rm X}$(gas)-L$_{\rm K}$ relation is a consequence of the SFR-L$_{\rm K}$ correlation for our sample galaxies, and the hot gas in most of our galaxies is mainly due to star formation rather than to older stars. 
When both M$_{\rm X}$(gas) and SFR are normalized by a tracer of stellar mass, L$_{\rm K}$, they still show a strong correlation (Table 5). This indicates that the relation between SFR and M$_{\rm X}$(gas) is not simply a richness effect. In contrast, when both M$_{\rm X}$(gas) and L$_{\rm K}$ are normalized by SFR, the correlation is significantly weaker (Table 5). This again implies that M$_{\rm X}$(gas) is more closely tied to young stars than to old stars. The correlation between M$_{\rm X}$(gas) and SFR is displayed in another way in the bottom left panel of Figure 13, where we show M$_{\rm X}$(gas)/SFR vs. SFR. Although a weak anti-correlation is seen for the full sample, once systems with SFR $<$ 1 M$_{\sun}$ yr$^{-1}$ are removed no correlation is seen and the rms scatter is relatively small (0.37 dex). This is close to the expected scatter based on the uncertainty in M$_{\rm X}$(gas) alone, which supports the contention that processes associated with a young stellar population are the main factors responsible for the hot gas in these galaxies, at least when low SFR systems are excluded. M$_{\rm X}$(gas)/SFR is plotted against L$_{\rm K}$ in the lower right panel of Figure 13. A very weak trend is seen when low SFR systems are excluded. The post-merger NGC 1700 stands out as having a high M$_{\rm X}$(gas)/SFR. After NGC 1700, the next two highest M$_{\rm X}$(gas)/SFR galaxies in this plot, NGC 2865 and NGC 5018, are both stage 7 merger remnants with moderately high L$_{\rm K}$ and low sSFR. In contrast to these three galaxies, galaxies with low K band luminosities ($\le$10$^{10}$ L$_{\sun}$) have moderately low M$_{\rm X}$(gas)/SFR values, though not extreme. In Paper I, we found that low L$_{\rm K}$ systems have low L$_{\rm X}$(gas)/SFR. Now, we are able to show that M$_{\rm X}$(gas)/SFR is also somewhat low for these systems. This may indicate escape of hot gas from lower gravitational fields. 
However, only a few galaxies fall in this range, so the statistics are very uncertain. In the upper left and upper middle panels of Figure 14, weak anti-correlations are seen between M$_{\rm X}$(gas)/SFR and SFE, but these trends disappear for the variable CO/H$_2$ ratio when low SFR systems are not included. The fact that our dataset is incomplete in CO makes these conclusions somewhat uncertain. M$_{\rm X}$(gas)/SFR is anti-correlated with the two tracers of sSFR, \[3.6\] $-$ \[24\] and L$_{\rm FIR}$/L$_{\rm K}$ (Figure 14, upper right and lower left panel, respectively). However, when low SFR systems are excluded the trend with \[3.6\] $-$ \[24\] weakens and the trend with L$_{\rm FIR}$/L$_{\rm K}$ disappears. This again suggests that low sSFR systems sometimes have excess hot gas. M$_{\rm X}$(gas)/SFR is strongly anti-correlated with F$_{60}$/F$_{100}$ for the full sample (Figure 14, lower middle panel). This trend is weakened when only the high SFR sample is included, but is still detected. The cause of this anti-correlation is uncertain; some possible interpretations are discussed in Section 7.2. In addition, a weak anti-correlation is visible between M$_{\rm X}$(gas)/SFR and n$_{\rm e}$$\sqrt{f}$, particularly when low SFR systems are omitted (lower right panel of Figure 14). In contrast, M$_{\rm X}$(gas)/SFR is not correlated with either the volume or the H I-to-H$_2$ ratio (Table 5).

Ratio of Hot Gas Mass to Cold Gas Mass vs. Other Properties
-----------------------------------------------------------

In Figure 15, the ratio of the mass of hot X-ray-emitting gas to the mass of cold gas (H I + H$_2$) is plotted against SFR (top row) and SFE (bottom row). The left panel in each row was calculated using a constant CO/H$_2$ ratio, while the right panel was calculated with a variable CO/H$_2$ ratio.
Figure 15 shows that M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) increases with increasing SFR, with a better correlation when a variable CO/H$_2$ ratio is used. A higher Spearman coefficient and a steeper relation are found when the low SFR systems are omitted. The slope is consistent with one when a variable CO/H$_2$ ratio is used and low SFR systems are omitted. The flatter relation when low SFR systems are included again points to excess M$_{\rm X}$(gas) for low SFR systems. A large amount of scatter is evident in a plot of M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) vs. SFE (Figure 15, bottom row), but a reliable correlation is present when a variable CO/H$_2$ ratio is used. The lack of a full set of CO data makes these results uncertain. The scatter in M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) may be due in part to variations in the stellar mass. In Figure 16, M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) is plotted against L$_{\rm K}$ (top panel) and F$_{60}$/F$_{100}$ (bottom panel). In the left column, the quantities were calculated using a constant CO/H$_2$ ratio, while the right panel was calculated with a variable CO/H$_2$. A weak correlation is visible in the upper right panel when low SFR galaxies are excluded and a variable CO/H$_2$ ratio is used. The two lowest L$_{\rm K}$ systems have moderately low M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$), and the highest M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) system, NGC 6240, has a very high L$_{\rm K}$. In the lower panels of Figure 16, weak correlations between M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$), and F$_{60}$/F$_{100}$ are seen, but only if low SFR systems are excluded. Correlations are visible between M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) and our two tracers of sSFR (Figure 17), especially when low SFR systems are excluded and a variable CO/H$_2$ ratio is used. 
The steepening of the slope when low SFR systems are excluded again signals possible excess of hot gas in low SFR systems.

Merger Stage vs. Gas Properties
-------------------------------

We plot the inferred mass of X-ray-emitting gas M$_{\rm X}$(gas) against merger stage in the top left panel of Figure 18. The mid-merger stages have higher quantities of hot gas, on average, than the early or late stages. However, this is largely due to the fact that the mid-merger galaxies tend to have higher SFRs. When the mass of hot gas is normalized by the SFR (Figure 18, top right), no strong trend is seen. The stage 7 galaxy NGC 1700 stands out as having a high M$_{\rm X}$(gas)/SFR. The next two highest M$_{\rm X}$(gas)/SFR systems, the stage 7 galaxies NGC 2865 and NGC 5018, also have low sSFR. The bottom row of Figure 18 compares the merger stage with the ratio of hot gas to cold gas M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$), using the standard Galactic CO/H$_2$ ratio (left panel) or the variable CO/H$_2$ ratio (right panel). Stages 3 and 4 tend to have proportionally more hot gas. This is likely a consequence of the fact that galaxies in those stages tend to have higher SFRs. Because of the inhomogeneity of the sample, the small number of systems in each merger stage, and the lack of a full set of CO data, trends with merger stage in our sample are uncertain. In these plots, the galaxy with the highest M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) is NGC 6240. NGC 1700 is not plotted in the bottom row of Figure 18 because it lacks a full set of CO data.

Gas Temperature
---------------

As mentioned earlier, our derivations of electron density and M$_{\rm X}$(gas) depend upon the assumed temperature, and temperatures are available for only 15 of our sample galaxies from the X-ray spectra (see Paper I). For the remaining galaxies, we assumed a constant temperature of 0.3 keV.
For comparison, for the 15 systems for which temperatures are available in Paper I, kT ranges from 0.37 keV to 1.0 keV. In some cases, we were able to use a two-temperature model for the hot gas; in those cases, we used the luminosity-weighted temperature in the subsequent analysis. For comparison, @mineo12 found lower temperatures on average for their sample galaxies (mean of 0.24 keV for single-temperature models). The derived temperatures depend upon the assumptions used in modeling the X-ray spectrum, including how the power law component is modeled, so they are somewhat uncertain (see Paper I). Because our sample is an archive-selected sample, there is a selection bias in the subset of galaxies with derived temperatures. Compared to the galaxies in the sample without measured temperatures, the galaxies with temperatures tend to have longer, more sensitive exposures, and they tend to be more extreme systems with higher luminosities. In contrast, the @mineo12 sample, with lower temperatures on average, contains more normal spiral galaxies and irregulars as well as some mergers. We therefore assume the more modest temperature of 0.3 keV for our galaxies without temperature measurements, assuming that they are less extreme than the other systems. However, this is quite uncertain. To test whether our conclusions are affected by our assumption of 0.3 keV for the galaxies without derived temperatures, we re-ran our correlation analysis with four alternative assumptions. First, we re-ran the analysis using a constant kT = 0.3 keV for all the galaxies, even those for which we have a direct measure of the temperature. Second, we did the calculations assuming a constant kT = 0.6 keV for all galaxies. Third, we re-ran the analysis assuming that the temperature is correlated with SFR. In Paper I we did not find any correlation of temperature with SFR. @mineo12 also did not find a correlation between temperature and SFR for their sample of star-forming galaxies. 
However, @grimes05 noted that the ULIRGs in their sample tend to have higher temperatures, up to about 0.8 keV. Therefore, as a limiting case to investigate how temperature may potentially affect our results, we assume that log T$_{\rm X}$ increases linearly with log SFR, and we set kT = 0.2 keV for the systems with the lowest SFRs (0.1 M$_{\sun}$ yr$^{-1}$), increasing to 1.0 keV for systems with SFR = 100 M$_{\sun}$ yr$^{-1}$. As a fourth test, we investigated how our results changed if we assumed that the temperature depends upon L$_{\rm X}$(gas) rather than on SFR. In contrast to actively star-forming galaxies, ellipticals show a steep relation between L$_{\rm X}$(gas) and temperature of L$_{\rm X}$(gas) $\propto$ T$_{\rm X}$$^{4.5}$ [@goulding16]. As a limiting case, we assumed that L$_{\rm X}$(gas) $\propto$ T$_{\rm X}$$^{4.5}$ as found for ellipticals [@goulding16]. Assuming a temperature of 0.2 keV for the galaxies with the lowest L$_{\rm X}$(gas) in our sample, this gives 1.0 keV for the highest L$_{\rm X}$(gas) system. This is a more extreme range than typically found for star-forming galaxies, thus it is a limiting case. For each of the above cases, we also explored how our results change when we include a correction from the observed 0.3 $-$ 8.0 keV L$_{\rm X}$(gas) to the bolometric luminosity of the gas including light outside of the 0.3 $-$ 8.0 keV Chandra window. This conversion is a function of temperature. Using the PIMMS[^6] software, we find that L$_{\rm bol}$(gas)/L$_{\rm X}$(gas)(0.3 $-$ 8.0 keV) drops from 2.39 at 0.3 keV to 1.39 at 1.0 keV. In re-running the correlation analysis, we find that the basic conclusions of this paper do not change dramatically with these different assumptions about the temperature. The Spearman coefficients and the best-fit relations change slightly with different assumptions about the temperature, but the basic conclusions remain the same. 
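The two limiting-case temperature prescriptions described above can be written out explicitly. The sketch below assumes only the anchor values quoted in the text (0.2 keV at SFR = 0.1 M$_{\sun}$ yr$^{-1}$ rising to 1.0 keV at SFR = 100 M$_{\sun}$ yr$^{-1}$, and the elliptical-like scaling L$_{\rm X}$(gas) $\propto$ T$_{\rm X}$$^{4.5}$ anchored at 0.2 keV for the lowest-luminosity system); the function names are ours, not from the paper.

```python
import math

def kT_from_sfr(sfr):
    """Limiting case: log kT linear in log SFR.
    Anchors: kT = 0.2 keV at SFR = 0.1 Msun/yr, 1.0 keV at SFR = 100 Msun/yr,
    i.e. a factor of 5 in kT over 3 dex in SFR."""
    return 0.2 * 5.0 ** ((math.log10(sfr) - math.log10(0.1)) / 3.0)

def kT_from_lx(l_x, l_x_min):
    """Limiting case: L_X ~ T^4.5, so T ~ L_X^(1/4.5),
    anchored at kT = 0.2 keV for the lowest-L_X system in the sample."""
    return 0.2 * (l_x / l_x_min) ** (1.0 / 4.5)
```

With these anchors, kT_from_sfr(1.0) gives roughly 0.34 keV, and kT_from_lx reaches 1.0 keV when L$_{\rm X}$(gas) exceeds the sample minimum by a factor of 5$^{4.5}$ $\approx$ 1400, i.e. about 3 dex.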
For a few of the relations that have correlation coefficients near our ‘weak’/‘none’ boundary or our ‘strong’/‘weak’ boundary, small changes in the correlation coefficient may reclassify the relation. The most notable case is the very weak correlation between M$_{\rm X}$(gas)/SFR and L$_{\rm K}$, which drops below the cutoff for a ‘weak’ correlation for some of these alternative cases, but increases slightly in significance for the linear log(T$_{\rm X}$)-log(SFR) case including the correction for light outside of the Chandra bandpass. This emphasizes that the M$_{\rm X}$(gas)/SFR $-$ L$_{\rm K}$ correlation is very marginal, and more data are needed to confirm or refute it. For most of the relations discussed above, however, although the correlation coefficients change slightly with different assumptions about the temperatures, the classification of the relation does not change. Thus the conclusions of this paper are not strongly influenced by our lack of temperature measurements.

Discussion
==========

We calculated the volume, mass, and electron density of the hot X-ray-emitting gas in our sample galaxies, and compared with other properties of the galaxies, including the SFR, L$_{\rm K}$, the mass of cold gas, and the SFE. We have searched for correlations between a large number of variables, and discovered several new correlations and anti-correlations in our data. These, and many apparent non-correlations, are listed in Table 5.

Volume and M$_{\rm X}$(gas) vs. SFR and L$_{\rm K}$
---------------------------------------------------

Some of the most important correlations are:

- \(1) The volume of hot gas increases as the SFR goes up, with a high correlation coefficient (Figure 9). When galaxies with SFR $<$ 1 M$_{\sun}$ yr$^{-1}$ are excluded, the slope of the best fit log volume $-$ log SFR line is 0.97 $\pm$ 0.15 (Figure 9). Including low SFR systems flattens this relation.
- \(2) The volume of hot gas is also correlated with L$_{\rm K}$, but with a smaller correlation coefficient (Figure 11).

- \(3) The volume of hot gas also correlates with SFE, L$_{\rm FIR}$/L$_{\rm K}$, and \[3.6\] $-$ \[24\], but only weakly (Figures 10 and 11, and Table 5).

- \(4) There is a strong correlation between M$_{\rm X}$(gas) and SFR (Figure 13). The slope of the log-log plot is 0.88 $\pm$ 0.10 when low SFR galaxies are excluded, consistent with a simple M$_{\rm X}$(gas) $\propto$ SFR relation. This relation flattens when low SFR systems are included.

- \(5) M$_{\rm X}$(gas) is also correlated with L$_{\rm K}$ (Figure 13), but with a lower correlation coefficient than M$_{\rm X}$(gas) and SFR.

- \(6) As the SFR increases, M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) goes up (Figure 15), especially when a variable CO/H$_2$ ratio is used and when low SFR systems are excluded. For the latter case, the correlation is strong and the slope of the log-log plot is consistent with one.

- \(7) M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) is weakly correlated with L$_{\rm K}$ when a variable CO/H$_2$ ratio is used (Figure 16).

- \(8) There is a trend of increasing M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) ratio with increasing SFE, especially when a variable CO/H$_2$ ratio is used (Figure 15). This trend is weaker than the relation with SFR.

- \(9) M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) is weakly correlated with F$_{60}$/F$_{100}$ (Figure 16).

For high SFR systems, the linear relations between volume and SFR, and between M$_{\rm X}$(gas) and SFR, can be explained in a straightforward manner: a larger SFR means more supernovae and more stellar winds, which produce a larger volume of hot gas and a larger M$_{\rm X}$(gas). For galaxies with SFR $>$ 1 M$_{\sun}$ yr$^{-1}$, hot gas associated with star formation dominates M$_{\rm X}$(gas), and any contribution from processes associated with the older stellar population is negligible.
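The quantities reported throughout this discussion, Spearman rank coefficients and slopes of log-log least-squares fits, can be computed with a few lines of stdlib Python. The sketch below is a generic implementation on synthetic data, not the paper's actual analysis code, and it does not reproduce the boundary values used to classify correlations as ‘strong’ or ‘weak’.

```python
import math

def _rank(values):
    # Average ranks (ties share the mean rank), 1-based.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _rank(x), _rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

def loglog_slope(x, y):
    """Least-squares slope of log10(y) vs. log10(x)."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

For an exact power law y $\propto$ x$^{0.9}$, loglog_slope recovers 0.9 and spearman returns 1; a slope consistent with one, as found for M$_{\rm X}$(gas) vs. SFR at high SFR, corresponds to a simple proportionality.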
However, for galaxies with lower SFRs and high K band luminosities (and therefore low sSFRs) we find evidence for excess hot gas relative to the linear M$_{\rm X}$(gas)$-$SFR relation. This may be due to contributions to the X-ray-emitting hot gas from other sources, perhaps mass loss from older stars that has been virialized in the gravitational potential. The weaker correlation between volume and SFE compared to volume vs. SFR is accounted for by the fact that some high SFE systems have only moderate SFRs, and it is the SFR that controls the number of supernovae and the amount of stellar wind, not the SFE. The weakness of the correlation between volume and the sSFR as measured by \[3.6\] $-$ \[24\] and L$_{\rm FIR}$/L$_{\rm K}$ may be explained in a similar manner. The correlations between L$_{\rm K}$ and the hot gas mass, and between L$_{\rm K}$ and the volume of hot gas, may be indirect results of the correlation between SFR and L$_{\rm K}$. The SFR-L$_{\rm K}$ correlation, in turn, is a consequence of the fact that most of the galaxies in our sample are star-forming galaxies on the galaxy main sequence. Because the volume$-$L$_{\rm K}$ and M$_{\rm X}$(gas)$-$L$_{\rm K}$ correlations are weaker than the volume$-$SFR and M$_{\rm X}$(gas)-SFR correlations, we conclude that star formation is more directly responsible for the hot gas, not the older stellar population. The strong correlation between the hot-to-cold gas mass ratio and the SFR, in contrast to the weak correlation between the hot-to-cold gas mass ratio and L$_{\rm K}$, confirms that the younger stellar population is primarily responsible for the hot gas, not older stars. The amount of hot gas in our galaxies is small compared to the amount of colder gas (see Table 2), so conversion of colder material into hot gas affects the numerator in M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) but not noticeably the denominator. 
The higher the SFR, the more hot gas that is produced, thus the M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) ratio is directly correlated with the SFR. The linear log M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) vs. log SFR relation for high SFR systems provides additional support for the idea that the hot gas is mainly due to young stars in these galaxies. The flattening of this relation at lower SFR again indicates excess hot gas in low SFR, low sSFR systems. The strong correlation between M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) and SFR is consistent with the recent @moreno19 simulations of star formation and feedback in galaxy mergers, in which they investigate the relative amounts of hot, warm, cool, and cold-dense gas. In their models, the interaction causes an increase in the amount of cold ultra-dense interstellar gas by a factor of about three on average. This enhances the SFR. The amount of hot gas increases during the starburst (by about 400%), while the total amount of cold and warm gas mass decreases only slightly or remains constant. The net effect would be an increase in M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) during the burst, consistent with our correlation with SFR. In the @moreno19 models, the hot gas is produced solely by stellar/supernovae feedback; they do not include AGN feedback or a pre-existing hot halo. The larger scatter in the M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) vs. SFE correlation and its weaker correlation compared to M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) vs. SFR is likely due to some low SFR systems having high SFEs; it is the SFR that directly controls the amount of hot gas rather than the SFE. The weak trend of increasing M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) with increasing F$_{60}$/F$_{100}$ ratio may be another indirect consequence of the M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) vs. SFR correlation. 
Since F$_{60}$/F$_{100}$ increases with SFR on average for our sample galaxies (Figure 2), galaxies with higher M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) and SFR tend to have larger F$_{60}$/F$_{100}$. Another factor that may affect M$_{\rm X}$(gas) is escape of hot gas from the gravitational field of the galaxy, particularly in low mass systems. Our data shows a hint of lower M$_{\rm X}$(gas)/SFR for low L$_{\rm K}$ systems (Figure 13). However, this is uncertain since our sample only includes a few low mass systems. The majority of galaxies in our sample lie in only a small range of L$_{\rm K}$ (10$^{10}$ L$_{\sun}$ $-$ 10$^{11.5}$ L$_{\sun}$), thus it is difficult to find trends with L$_{\rm K}$ in our sample. Low mass galaxies may have lower ratios of baryonic mass M$_{\rm baryon}$ to dynamical mass M$_{\rm dyn}$ compared to high mass systems (e.g., [@cote2000; @torres2011]). The lower M$_{\rm baryon}$/M$_{\rm dyn}$ in low mass systems has been attributed to either mass loss from galactic winds (e.g., [@vandenbosch2000; @brook2012]) or less efficient infall into lower mass dark halos (e.g., [@sales2017]). A deficiency in hot gas in low mass systems, if confirmed, may point to increased escape of baryons via winds. A larger Chandra imaging survey including more low mass systems would be helpful to better characterize M$_{\rm X}$(gas)/SFR and the scatter in this ratio for low mass galaxies. To search for additional evidence that the hot gas content in our sample galaxies is affected by the mass and/or the older stellar population in addition to the SFR, we calculated the residuals from the best-fit linear relations for log SFR vs. log L$_{\rm K}$, log M$_{\rm X}$(gas) vs. log SFR, and log M$_{\rm X}$(gas) vs. log L$_{\rm K}$. We then searched for correlations between these residuals (top three panels in Figure 19). Strong correlations between these residuals might suggest the existence of a ‘fundamental plane’ of M$_{\rm X}$(gas) vs. log SFR vs. 
log L$_{\rm K}$. A weak correlation is seen between the residuals of log M$_{\rm X}$(gas) vs. SFR and those of M$_{\rm X}$(gas) vs. L$_{\rm K}$ (Figure 19, upper right). A strong correlation is seen between the residuals of log M$_{\rm X}$(gas) vs. log L$_{\rm K}$ and those of log SFR vs. log L$_{\rm K}$ (Figure 19, left panel, middle row). The most discrepant galaxies in this plot are NGC 5018, NGC 2865, and Arp 222 (the three stage 7 mergers in the lower left corner). All of these were identified in Paper I as post-starbursts, and all have low sSFR (i.e., have large negative residuals in the log SFR vs. log L$_{\rm K}$ relation). They also have large negative residuals compared to the best-fit log M$_{\rm X}$(gas) vs. log L$_{\rm K}$ relation. What is discrepant about these galaxies is their K band luminosities, which are high relative to their SFRs. Figure 19 shows that NGC 1700 has a high relative M$_{\rm X}$(gas) compared to these other low sSFR galaxies. In the bottom row of Figure 19, we also compared these residuals with L$_{\rm K}$ and the SFR. There is a positive correlation between the residuals of the M$_{\rm X}$(gas) vs. L$_{\rm K}$ relation, and the SFR (Figure 19, bottom left). Galaxies with low SFRs tend to be deficient in M$_{\rm X}$(gas) compared to the M$_{\rm X}$(gas) vs. L$_{\rm K}$ relation. That is because they also tend to have low sSFRs, and it is the SFR that determines the mass of hot gas, not L$_{\rm K}$. The galaxies in the lower corner of that plot have low SFRs compared to their K band luminosities, and therefore they have low M$_{\rm X}$(gas) compared to their L$_{\rm K}$. We also see a positive correlation between the residuals of the M$_{\rm X}$(gas) vs. SFR relation, and L$_{\rm K}$ (Figure 19, bottom right panel). NGC 1700 stands out as having excess hot gas, while systems with low K band luminosities tend to have less hot gas relative to the M$_{\rm X}$(gas) vs. SFR relation. 
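The residual analysis just described can be sketched as follows, assuming simple least-squares fits in log space (the paper's exact fitting method is not restated here). The input arrays are synthetic placeholders, not the sample's measured values.

```python
import math

def fit_line(x, y):
    """Least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def residuals(x, y):
    """Residuals of y about the best-fit line on x."""
    a, b = fit_line(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Residuals of log M_X(gas) about the M_X-L_K fit, compared with
# residuals of log SFR about the SFR-L_K fit (synthetic numbers):
log_lk = [10.0, 10.4, 10.8, 11.2, 11.5]
log_sfr = [0.2, 0.9, 1.1, 1.8, 2.1]
log_mx = [7.9, 8.6, 8.7, 9.5, 9.8]

r_mx_lk = residuals(log_lk, log_mx)
r_sfr_lk = residuals(log_lk, log_sfr)
```

Correlating r_mx_lk against r_sfr_lk is the test shown in the middle row of Figure 19: a galaxy with a large negative residual in both (low sSFR and low relative hot-gas mass) falls in the lower left corner, like the three stage 7 mergers noted above.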
This suggests that some gas may have been lost from these systems. Unfortunately, our sample only contains a few galaxies with low K band luminosities, and only a few galaxies with low sSFR, so this result is uncertain. We conclude that only a few galaxies in our sample deviate from a straight line in the log M$_{\rm X}$(gas) $-$ log SFR $-$ log L$_{\rm K}$ plane. To better understand these deviations, it would be helpful to increase the number of post-starburst galaxies in our sample, as well as the number of low mass galaxies.

Trends with M$_{\rm X}$(gas)/SFR and n$_{\rm e}$$\sqrt{f}$
-----------------------------------------------------------

In addition to the strong positive correlations discussed above, some weak anti-correlations are also seen in the data:

- \(1) There are weak anti-correlations between M$_{\rm X}$(gas)/SFR and F$_{60}$/F$_{100}$, and between M$_{\rm X}$(gas)/SFR and \[3.6\] $-$ \[24\] (Figure 14). These anti-correlations hold even when low SFR systems are excluded.

- \(2) As M$_{\rm X}$(gas)/SFR goes up, n$_{\rm e}$$\sqrt{f}$ goes down, even when low SFR systems are excluded (Figure 14). This is also a weak trend.

- \(3) There is a weak trend of decreasing volume with increasing n$_{\rm e}$$\sqrt{f}$ (Figure 11).

Some parameters are neither correlated nor anti-correlated:

- \(1) M$_{\rm X}$(gas)/SFR is not correlated with SFR, if low SFR systems are excluded (Figure 13). M$_{\rm X}$(gas)/SFR is also not correlated with SFE for a variable CO/H$_2$ ratio (Figure 14).

- \(2) The SFR, \[3.6\] $-$ \[24\], and F$_{60}$/F$_{100}$ do not correlate with n$_{\rm e}$$\sqrt{f}$ (Figure 12).

- \(3) No significant correlations are found between M$_{\rm X}$(gas)/SFR and volume (Table 5), and no correlation between M$_{\rm X}$(gas)/SFR and L$_{\rm FIR}$/L$_{\rm K}$ when low SFR systems are excluded (Figure 14).
Although M$_{\rm X}$(gas)/SFR is anti-correlated with F$_{60}$/F$_{100}$ and with \[3.6\] $-$ \[24\], M$_{\rm X}$(gas)/SFR is not correlated (either positively or negatively) with SFR, in spite of the fact that F$_{60}$/F$_{100}$ and \[3.6\] $-$ \[24\] are both (weakly) correlated with SFR. Furthermore, although volume and n$_{\rm e}$$\sqrt{f}$ are weakly anti-correlated, and SFR is correlated with volume, n$_{\rm e}$$\sqrt{f}$ is not correlated with SFR. These results suggest that another factor contributes to the observed variations in M$_{\rm X}$(gas)/SFR and n$_{\rm e}$$\sqrt{f}$ besides SFR. One possibility is differences in timescale; variations in the age of an on-going starburst or the time since the end of a starburst may affect n$_{\rm e}$$\sqrt{f}$ and M$_{\rm X}$(gas)/SFR, as well as other parameters of the system. Numerical simulations show that interaction-triggered starbursts can last for extended periods ($\ge$100 Myrs; [@lotz00; @dimatteo08; @bournaud11; @fensch17]). This timescale is similar to the radiative cooling times for the gas (median of 60 Myrs, see Section 5); it is also similar to the timescale over which the UV data is measuring the SFR ($\sim$100 Myrs; [@kennicutt12]). If the cooling time is less than the timescale over which the SFR is measured, and if the cooling time is less than the age of the burst, then late in a burst the M$_{\rm X}$(gas)/SFR may decrease (i.e., some hot gas has cooled, but the UV-bright stars contributing to our SFR estimate have not yet died). The sSFR as measured by \[3.6\] $-$ \[24\] and L$_{\rm FIR}$/L$_{\rm K}$ may also vary with time during a burst. Presumably the electron density and/or filling factor also evolve with time during a burst, along with F$_{60}$/F$_{100}$, the volume of hot gas, and M$_{\rm X}$(gas)/SFR. Further theoretical modeling is needed to better understand the relationships between these parameters in evolving starbursts.
A second factor that may contribute to variations in M$_{\rm X}$(gas)/SFR and n$_{\rm e}$$\sqrt{f}$ may be the efficiency of early feedback. According to numerical simulations, stellar winds and radiation pressure early in a starburst disrupt molecular clouds, making it easier for subsequent supernovae to produce hot gas [@hopkins12a; @agertz13; @hopkins13b]. The efficiency of early feedback might be related to the spatial density of star formation; more concentrated distributions of young stars may have more early UV radiation per volume, allowing quicker destruction of molecular gas. This may lead to easier escape for hot gas from the region, and thus less diffuse X-ray emission. More concentrated distributions of young stars would presumably lead to more intense UV interstellar radiation fields and therefore hotter dust and higher F$_{60}$/F$_{100}$ ratios (e.g., [@desert90]). The F$_{60}$/F$_{100}$ ratio is weakly anti-correlated with M$_{\rm X}$(gas)/SFR, consistent with this scenario. The \[3.6\] $-$ \[24\] color may also increase with higher spatial concentrations of young stars, and \[3.6\] $-$ \[24\] is also weakly anti-correlated with M$_{\rm X}$(gas)/SFR. Further study is needed to investigate how all of these parameters vary with the density of OB stars in a galaxy. A third factor that might affect M$_{\rm X}$(gas)/SFR is the initial mass function (IMF). A top-heavy IMF may lead to an increase in supernovae compared to lower mass stars, which might produce a larger M$_{\rm X}$(gas)/SFR when the SFR is derived from the UV continuum. It has been suggested that high SFR and/or high SFE galaxies may have IMFs skewed to high mass stars [@rieke80; @elbaz95; @brassington07; @koppen07; @weidner13; @brown19]. Thus one might expect higher M$_{\rm X}$(gas)/SFR for higher SFR or higher SFE systems. However, we do not see a correlation between M$_{\rm X}$(gas)/SFR and SFR, or between M$_{\rm X}$(gas)/SFR and SFE. 
This means that either IMF variations are not responsible for the spread in M$_{\rm X}$(gas)/SFR, or the IMF is not correlated with SFR or SFE. Another factor that might affect M$_{\rm X}$(gas)/SFR and n$_{\rm e}$$\sqrt{f}$ is metallicity. A number of studies have concluded that the SFR of star-forming galaxies depends upon metallicity in addition to stellar mass; for the same stellar mass, lower metallicity systems have higher SFRs ([@ellison08; @mannucci10; @lara10; @hirschauer18], but see [@izotov14; @izotov15]). This result has been explained by infall of low metallicity gas, fueling star formation. Our M$_{\rm X}$(gas)/SFR values may be artificially skewed by metallicity, since the value of L$_{\rm X}$(gas) that is derived from the Chandra spectra is affected by metallicity (see Paper I). In addition, the fraction of the supernovae and stellar wind energy converted into X-ray flux may be a function of metallicity. A larger sample of galaxies including more low metallicity systems would be helpful to investigate this issue further. Unfortunately, we do not have a measure of the volume filling factor of the hot gas, f, independently of n$_{\rm e}$, to determine whether f varies significantly from system to system. Based on theoretical arguments and/or hydrodynamical simulations, for a range of systems f has variously been estimated to be 70$-$80% [@mckee77], 20$-$40% [@breitschwerdt12], 30$-$40% [@kim17], or anywhere between 10$-$90% depending upon the supernovae rate and the average gas density [@li15]. In general, according to simulations the higher the density of star formation, the larger the expected hot gas filling factor [@breitschwerdt12; @li15]. One might expect higher SFRs to produce faster winds, as has been found for the warm ionized medium (e.g., [@heckman15]). A faster wind may lead to lower n$_{\rm e}$ values. 
If the filling factor increases with SFR but n$_{\rm e}$ decreases, this might explain the lack of a trend between n$_{\rm e}$$\sqrt{f}$ and SFR. Independent determinations of n$_{\rm e}$ and f (e.g., [@kregenow06; @jo19]) are needed to test this possibility.

The Scatter in M$_{\rm X}$(gas)
-------------------------------

One of the major conclusions of the current paper is that, excluding low SFR systems, M$_{\rm X}$(gas)/SFR is constant with SFR with an rms spread of only 0.34 dex. A number of factors may contribute to this scatter, in addition to age, metallicity, or IMF differences. First, as discussed in Section 5, the decomposition of the X-ray spectrum into a thermal and a non-thermal component introduces some uncertainty in our determination of L$_{\rm X}$(gas) (see Paper I). Second, as also discussed in Section 5, the unknown extent of the hot gas along our line of sight leads to uncertainties in the volume of the hot gas, which contributes to the scatter in the parameters derived from the volume. Systematic variations in the geometry of the hot gas may further affect the observed relations. For example, systems with lower SFR may have disk-like distributions of cold gas, with coronal gas extending out of the galactic plane, while higher SFR systems, which are more likely to be in the midst of a merger, may have gas distributions that are more spherical. Another factor that may contribute to the scatter is system-to-system variation in the gravitational masses of the galaxies, which likely affects outflow rates and potential loss of hot gas. We found that galaxies with low K band luminosities tend to have lower M$_{\rm X}$(gas)/SFR ratios compared to other galaxies (Figure 16), suggesting that low mass galaxies may lose some hot gas.
The large-scale environment may also affect the M$_{\rm X}$(gas)/SFR ratio; however, L$_{\rm X}$(gas)/SFR is not correlated with local galaxy density (Paper I), and M$_{\rm X}$(gas)/SFR and M$_{\rm X}$(gas)/(M$_{\rm H_2}$+M$_{\rm HI}$) are also not correlated with local galaxy density. Another factor that may contribute to the observed scatter in these plots is our assumption of a temperature of kT = 0.3 keV for the hot gas in the systems without an X-ray determination of temperature. Longer Chandra exposures would be useful to spectroscopically determine the temperature of the gas in more of the galaxies.

NGC 1700
---------

As noted several times in this paper, the late-stage merger NGC 1700 does not fit some of the strong relations seen in this study. NGC 1700 has a large X-ray size relative to its SFR. It also has a high X-ray luminosity and a large mass of hot X-ray-emitting gas. This suggests that either NGC 1700 is in a special evolutionary state compared to the other systems in our sample, or it acquired its hot gas via a different process. Perhaps NGC 1700 was a pre-existing elliptical that already had a large amount of hot gas, and then swallowed a gas-rich galaxy. It is sometimes difficult to distinguish between the remnant of a spiral-spiral merger and the remnant of an elliptical-spiral merger. In appearance, NGC 1700 is an elliptical-like galaxy surrounded by tidal debris, but its merger history is uncertain. It was classified as the remnant of a spiral-spiral major merger by @schweizer92 and @brown00; however, @statler96 and @kleineberg11 conclude that it is the result of the merger of at least three galaxies, two large spirals and a third smaller galaxy. If NGC 1700 is the product of a single major merger, perhaps it is in a later stage in the conversion from a major merger to an elliptical than the other post-merger galaxies in our sample.
Theory suggests that ellipticals produced by major mergers can build up a large quantity of hot gas through the virialization of gas lost from red giants in the gravitational potential well, with possible additional heating by Type Ia supernovae and/or AGN feedback [@ciotti91; @ciotti17; @pellegrini98; @mathews03]. This process is expected to be very slow, with timescales of many gigayears. Expanding our sample to include more galaxies like NGC 1700 would be helpful to better understand how hot gas grows in such systems. More generally, increasing the number of low sSFR galaxies in our sample is needed to investigate how the hot gas in galaxies evolves as star formation fades in a quenched or quenching galaxy.

Summary
=======

We have measured the spatial extent of the hot interstellar gas in a sample of 49 interacting and merging galaxies in the nearby Universe. For systems with SFR $>$ 1 M$_{\sun}$ yr$^{-1}$, we found strong near-linear correlations between the volume of hot gas and the SFR, and between M$_{\rm X}$(gas) and SFR. This supports the idea that supernovae and stellar winds are responsible for the hot gas. As expected, the M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) ratio also increases linearly with increasing SFR for high SFR systems. These results are consistent with recent hydrodynamical simulations of interactions including feedback. The M$_{\rm X}$(gas)/(M$_{\rm H_2}$ + M$_{\rm HI}$) ratio also increases with dust temperature on average, perhaps due to a larger proportion of dust associated with the hot gas. In low SFR, low sSFR systems, we find evidence for an excess of hot gas relative to the relations for higher SFR systems. This excess may be associated with mass loss from older stars. However, our sample only includes a few galaxies with low sSFR, so this result is uncertain. In addition, we see a possible deficit of hot gas in low mass systems, perhaps due to escape from the gravitational field of the galaxy.
However, this result is also uncertain due to the small number of low mass systems in our sample. The M$_{\rm X}$(gas)/SFR ratio is weakly anti-correlated with F$_{60}$/F$_{100}$, \[3.6\] $-$ \[24\], and n$_{\rm e}$$\sqrt{f}$. The inferred electron density decreases with increasing volume of hot gas, assuming a constant filling factor. These results may be a consequence of variations in the spatial density of young stars, the age of the stars, metallicity, the IMF, and/or the efficiency of feedback in these galaxies. This research was supported by NASA Chandra archive grant AR6-17009X, issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. Support was also provided by National Science Foundation Extragalactic Astronomy Grant ASTR-1714491. The scientific results reported in this article are based on data obtained from the Chandra Data Archive. This research has also made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. This work also utilizes archival data from the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory (JPL), California Institute of Technology under a contract with NASA. This study also uses archival data from the NASA Galaxy Evolution Explorer (GALEX), which was operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. For the 44 systems for which we can measure X-ray radial profiles, the unsmoothed Chandra 0.3 $-$ 1.0 keV maps are displayed in the right panels of Figures 20 $-$ 27. When only one Chandra dataset is available for a galaxy, the [*ciao*]{} command [*fluximage*]{} was used to convert into units of photons s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$, using an exposure-correction map with a 0.8 keV effective energy.
When multiple Chandra datasets are available for one system, the datasets have been merged together using the [*ciao*]{} command [*merge$\_$obs*]{}, which also does the exposure correction and flux calibration. The left panels of Figures 20 $-$ 27 show either the SDSS g band image (when available) or the GALEX NUV image. Contours of the X-ray surface brightness are overlaid on the Chandra images. These have been lightly smoothed using the ds9 software[^7], with the smooth parameter set to 6. Agertz, O. & Kravtsov, A. V. 2015, ApJ, 804, 18 Agertz, O. & Kravtsov, A. V. 2016, ApJ, 824, 79 Agertz, O., Kravtsov, A. V., Leitner, S. N., & Gnedin, N. Y. 2013, ApJ, 770, 25 Andreani, P., Boselli, A., Ciesla, L., Vio, R., Cortese, L., Buat, V., & Miyamoto, Y. 2018, A&A, 617, 33 Arp, H. C. 1966, Atlas of Peculiar Galaxies (Pasadena, CA: Caltech) Balick, B. & Heckman, T. 1981, A&A, 96, 271 Bell, E. F. & de Jong, R. S. 2000, ApJ, 550, 212 Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARAA, 51, 207 Boroson, B., Kim, D.-W., & Fabbiano, G. 2011, ApJ, 729, 12 Bournaud, F., Chapon, D., Teyssier, R. et al. 2011, ApJ, 730, 4. Brassington, N. J., Ponman, T. J., & Read, A. M. 2007, , 337, 1439 Breitschwerdt, D., de Avillez, M. A., Feige, J., & Dettbarn, C. 2012, AN, 333, 486 Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151 Brook, C. B., Stinson, G., Gibson, B. K., Wadsley, J., & Quinn, T. 2012, MNRAS, 424, 1275 Brown, R. J. N., Forbes, D. A., Kissler-Patig, M., & Brodie, J. P. 2000, MNRAS, 317, 406 Brown, T. & Wilson, C, 2019, ApJ, in press (arXiv: 1905.06950). Bushouse, H. A. 1987, ApJ, 320, 49 Bushouse, H. A., Lord, S. D., Lamb, S. A., Werner, M. W., & Lo, K. Y. 1999, astro-ph/9911186 Bustard, C., Zweibel, E. G., & D’Onghia, E. 2016, ApJ, 819, 29 Casoli, F., Dupraz, C., Combes, F., & Kazes, I. 1991 A&A, 251, 1 Chevalier, R. A. & Clegg, A. W. 1985, Nature, 317, 44 Ciotti, L., D’Ecole, A., Pellegrini, S., & Renzini, A. 
1991, ApJ, 376, 380 Ciotti, L., Pellegrini, S., Negri, A., & Ostriker, J. P. 2017, ApJ, 835, 15 Côté, S., Carignan, C., & Freeman, K. C. 2000, AJ, 120, 3027 Cox, A. L. & Sparke, L. S. 2004, AJ, 128, 2013 Cox, T. J., Dutta, S. N., Di Matteo, T., et al. 2006a, ApJ, 650, 791 Cox, T. J., Di Matteo, T, Hernquist, L, et al. 2006b, ApJ, 643, 692 Cox, T. J., Jonsson, P., Primack, J. R., & Somerville, R. 2006c, MNRAS, 373, 1013 Daddi, E., Elbaz, D., Walter, F., et al. 2010, ApJ, 714, 418 Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792 Desert, F.-X., Boulanger, F., & Puget, J. L. 1990, A&A, 237, 215 Di Matteo, P., Bournaud, F., Martig, M., Combes, F., Melchior, A. -L., & Semelin, B. 2008, A&A, 492, 31. Downes, D. & Solomon, P. M. 1998, ApJ, 507, 615 Doyle, M. T., Drinkwater, M. J., Rohde, D. J., et al. 2005, MNRAS, 361, 34 Elbaz, D., Arnaud, M., & Vangioni-Flam, E. 1995, A&A, 303, 345 Ellison, S. L., Patton, D. R., Simard, L., & McConnachie, A. W. 2008, ApJ, 672, L107 Elmegreen, B. G., Kaufman, M., Bournaud, F., et al. 2016, ApJ, 823, 26 English, J., Norris, R. P., Freeman, K. C., & Booth, R. S. 2003, AJ, 125, 1134 Fensch, J., Renaud, F., Bournaud, F., et al. 2017, MNRAS, 465, 1934 Fernández, X., Petric, A. O., Schweizer, F., & van Gorkom, J. H. 2014, AJ, 147, 74 Finlator, K. & Davé, R. 2008, MNRAS, 385, 2181 Goulding, A., Greene, J. E., Ma, C.-P., et al.  2016, ApJ, 826, 167 Gao, Y. & Solomon, P. M. 2004, ApJS, 152, 63 Gaspari, M., Eckert, D., Ettori, S., et al. 2019, astro-ph/1904.10972 Georgakakis, A., Forbes, D. A., & Norris, R. P. 2000, , 318, 124 Georgakakis, A., Hopkins, A. M., Caulton, A., Wiklind, T., Terlevich, A. I., & Forbes, D. A. 2001, MNRAS, 326, 1431 Gordon, S., Koribalski, B., & Jones, K. 2001, MNRAS, 326, 578 Grimes, J. P., Heckman, T., Strickland, D., & Ptak, A. 2005, ApJ, 628, 187 Hayward, C. C., Torrey, P., Springel, V., Hernquist, L., & Vogelsberger, M. 2014, MNRAS, 442, 1992 Heckman, T. M., Alexandroff, R. 
M., Borthakur, S., Overzier, R., & Leitherer, C. 2015, ApJ, 809, 147 Hibbard, J. E. & van Gorkom, J., 1996, , 111, 655 Hirschauer, A. S., Salzer, J. J., Janowiecki, S., & Wegner, G. A. 2018, AJ, 155, 82 Hopkins, P. F., Quataert, E., & Murray, N. 2011, MNRAS, 417, 950 Hopkins, P. F., Quataert, E., & Murray, N. 2012b, MNRAS, 421, 3488 Hopkins, P. F., Quataert, E., & Murray, N. 2012a, MNRAS, 421, 3522 Hopkins, P. F., Kereš, D., Norman, M., Hernquist, L., Narayanan, D., & Hayward, C. C. 2013, MNRAS, 433, 78 Hopkins, P., Kereš, D., Önorbe, J. 2014, MNRAS, 445, 581 Hopkins, P. F., Narayanan, D., Murray, N., & Quataert, E. 2013b, MNRAS, 433, 69 Horellou, C., Casoli, F., Combes, F., & Dupraz, C. 1995, A&A, 298, 743 Houck, J. C. & Denicola, L. A. 2000, Astronomical Data Analysis Software and Systems IX, ASP Conference Proceedings, Vol. 216, ed. N. Manset, C. Veillet, & D. Crabtree, Astronomical Society of the Pacific, p. 591 Huchtmeier, W. K. & Richter, O.-G. 1989, A General Catalog of HI Observations of Galaxies (New York: Springer-Verlag) Into, T. & Portinari, L. 2013, MNRAS, 430, 2715 Inami, H., Armus, L., Charmandaris, V., et al. 2013, ApJ, 777, 156 Israel, F. P. 2005, A&A, 438, 8551 Izotov, Y. I., Guseva, N. G., Fricke, K. J., & Henkel, C. 2014, A&A, 561, A33 Izotov, Y. I., Guseva, N. G., Fricke, K. J., & Henkel, C. 2015, MNRAS, 451, 2251 Jester, S., Schneider, D. P., Richards, G. T., et al. 2005, AJ, 130, 873 Jo, Y.-S., Seon, K., Min, K.-W., Edelstein, J., & Han, W. 2019, ApJS, arXiv:1905.07823 Juneau, S., Narayanan, D. T., Moustakas, J., Shirley, Y. L., Bussmann, R. S., Kennicutt, R. C., Jr., & Vander Bout, P. A. 2009, ApJ, 707, 1217 Kennicutt, R. C., Jr. 1998, ARAA, 36, 189 Kennicutt, R. C. & Evans, N. J. 2012, ARAA, 50, 531 Kennicutt, R. C., Jr., Hao, C.-N., Calzetti, D., et al. 2009, ApJ, 703, 1672 Kereš, D., Katz, N., Davé, R., Fardal, M., & Weinberg, D. H. 2009, MNRAS, 396, 2332 Kim, D.-W. & Fabbiano, G. 2013, ApJ, 776, 116 Kim, C.-G. & Ostriker, E. C. 
2017, ApJ, 846, 133 Kleineberg, K., Sánchez-Blázquez, P., & Vazdekis, A. 2011, ApJ, 732, 33 Köppen, J., Weidner, C., & Kroupa, P. 2007, MNRAS, 375, 673 Koski, A. T. 1978, ApJ, 223, 56 Kotilainen, J. K., Moorwood, A. F. M., Ward, M. J., & Forbes, D. A. 1996, A&A, 305, 107 Kregenow, J. M., Sirk, M., Sankrit, R., et al. 2006, AAS, 209, 1702 Lara-López, M. A., Cepa, J., Bongiovanni, A., et al. 2010, A&A, 521, 53 Larson, K., Sanders, D. B., Barnes, J. E., et al. 2016, ApJ, 825, 128 Li, J.-T. & Wang, Q. D. 2013, MNRAS, 435, 3071 Li, M., Ostriker, J. P., Cen, R., Bryan, G. L., & Naab, T. 2015, ApJ, 814, 4 Lotz, J. M., Jonsson, P., Cox, T. J., & Primack, J. R. 2000, MNRAS, 391, 1137 Ma, X., Hopkins, P. F., Faucher-Giguere, C.-A., et al. 2016, MNRAS, 456, 2140 Mannucci, F., Cresci, G., Maiolini, R., Marconi, A., & Gnerucci, A. 2010, MNRAS, 408, 2115 Martin, J. M., Bottinelli, L., Dennefeld, M., & Gouguenheim, L. 1991, A&A, 245, 393 Maraston, C. 1998, MNRAS, 300, 872 Mathews, W. G. & Brighenti, F. 2003, ARA&A, 41, 191 McCray, R. 1987, in Spectroscopy of Astrophysical Plasmas, ed. A. Dalgarno & D. Layzer (Cambridge: Cambridge Univ. Press), 260 McKee, C. F. & Cowie, L. L. 1977, ApJ, 215, 213 McQuinn, K. B. W., Skillman, E. D., Heilman, T. N., Mitchell, N. P., & Kelley, T. 2018, 477, 3164 Meiksin, A. 2016, MNRAS, 461, 2762 Mineo, S., Gilfanov, M., & Sunyaev, R. 2012b, , 426, 1870 Mirabel, I. F., Booth, R. S., Garay, G., Johansson, L. E. B., & Sanders, D. B. 1990, A&A, 236, 327 Moreno, J., Torrey, P., Ellison, S. L., et al. 2019, MNRAS, 485, 1320 Muratov, A. L., Kereš, D., Faucher-Giguère, C.-A., et al. 2015, MNRAS, 454, 2691 Noeske, K. G., Weiner, B. J., Faber, S. M., et al. 2007, ApJ, 660, L43 Obreschkow, D. & Rawlings, S. 2009, MNRAS, 394, 1857 Owen, R. A. & Warwick, R. S. 2009, , 394, 1741 Orr, M. E., Hayward, C. C., Hopkins, P. F., et al. 2018, MNRAS, 478, 3653 O’Sullivan, E., Forbes, D. A., & Ponman, T. J. 
2001, MNRAS, 328, 461 [^1]: http://ned.ipac.caltech.edu [^2]: \[3.6\] $-$ \[24\] is defined as the magnitude in the 3.6 $\mu$m filter minus that in the 24 $\mu$m filter, using zero magnitude flux densities of 277.5 Jy and 7.3 Jy, respectively. [^3]: With this definition, the SFE is equal to 1/$\tau$$_{\rm dep}$, where $\tau$$_{\rm dep}$ is the global depletion timescale, the time to use up the molecular gas. [^4]: https://heasarc.gsfc.nasa.gov/xanadu/xspec/ [^5]: https://space.mit.edu/ASC/ISIS/ [^6]: Portable Interactive Multi-Mission Simulator; http://asc.harvard.edu/toolkit/pimms.jsp [^7]: SAOImageDS9 development has been made possible by funding from the Chandra X-ray Science Center (CXC) and the High Energy Astrophysics Science Archive Center (HEASARC) with additional funding from the JWST Mission office at Space Telescope Science Institute.
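For reference, the \[3.6\] $-$ \[24\] color defined in footnote 2 can be evaluated directly from the quoted zero-magnitude flux densities (a minimal sketch; the function name and example values are our own, only the 277.5 Jy and 7.3 Jy zero points come from the footnote):

```python
import math

def color_36_24(f36_jy, f24_jy):
    """[3.6] - [24] color from flux densities in Jy, using the zero-magnitude
    flux densities given in footnote 2 (277.5 Jy at 3.6 um, 7.3 Jy at 24 um)."""
    m36 = -2.5 * math.log10(f36_jy / 277.5)
    m24 = -2.5 * math.log10(f24_jy / 7.3)
    return m36 - m24

# A source whose flux ratio equals the zero-point ratio has zero color.
assert abs(color_36_24(277.5, 7.3)) < 1e-12
```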
--- author: - 'Zhen-Tao Zhang[^1]' - 'Zheng-Yuan Xue' - Yang Yu title: 'Detecting fractional Josephson effect through $4\pi$ phase slip' ---

Introduction
============

Topological superconductors with $p$-wave pairing are a hot topic in condensed matter physics. Such a system can host at its boundaries an exotic kind of quasiparticle, Majorana fermions (MFs), which are their own antiparticles. MFs have important applications in quantum information processing [@Kitaev01; @Nayak08; @Xue13; @Xue15]. Two separate MFs can form one physical qubit, called a topological qubit. This non-locality makes the topological qubit immune to local environmental noise. To date, an intrinsic topological superconductor has yet to be found. However, MFs are predicted to exist in more complicated systems, e.g., a topological insulator coupled to an s-wave superconductor via the proximity effect [@Fu08], or a spin-orbit coupled semiconducting nanowire combined with superconductivity and a magnetic field [@Lutchyn10; @Oreg10]. Recently, several groups have claimed to have observed important signatures of MFs in these systems [@Mourik12; @Deng12; @Das12]. However, the existence of MFs has not been confirmed due to the lack of smoking-gun evidence.

A remarkable signature of MFs is the fractional Josephson effect. It is well known that the supercurrent through a conventional Josephson junction is $2\pi $ periodic in the phase difference across the junction. However, this statement is not always true for a topological Josephson junction, which is made of two weakly coupled topological superconductors instead of s-wave superconductors. Kitaev predicted that the current-phase relation in a topological Josephson junction should be $4\pi $ periodic [@Kitaev01]. This period doubling of the Josephson current is protected by fermion parity conservation. The fermion parity does not change unless a quasiparticle excitation occurs.
Unfortunately, non-equilibrium quasiparticles have been found in superconducting systems at very low temperature, a phenomenon called quasiparticle poisoning [@Matveev93; @Joyez94]. It can break the parity conservation of the system and restore the $2\pi $ period of the current within a characteristic time. Therefore, an experiment to probe the $4\pi $ periodicity should be accomplished within the characteristic time of quasiparticle poisoning. On the other hand, the experimental duration is limited by the adiabatic condition and the measurement speed: fast manipulation of the phase difference can excite transitions from the subgap Majorana bound states to the out-of-gap continuum states via Landau-Zener transitions. It is therefore challenging to experimentally detect the fractional Josephson effect. Recently, several theoretical proposals have been brought forward to overcome the quasiparticle poisoning problem [@San-Jose12; @Houzet13; @Peng16]. Although these proposals are nearly insensitive to quasiparticle poisoning, they all require that the junction work in the ballistic regime, where the nanowire is nearly transparent, i.e., the conductance $D\sim 1$. In this regime a nontopological Josephson junction can also produce the fractional Josephson effect due to Landau-Zener transitions [@Sau12; @Sothmann13]. Therefore, it is desirable to devise a scheme working in the tunneling regime of the junction ($D\ll 1$). In addition, most previous studies have focused on the AC Josephson effect, where the junction is voltage or current biased. Actually, the fractional DC Josephson effect, which does not cause dissipation, is more useful in the context of quantum information processing. For instance, it can be employed to couple topological qubits with conventional superconducting qubits. Here we conceive a scheme for detecting the fractional DC Josephson effect.
Compared with its AC analog [@Wiedenmann16; @Bocquillon16; @Bocquillon17], the DC effect is more susceptible to parity-breaking excitations and other imperfections. Generally, three mechanisms, namely conventional Josephson coupling [@Pekker13], quasiparticle poisoning, and the coupling of the MFs within one topological superconductor, result in a conventional $2\pi $ phase slip which screens the $4\pi $ slip of the topological Josephson energy. By carefully designing the parameters of the device and the experiment, we can overcome these problems at the same time. Firstly, the conventional Josephson coupling can be neglected when the parameters of the superconducting circuit are chosen properly, because the conventional Josephson energy $E_{J}$ depends on the parameters of the junction in a different manner than its topological analog $E_{m}$. When $E_{J}$ is much smaller than $E_{m}$, the $2\pi $ phase slips are inhibited. Secondly, our scheme can be implemented on a time scale much shorter than the characteristic time of quasiparticle poisoning. Finally, the circuit used in our scheme can be designed such that the interaction of the MFs within one topological superconductor is much smaller than the topological Josephson coupling. In this case, the $4\pi $ phase slip can overwhelm the conventional $2\pi $ slip.

System and Hamiltonian
======================

The system we consider is a superconducting loop interrupted by a junction. The junction is made by placing a spin-orbit coupled semiconductor nanowire on two separate superconductors. The two sections of the nanowire in contact with the superconductors underneath become superconducting due to the proximity effect. Combined with a parallel magnetic field, the nanowire can be tuned into the topological phase.
When the Zeeman splitting exceeds a critical value $B_{c}=\sqrt{\Delta ^{2}+\mu ^{2}}$ ($\Delta $ and $\mu $ are the superconducting gap and the chemical potential, respectively), the two sections of proximitized nanowire become topological superconductors and two pairs of MFs emerge at their boundaries (see Fig. \[circuit\]). Moreover, the two MFs at the junction couple with each other. The coupling Hamiltonian reads $$\label{eq1} H_{m}=i\gamma _{1}\gamma _{2}E_{m}\cos \frac{\varphi }{2},$$ in which $\gamma _{1},\gamma _{2}$ are Majorana operators, $\varphi $ is the phase difference across the junction, and $E_{m}=\Delta \sqrt{D}$ is the amplitude of the topological Josephson coupling energy, with $D$ the conductance of the quasi-one-dimensional nanowire. Besides, a conventional Josephson coupling of the junction may also be present, related to the quasi-continuum states above the superconducting gap. In the case of a one-channel nanowire, the conventional Josephson coupling can be written as $$\label{eq2} H_{J}=-\Delta \sqrt{1-D\sin ^{2}\frac{\varphi }{2}}.$$ In the low conductance regime ($D\ll 1$), $H_{J}$ reduces to the celebrated tunneling Josephson coupling $H_{J}=-E_{J}\cos \varphi $ (up to a constant) with $E_{J}=\Delta D/4$. It is therefore straightforward to deduce the relation $E_{J}=E_{m}^{2}/4\Delta $. If $E_{m}$ is much smaller than the superconducting gap, we get $E_{J}\ll E_{m}$. In this case, we can safely ignore the $H_{J}$ term [@Hell16] and write the whole Hamiltonian as $$\label{eq3} H=E_{c}n^{2}+E_{L}(\varphi -\varphi _{e})^{2}+H_{m},$$ where $E_{c}=2e^{2}/C$ is the charging energy of the junction, and $E_{L}=(\phi _{0}/2\pi )^{2}/2L$ is the inductive energy of the circuit, with $\phi _{0}$ being the flux quantum. $\varphi _{e}=2\pi \phi _{e}/\phi _{0}$, where $\phi _{e}$ denotes the external flux threading the loop. The Hamiltonian is the same as that of a flux qubit except for the Josephson coupling term.
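As a quick sanity check, the defining $4\pi$ periodicity of the Majorana term in Eq. (\[eq1\]) can be verified numerically (a minimal sketch; the value of $E_m$ is the one quoted later in the text, and the sign convention for the parity eigenvalue of $i\gamma_1\gamma_2$ is an illustrative assumption):

```python
import numpy as np

# Illustrative parameter value (the text later uses E_m = 25 GHz x h).
E_m = 25.0  # GHz

def majorana_energy(phi, parity):
    """Energy of the Majorana term E_m * cos(phi/2) * (2 f^dag f - 1).

    parity = -1 for the even state (f^dag f = 0), +1 for the odd state."""
    return parity * E_m * np.cos(phi / 2.0)

phi = np.linspace(0, 8 * np.pi, 1001)
E_even = majorana_energy(phi, -1)

# Shifting phi by 2*pi flips the sign of the coupling: the energy is NOT 2*pi periodic...
assert np.allclose(majorana_energy(phi + 2 * np.pi, -1), -E_even)
# ...but it IS 4*pi periodic, the hallmark of the fractional Josephson effect.
assert np.allclose(majorana_energy(phi + 4 * np.pi, -1), E_even)
# Flipping the fermion parity has the same effect as a 2*pi phase shift.
assert np.allclose(majorana_energy(phi, +1), majorana_energy(phi + 2 * np.pi, -1))
```

This makes explicit why fixing the fermion parity is essential: without parity conservation, a $2\pi$ shift combined with a parity flip maps the spectrum onto itself.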
As is well known, a pair of MFs composes one Dirac fermion, and $H_{m}$ can be expressed as $$\label{eq4} H_{m}=E_{m}\cos \frac{\varphi }{2}(2f^{\dag }f-1),$$ ![(Color online) Schematic of the circuit. The superconducting loop is interrupted by a superconductor-normal metal-superconductor junction. The loop is biased with an external flux $\protect\phi _{e}$. The junction is formed by a spin-orbit coupled nanowire lying on the separate superconductors. When the magnetic field along the nanowire is larger than a critical value, the two sections of proximitized nanowire (orange sections) are topological superconductors. At the boundaries are four MFs $\protect\gamma _{1},\protect\gamma _{2},\protect\gamma _{3},\protect\gamma _{4}$. []{data-label="circuit"}](1.eps) in which we have defined $f=(\gamma _{1}+i\gamma _{2})/2$. The eigenvalue of $f^{\dag }f$ (0 or 1) determines the parity of the Dirac fermion (even or odd). The topological Josephson coupling given by $H_{m}$ has two distinguishing characteristics. Firstly, the coupling is $4\pi $ periodic in the phase difference. As a result, charge tunnels through the junction in units of single electrons instead of Cooper pairs. Very recently, an experiment [@Albrecht16] examined this characteristic in the Coulomb blockade regime, in which $E_{c}\gg E_{m}$. In the opposite regime, i.e., $E_{c}\ll E_{m}$, the $4\pi $ phase slip dual to single-electron tunneling can occur. Secondly, the coupling depends upon the fermion parity of the two MFs at the junction. This characteristic makes the $4\pi $ phase slip sensitive to fermion-parity breaking events, such as quasiparticle poisoning. In the following section, we present our scheme for uncovering the unique $4\pi $ feature of MFs.

Scheme
======

We now investigate how to observe the $4\pi $ phase slip with the system described in the last section. Without loss of generality, we assume that the parity of the MFs is restricted to the even subspace.
Later on, we will consider the effect of an unintended change of the parity on the phase slips. Under these circumstances, the potential energy of the whole Hamiltonian (Eq. (\[eq3\])) is $$\label{eq5} U=E_{L}(\varphi -\varphi _{e})^{2}-E_{m}\cos \frac{\varphi }{2}.$$ By tuning the parameter $\varphi _{e}$ we can control the configuration of the potential. If $\varphi _{e}=0$, the potential has one global minimum at $\varphi =0$ (see Fig. 2A). If the flux is biased at $\varphi _{e}=2\pi $, a symmetric double-well profile of the potential is formed, similar to the potential of a flux qubit biased at $\varphi _{e}=\pi $. However, the separation of the two minima of the double well is $\sim 4\pi $ instead of $\sim 2\pi $ (see Fig. 2B). The lowest two energy eigenstates in the double well are symmetric and antisymmetric superpositions of the left and right local states. Their energy splitting is denoted by $\Delta E$. To probe the $4\pi $ phase slip, we initially set $\varphi _{e}=0$. In the low-temperature limit, the system will relax to the ground state in the well around $\varphi =0$. Then, we switch the bias to $\varphi _{e}=2\pi $ quickly enough that the system remains localized in the left well during this operation, and wait for a time $\Delta t\sim 1/\Delta E$. In this period, resonant tunneling of the phase difference between the two wells can happen, and the state of the system oscillates coherently between the left and right local states of the double well. Finally, we bias the circuit away from $\varphi _{e}=2\pi $ and measure the total flux of the circuit. The resulting flux can either be about 0 or $2\phi _{0}$, corresponding to the left or right local state of the double well, respectively. The probability of finding “$2\phi _{0}$” oscillates with $\Delta t$. In experiment, we can measure the total flux of the loop with another RF SQUID [@Spanton17].
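The double-well picture above can be illustrated with a short numerical sketch (assuming the parameter values $E_m=25$, $E_c=3$, $E_L=1$, in GHz$\times h$, quoted later in the text; the finite-difference discretization of $n=-i\,d/d\varphi$ is our own illustrative choice, not the authors' method). It locates the two minima of Eq. (\[eq5\]) at $\varphi_e=2\pi$ and extracts the doublet splitting $\Delta E$ in the even-parity sector:

```python
import numpy as np

# Parameter values quoted in the text (in GHz, with h = 1): E_m = 25, E_c = 3, E_L = 1.
E_m, E_c, E_L = 25.0, 3.0, 1.0
phi_e = 2 * np.pi  # bias at the symmetric double-well point

def U(phi):
    """Even-parity potential of Eq. (5)."""
    return E_L * (phi - phi_e) ** 2 - E_m * np.cos(phi / 2.0)

phi = np.linspace(phi_e - 4 * np.pi, phi_e + 4 * np.pi, 1201)
u = U(phi)

# Locate the interior local minima of the double well on the grid.
is_min = (u[1:-1] < u[:-2]) & (u[1:-1] < u[2:])
minima = phi[1:-1][is_min]
sep = minima[-1] - minima[0]  # separation: below 4*pi, close to 3*pi

# Doublet splitting: diagonalize H = E_c n^2 + U(phi) with n = -i d/dphi,
# using a central finite-difference discretization of the kinetic term.
h = phi[1] - phi[0]
off = -E_c / h**2 * np.ones(len(phi) - 1)
H = np.diag(2 * E_c / h**2 + u) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)
delta_E = E[1] - E[0]  # splitting of the lowest symmetric/antisymmetric pair

assert len(minima) == 2 and 2 * np.pi < sep < 4 * np.pi
assert 0.0 <= delta_E < E[2] - E[1]  # doublet splitting well below the level spacing
```

With these values the minima separation evaluates to roughly $3\pi$, consistent with the remark later in the text that the amplitude of the “$4\pi$” slip is actually about $3\pi$ for a finite ratio $E_m/E_L$.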
The probability of the system projecting onto the $2\phi _{0}$ state can be obtained by repeating the above operations many times. Note that if the same operations were applied to a conventional or topologically trivial RF SQUID, the final measured flux would definitely be $\phi _{0}$, independent of $\Delta t$, because of the $2\pi $ periodicity of its Josephson coupling [@Lange15]. Hence, the oscillating $4\pi $ phase slip is a distinctive signature of a topological Josephson junction. However, in practice the superconducting circuit is subject to some unavoidable disturbances which might destroy this signature. Therefore, it is vital to investigate the robustness of our scheme. ![Potential energy configurations. The fermion parity is even, and the circuit is biased at $\protect\varphi _{e}=0$ (A), $\protect\varphi _{e}=2\protect\pi $ (B). In (B), the potential is a symmetric double well whose lowest two energy eigenstates are symmetric and antisymmetric superpositions of left and right local states. ](2.eps){width="7cm" height="4.5cm"}

Effect of quasiparticle poisoning
---------------------------------

In Eq. (\[eq5\]), we have assumed that the parity of the MFs is conserved during the whole process. In reality, parity conservation can be broken by quasiparticle poisoning. Quasiparticles exist in various superconducting systems even at very low temperature. One quasiparticle excitation event can alter the occupation of the in-gap states in a junction. For the topological Josephson junction, it flips the parity of the MFs. In our case, we prepare the MFs in the even parity state, so an unwanted excitation will take them to the odd state. If this happens when the circuit is biased at $\varphi _{e}=2\pi $, the potential energy profile is changed. It is obvious that the circuit will eventually stay in the ground state of the well with its minimum at $\varphi =2\pi $. That is exactly the result for a conventional RF SQUID under the same bias sequence.
Thus, the $4\pi $ phase slip disappears. Therefore, anyone aiming to observe the $4\pi $ phenomenon must carry out the experiment within a period shorter than the quasiparticle poisoning time. Generally, the parity lifetime of the bound state in a proximitized semiconductor nanowire under an applied magnetic field exceeds 10 $\mu s$ [@Albrecht16]. The time needed to implement our scheme is on the order of $1/\Delta E$. Typically, we choose the parameters as follows: $E_{m}=25$ GHz$\times h$, $E_{c}=3$ GHz$\times h$, $E_{L}=1$ GHz$\times h$. With this parameter configuration, we have numerically calculated the splitting $\Delta E=25$ MHz. This value means that the phase slips happen on a time scale of $40$ $ns$, which is at least two orders of magnitude shorter than the poisoning time. We stress that after each run of the experiment, the fermion parity is re-initialized to the even subspace. Therefore, we can claim that quasiparticles have little impact on our scheme.

A comment is in order. In our parameter set, the Josephson coupling energy is much larger than the inductive energy, with ratio $E_{m}/E_{L}=25$. Even so, the finiteness of this ratio makes the distance between the two minima of the symmetric double well not equal to $4\pi $, but rather smaller. In fact, the distance is about $3\pi $ with our parameters. From this point of view, the expression “$4\pi $ phase slip” is somewhat misleading. Similarly, in a conventional RF SQUID the amplitude of the phase slip is not $2\pi $ either ($<2\pi $). Actually, the names stem from the form of the related Josephson coupling. What is more, we can distinguish these two kinds of phase slips without any confusion.

Effect of finite length of topological superconductor
-----------------------------------------------------

We know that the coupling of the two MFs of one topological superconductor oscillates with the length of the superconductor [@Cheng09; @Sarma12].
The oscillation amplitude decreases exponentially with the length $L$, $$\label{eq6} \varepsilon =\varepsilon _{0}e^{-L/\xi },$$ where $\varepsilon _{0}$ is a prefactor and $\xi $ is the superconducting coherence length. Generally, if the topological superconductor is much longer than its superconducting coherence length, this coupling is rather weak and can be neglected. That is why we have not included the interaction between $\gamma _{1}$ ($\gamma _{2}$) and $\gamma _{3}$ ($\gamma _{4}$) in Eq. (\[eq1\]). However, in practice, the length of a one-dimensional topological superconductor may be limited by the fabrication technique or by the size of the circuit. It is therefore necessary to investigate the effect of the coupling between $\gamma _{1}$ ($\gamma _{2}$) and $\gamma _{3}$ ($\gamma _{4}$) on the $4\pi $ phase slips.

Let us first look at the Josephson coupling energy in the absence of the interactions $\gamma _{1}\gamma _{3}$, $\gamma _{2}\gamma _{4}$, i.e., $H_{m}$ (Eq. (\[eq4\])). When the phase difference takes values of $(2k+1)\pi $ ($k$ an integer), the even and odd parity states are degenerate. When the interactions are present, the potential energy can be written as $$\label{eq7} U^{\prime }=E_{L}(\varphi -\varphi _{e})^{2}-E_{m}\cos \frac{\varphi }{2}\sigma _{z}+\varepsilon \sigma _{x},$$ ![Two kinds of tunnelings. The solid lines describe the potential energy given by Eq. (\[eq7\]) after diagonalization in the parity subspace. The existence of the MF couplings $\protect\gamma _{1}\protect\gamma _{3},\protect\gamma _{2}\protect\gamma _{4}$ makes the transition of the fermion parity of $\protect\gamma _{1}\protect\gamma _{2}$ possible. Tunneling 1 (dashed line) does not change the parity while Tunneling 2 (dotted line) does.](3.eps){width="7cm" height="6cm"} where $\sigma _{x,z}$ are Pauli operators acting in the fermion parity space of $\gamma _{1},\gamma _{2}$.
$\varepsilon $ denotes the coupling strength of $\gamma _{1}\gamma _{3}$ ($\gamma _{2}\gamma _{4}$), which is much smaller than $E_{m}$. It is easy to see that the odd-even degeneracies at $\varphi =(2k+1)\pi $ are lifted; instead, anticrossings arise, which leads to a mixing of the two parity states. When the circuit is biased at $\varphi _{e}=2\pi $ with the initial state being the ground state in the left well, there are two possible tunneling events. One is tunneling to the right well with the same parity (named Tunneling 1), and the other is tunneling to the nearest well with the opposite parity (Tunneling 2), as shown in Fig. 3. Tunneling 1 is the consequence of the topological Josephson coupling and signifies the $4\pi $ phase slip. In contrast, Tunneling 2 denotes the $2\pi $ phase slip which is always associated with a topologically trivial Josephson junction. Therefore, if Tunneling 2 dominates the process, the $4\pi $ phase slip is masked and we cannot tell the topological phase from the topologically trivial phase. To this end, one needs to clarify whether Tunneling 2 is weak enough to be neglected under experimentally feasible conditions. We now estimate the rate of Tunneling 2. The coexistence of parity switching and quantum fluctuations of the phase difference makes this task troublesome. We solve the problem in a quasiclassical manner. As Tunneling 2 changes the fermion parity, it is reasonable to assume that the tunneling rate is related to the transition rate between the parity states when $\varphi $ is considered as a classical quantity. With the circuit biased at $\varphi _{e}=2\pi $, the system is initially located in the left well with its minimum at $\sim \pi /2$ (not 0, due to the finiteness of $E_{m}/E_{L}$) and the parity is even. After Tunneling 2, the system localizes at $\varphi =2\pi $ and the parity is odd. Therefore, the tunneling rate is limited by the transition rate of the fermion parity at $\varphi =\pi /2$.
For convenience, we assume they are approximately equal. The calculation of the parity transition rate is a typical two-level-system problem. Starting with even parity, the population of the odd parity state oscillates with time between 0 and $P$, with $P=\varepsilon \Big /\sqrt{\varepsilon ^{2}+(E_{m}\cos \frac{\pi }{4})^{2}}$. According to Eq. (\[eq6\]) and the parameters in Ref. [@Deng16], when the nanowire is as long as $L=2$ $\mu m$, which is reachable in experiment, the MF coupling $\varepsilon $ is three orders of magnitude smaller than $E_{m}$. In this case, the maximum odd parity population is $P\approx 0$, which means that the even$\rightarrow $odd transition rate is almost vanishing. One may argue that the initial state does not localize at $\varphi =\pi /2$, but spreads over a range even including the anticrossing point $\varphi =\pi $. In fact, the parity transition rate reaches its maximum value of $\varepsilon $ at the anticrossing, which is of the same order of magnitude as the rate of Tunneling 1, i.e., $\Delta E$. However, the probability of the initial state being around the anticrossing is very small due to the large ratio $E_{m}/\varepsilon $, so Tunneling 2 would rarely occur on the timescale of Tunneling 1. In other words, the $4\pi $ phase slip will not be masked by the $2\pi $ phase slip.

Discussion and Conclusion
=========================

We would like to discuss the feasibility of our scheme. The scheme is conceived based on the Hamiltonian of the system given by Eq. (\[eq3\]), in which we have neglected the conventional Josephson coupling of the topological junction. To justify this approximation, we estimate the ratio $E_{J}/E_{m}$ with practical parameters. For the typical material NbN, the superconducting critical temperature is $\sim $10 K, which corresponds to eight times the value of $E_{m}$ chosen in this paper. This condition in turn leads to $E_{J}=E_{m}/32$.
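The two smallness estimates above, the maximum odd-parity population $P$ and the ratio $E_J/E_m$, are easy to check numerically (a sketch; the value $\varepsilon = 10^{-3}E_m$ follows the text's order-of-magnitude estimate for a 2 $\mu$m nanowire, and $\Delta = 8E_m$ the NbN estimate):

```python
import math

# E_m as quoted in the text; epsilon three orders of magnitude below E_m
# (the text's estimate for a 2-um nanowire); Delta = 8 * E_m (NbN estimate).
E_m = 25.0          # GHz
eps = 1.0e-3 * E_m  # GHz, assumed hybridization of gamma_1 gamma_3 (gamma_2 gamma_4)
Delta = 8.0 * E_m   # GHz

# Maximum odd-parity population at phi = pi/2, from the two-level formula above.
P = eps / math.sqrt(eps**2 + (E_m * math.cos(math.pi / 4)) ** 2)
assert P < 2e-3  # parity flips are strongly suppressed away from the anticrossing

# Tunneling-regime conventional Josephson energy: E_J = E_m^2 / (4 Delta) = E_m / 32.
E_J = E_m**2 / (4.0 * Delta)
assert math.isclose(E_J, E_m / 32.0)
```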
Consequently, the conventional Josephson coupling has little effect on the 4$\pi $ phase slips and can be ignored. In addition, the large ratio $\Delta /E_{m}$ helps prevent the subgap Majorana bound state from being excited into the continuum states. The other issue is the viability of an RF SQUID with a very small inductance energy. It is worth noting that a small inductance energy, and thus a large inductance *L*, is essential for the observation of the $4\pi $ phase slip, since the large ratio $E_{m}/E_{L}$ makes the distance between the minima of the double well of the superconducting phase far exceed $2\pi $. Taking $E_{L}=1$ GHz, the required loop inductance *L* is up to 100 nH. In experiment, one can design a large-area superconducting loop, or use an array of Josephson junctions as a superinductor, as in the fluxonium qubit [@Pekker13]. Indeed, the requirement of a large inductance could be relaxed at the expense of slightly reducing the amplitude of the phase slip.

In conclusion, we have proposed a scheme for detecting the fractional DC Josephson effect in a topological RF SQUID through the $4\pi $ phase slip. To observe this phase slip, we take advantage of the resonant tunneling of the phase difference. Our calculations with reachable parameters show that the duration of the protocol is much shorter than the quasiparticle poisoning time. More importantly, the $4\pi $ phase slip can overwhelm the topologically trivial $2\pi $ phase slip for practical nanowire lengths. Our scheme is experimentally feasible, and promising for exploring the interplay of topological superconductors and quantum computation.

We thank Shi-Liang Zhu for very helpful discussions. This work was funded by the National Science Foundation of China (No. 11404156), the Startup Foundation of Liaocheng University (Grant No. 318051325), the NFRPC (Grant No. 2013CB921804), and the NKRDP of China (Grant No. 2016YFA0301800).
[^1]: zhzhentao@163.com
--- abstract: 'We analyze the effect of local decoherence of two qubits on their entanglement and the Bell inequality violation. Decoherence is described by Kraus operators, which take into account dephasing and energy relaxation at an arbitrary temperature. We show that in the experiments with superconducting phase qubits the survival time for entanglement should be much longer than for the Bell inequality violation.' author: - 'A. G. Kofman' - 'A. N. Korotkov' title: Bell inequality violation versus entanglement in presence of local decoherence --- Entanglement of separated systems is a genuine quantum effect and an essential resource in quantum information processing. [@nie00] Experimentally, convincing evidence of two-qubit entanglement is a violation of the Bell inequality [@Bell] in its Clauser-Horne-Shimony-Holt[@CHSH] (CHSH) form. However, only for pure states does entanglement always [@cap73] result in a violation of the Bell inequality. In contrast, some mixed entangled two-qubit states (as we will see, most of them) do not violate the Bell inequality, [@wer89] though they may still exhibit nonlocality in other ways. [@pop94] The distinction between entanglement and Bell-inequality violation, in its relevance to experiments with superconducting phase qubits,[@ste06] is the subject of our paper. The two-qubit entanglement is usually characterized by the concurrence [@woo98] $C$ or by the entanglement of formation,[@Bennett-96] which is a monotonic function [@woo98] of $C$. Non-entangled states have $C=0$, while $C=1$ corresponds to maximally entangled states. There is a straightforward way[@woo98] to calculate $C$ for any two-qubit density matrix $\rho$. The Bell inequality in the CHSH form [@CHSH] is $|S|\leq 2$, where $S=E(\vec{a},\vec{b})-E(\vec{a},\vec{b}')+ E(\vec{a}',\vec{b})+ E(\vec{a}',\vec{b}')$ and $E(\vec{a},\vec{b})$ is the correlator of results ($\pm 1$) for measurement of two qubits (pseudospins) along directions $\vec{a}$ and $\vec{b}$.
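For the spin-zero (singlet) state, quantum mechanics gives $E(\vec a,\vec b)=-\vec a\cdot\vec b$, and the textbook coplanar measurement settings saturate $|S|=2\sqrt 2$. A minimal numerical sketch (the angle choice is the standard optimal one, not specific to this paper):

```python
import numpy as np

def corr(a, b):
    """Quantum correlator for the singlet state: E(a, b) = -a . b."""
    return -np.dot(a, b)

def n(theta):
    """Unit vector at angle theta within a fixed measurement plane."""
    return np.array([np.sin(theta), np.cos(theta)])

# Standard CHSH-optimal settings: a, a' orthogonal; b, b' bisecting them
a, ap = n(np.pi / 2), n(0.0)
b, bp = n(np.pi / 4), n(-np.pi / 4)
S = corr(a, b) - corr(a, bp) + corr(ap, b) + corr(ap, bp)   # |S| = 2*sqrt(2)
```

Any local hidden-variable model is bounded by $|S|\le 2$, so this value exceeds the classical bound by the maximal factor $\sqrt 2$ (Tsirelson's bound).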
This inequality should be satisfied by any local hidden-variable theory, while in quantum mechanics it is violated up to $|S|=2\sqrt{2}$ for maximally entangled (e.g., spin-zero) states. Mixed states produce a smaller violation (if any), and there is a straightforward way [@hor95] to calculate the maximum value $S_+$ of $|S|$ for any two-qubit density matrix. For states with a given concurrence $C$, there is an exact bound [@ver02] for $S_+$: $2\sqrt{2}C\leq S_+\leq 2\sqrt{1+C^2}$ (we consider only $S_+>2$), so that the Bell inequality violation is guaranteed if $C>1/\sqrt{2}$. For any pure state the upper bound is reached: $S_+=2\sqrt{1+C^2}$, so that non-zero entanglement always leads to $S_+>2$. The distinction between entanglement and Bell inequality violation has been well studied for the so-called Werner states, [@wer89] which have the form $\rho=f \rho_s+(1-f)\rho_{\rm mix}$, where $\rho_s$ denotes the maximally entangled (singlet) state, and $\rho_{\rm mix}={\bf 1}/4$ is the density matrix of the completely mixed state. The Werner state is entangled for[@wer89] $f>1/3$, while it violates the Bell inequality only when[@hor95] $f>1/\sqrt{2}$. The Werner states, however, are not relevant to most experiments (including those with superconducting phase qubits [@ste06]), in which an initially pure state becomes mixed due to decoherence (Werner states are produced by the so-called depolarizing channel[@nie00]). Recently a number of authors have analyzed effects of qubit decoherence on the Bell inequality violation [@sam03; @beenak03; @jak04; @sli05; @jam06] and entanglement.
[@Loss03; @tyu04; @tyu06; @tol05; @ged06; @san06; @Nori06] The best-studied models of decoherence in this context are pure dephasing [@sam03; @beenak03; @sli05; @tol05; @ged06; @Nori06] and zero-temperature energy relaxation, [@jak04; @jam06; @tyu04; @san06] while there are also papers considering a combination of these mechanisms, [@Loss03; @tyu06] high-temperature energy relaxation, [@jak04] and non-local decoherence. [@jak04; @sli05; @Nori06] In particular, for the case of pure dephasing it has been shown [@tol05; @tyu06] that the concurrence $C$ decays as a product of decoherence factors for the two qubits, and therefore a state remains entangled for an arbitrarily long time; moreover, the calculation of $S_+$ shows[@sam03; @beenak03] that the Bell inequality is also always violated. For the case of zero-temperature energy relaxation it has been shown that entanglement can still last forever[@tyu04; @san06; @jam06] (depending on the initial state), while a finite survival time has been obtained [@jam06] for the Bell inequality violation. In this paper we consider two-qubit decoherence due to general (Markovian) local decoherence of each qubit (including dephasing and energy relaxation at a finite temperature) and assume the absence of any other evolution. For this model we compare for how long an initial state remains entangled ($C>0$), and for how long it can violate the Bell inequality ($S_+>2$). In particular, we show that for typical (best) present-day parameters for phase qubits[@ste06] these durations differ by a factor of $\sim 8$. Before analyzing this problem let us discuss which fraction of the entangled two-qubit states violates the Bell inequality. This question is well-posed only if we introduce a particular metric (distance) and a corresponding measure (volume) in the 15-dimensional space of density matrices. Various metrics are possible; let us choose the Hilbert-Schmidt metric, [@nie00; @zyc01] for which the geometry in the space of states is Euclidean.
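A small-sample version of the Monte-Carlo experiment described next ($2\times 10^4$ states here, versus $10^9$ in the text) already reproduces the quoted fractions within statistical error. The sketch below assumes the Hilbert-Schmidt (Ginibre) sampling $\rho=A^\dagger A/{\rm tr}(A^\dagger A)$, the Peres partial-transpose test for entanglement, and the Horodecki $S_+$ criterion for Bell violation:

```python
import numpy as np

rng = np.random.default_rng(1)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def random_state():
    """Hilbert-Schmidt-uniform two-qubit density matrix (Ginibre construction)."""
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = A.conj().T @ A
    return rho / np.trace(rho).real

def entangled(rho):
    """Peres-Horodecki test: negative partial transpose <=> entangled (2x2 case)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() < 0

def violates_bell(rho):
    """Horodecki criterion: S_+ = 2*sqrt(u1 + u2) > 2, with u_i the two
    largest eigenvalues of T^T T, T_ij = tr(rho sigma_i x sigma_j)."""
    T = np.array([[np.trace(rho @ np.kron(a, b)).real for b in paulis]
                  for a in paulis])
    u = np.sort(np.linalg.eigvalsh(T.T @ T))
    return u[-1] + u[-2] > 1.0

n = 20000                      # the text uses 10^9; a small sample suffices here
states = [random_state() for _ in range(n)]
frac_ent = sum(map(entangled, states)) / n        # expect ~ 0.758
frac_bell = sum(map(violates_bell, states)) / n   # expect ~ 0.008
```

The two fractions converge to the quoted 75.76% and 0.822% as the sample grows; the Bell-violating states are a small subset of the entangled ones.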
Then random states $\rho$ with the uniform probability distribution can be generated as [@zyc01] $\rho=A^\dagger A/{\rm tr}(A^\dagger A)$, where $A$ is a $4\times 4$ matrix, all elements of which are independent Gaussian complex variables with the same variance and zero mean. Using this method, we performed a Monte-Carlo simulation, generating $10^9$ random states and checking whether they are entangled [@note1; @per96; @san98] and whether they violate the Bell inequality.[@hor95] In this way we confirmed that 75.76% of all states are entangled [@slater-07] and found that only 0.822% of all states violate the Bell inequality. Therefore, only a small fraction, 1.085%, of entangled states violate the Bell inequality. Now let us discuss the effect of decoherence. For one qubit it can be described by the Bloch equations [@coh92] (we use the basis of the ground state $|0\rangle$ and excited state $|1\rangle$) and characterized by the energy relaxation time $T_1$, the dephasing time $T_2$ ($T_2\leq 2 T_1$), and the Boltzmann factor $h=\exp (-\Delta /\theta )$, where $\Delta$ is the energy separation of the states and $\theta$ is the temperature. The usual solution of the Bloch equations can be translated into the language of a time-dependent superoperator ${\cal L}$ for the one-qubit density matrix $\rho$, so that $\rho(t)={\cal L}[\rho(0)]=\sum_{i=1}^4 K_i\rho(0)K_i^\dagger$, where the four Kraus operators $K_i$ can be chosen as $$\begin{aligned} K_1&=&\begin{pmatrix} 0&0\\ \sqrt{g}&0 \end{pmatrix}, \quad K_2=\begin{pmatrix} \sqrt{1-g}&0\\ 0&\lambda/\sqrt{1-g} \end{pmatrix}, \nonumber\\ K_3&=&\begin{pmatrix} 0&0\\ 0&\sqrt{1-hg-\lambda^2/(1-g)} \end{pmatrix}, \quad K_4=\begin{pmatrix} 0&\sqrt{hg}\\ 0&0 \end{pmatrix}, \label{5.8-m}\end{aligned}$$ where $g=[1-\exp(-t/T_1)]/(1+h)$, $\lambda=\exp (-t/T_2)$, and in our notation $|1\rangle=(1,0)^T$, $|0\rangle=(0,1)^T$. It is easy to check that the term under the square root in $K_3$ is always non-negative and equals 0 (for $t>0$) only if $T_2=2T_1$ and $\theta=0$. Notice that the choice of the Kraus operators $K_i$ is not unique (though limited to the unitary freedom of quantum operations[@nie00]) and, for instance, the Kraus operators presented in Ref. 
for the special cases of the depolarizing channel ($T_1=T_2$, $\theta =\infty$) and energy relaxation ($T_2=2T_1$) differ from Eq. (\[5.8-m\]). In general, decoherence of two qubits is described by many parameters (out of the 240 parameters describing a general quantum operation, only 15 describe unitary evolution). We choose a relatively simple but physically relevant model in which the decoherence is dominated by local decoherence of each qubit. (Non-local decoherence would be physically impossible in the case of a large distance between the qubits.) The model now involves six parameters: $T_1^{a,b}$, $T_2^{a,b}$, and $h_{a,b}=\exp(-\Delta_{a,b}/\theta_{a,b})$, where the subscripts (or superscripts) $a$ and $b$ denote the qubits, and the evolution is described by the tensor-product superoperator ${\cal L}={\cal L}_a\otimes{\cal L}_b$ (which is completely positive because of the complete positivity of ${\cal L}_{a,b}$). This superoperator contains 16 terms: $\rho(t)={\cal L}[\rho(0)] =\sum_{i,j=1}^4K_{ij}\rho(0)K_{ij}^\dagger$, $K_{ij} = K_i^a\otimes K_j^b$, where the operators $K_i^{a,b}$ are given by Eq. (\[5.8-m\]) for each qubit. As the initial state we consider an “odd” pure state $$|\Psi\rangle = \cos\beta\, |10\rangle + e^{i\alpha}\sin\beta\, |01\rangle \label{2.5}$$ ($0<\beta<\pi/2$), which is relevant for experiments with the phase qubits. [@ste06] Since the parameter $\alpha$ corresponds to a $z$-rotation of one of the qubits, while decoherence as well as the values of $C$ and $S_+$ are insensitive to such a rotation, all results of our model have either trivial or no dependence on $\alpha$.
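As a sanity check of the single-qubit building block, the Kraus set of Eq. (\[5.8-m\]) (as reconstructed here) should resolve the identity and reproduce the Bloch-equation populations and coherence decay. A short numerical sketch, with illustrative parameter values satisfying $T_2\le 2T_1$:

```python
import numpy as np

def kraus(t, T1, T2, h):
    """Kraus operators of Eq. (5.8-m); basis |1> = (1,0)^T, |0> = (0,1)^T."""
    g = (1 - np.exp(-t / T1)) / (1 + h)
    lam = np.exp(-t / T2)
    return [np.array([[0, 0], [np.sqrt(g), 0]]),                        # decay
            np.array([[np.sqrt(1 - g), 0], [0, lam / np.sqrt(1 - g)]]),
            np.array([[0, 0], [0, np.sqrt(1 - h * g - lam**2 / (1 - g))]]),
            np.array([[0, np.sqrt(h * g)], [0, 0]])]                    # excitation

def evolve(rho, t, T1, T2, h):
    """One-qubit map rho(t) = sum_i K_i rho K_i^dagger (real K_i here)."""
    return sum(K @ rho @ K.T for K in kraus(t, T1, T2, h))

t, T1, T2, h = 0.7, 1.0, 1.2, 0.1                          # illustrative units
completeness = sum(K.T @ K for K in kraus(t, T1, T2, h))   # should be identity
rho_t = evolve(np.diag([1.0, 0.0]), t, T1, T2, h)          # start in |1>
# Bloch-equation solution for the excited-state population:
p1_bloch = np.exp(-t / T1) + (h / (1 + h)) * (1 - np.exp(-t / T1))
```

The excited population relaxes toward the thermal value $h/(1+h)$, and off-diagonal elements decay as $\lambda=\exp(-t/T_2)$, as required.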
The evolution of the state (\[2.5\]) due to local decoherence ${\cal L}$ can be calculated analytically, and at time $t$ the non-vanishing elements of the two-qubit density matrix $\rho$ are $$\begin{aligned} \rho_{11}(t)&=&(1-g_a)\, h_b g_b \cos^2\beta + h_a g_a (1-g_b)\sin^2\beta ,\nonumber\\ \rho_{22}(t)&=&(1-g_a)(1-h_b g_b)\cos^2\beta + h_a g_a g_b \sin^2\beta ,\nonumber\\ \rho_{33}(t)&=& g_a h_b g_b \cos^2\beta +(1-h_a g_a)(1-g_b)\sin^2\beta ,\nonumber\\ \rho_{44}(t)&=& g_a (1-h_b g_b)\cos^2\beta +(1-h_a g_a)\, g_b \sin^2\beta ,\nonumber\\ \rho_{32}(t)&=&\rho_{23}^{*}(t)= \exp(-t/T_2^a-t/T_2^b)\, e^{i\alpha} \sin (2\beta)/2 , \label{2}\end{aligned}$$ where $g_{a,b}$ are defined below Eq. (\[5.8-m\]), and the subscripts $i,j=1,2,3,4$ of $\rho_{ij}$ correspond to the basis $\{|11\rangle, |10\rangle, |01\rangle,|00\rangle\}$. These equations become very simple at zero temperature because then $h_a=h_b=0$. Notice that the dephasing times $T_2^{a,b}$ enter Eqs. (\[2\]) only through the combination $1/T_2^a+1/T_2^b$ (this is not so for a general initial state), so that the two-qubit dephasing can be characterized by the single parameter $T_2\equiv 2/(1/T_2^a+1/T_2^b)$. For the state (\[2\]) the concurrence is [@tyu06; @jak04] $$C=2\max\left\{0,\, |\rho_{23}|-\sqrt{\rho_{11}\rho_{44}}\right\}, \label{3}$$ and the Bell inequality parameter $S_+$ is [@hor95; @jam06] $$S_+=2\max\left\{2\sqrt{2}\,|\rho_{23}|,\, \sqrt{4|\rho_{23}|^2+(\rho_{11}+\rho_{44}-\rho_{22}-\rho_{33})^2}\right\} , \label{5}$$ while for the initial state $C=\sin 2\beta >0$ and $S_+=2\sqrt{1+C^2}>2$. Notice that the first and second terms in Eq. (\[5\]) correspond to the “horizontal” and “vertical” measurement configurations, using the terminology of Ref. . Equations (\[2\]), (\[3\]), and (\[5\]) are all we need to analyze entanglement and Bell inequality violation. Notice that for pure dephasing ($T_1^a=T_1^b=\infty$) we have $\rho_{11}=\rho_{44}=0$, and therefore $$C=\exp(-2t/T_2)\sin 2\beta, \qquad S_+=2\sqrt{1+C^2}. \label{4}$$ In this case at any $t$ the state remains entangled [@tol05; @tyu06] and violates the Bell inequality.
[@sam03; @beenak03] (It also remains within the class of states producing maximal Bell inequality violation for a given concurrence.[@ver02]) In the case when both dephasing and energy relaxation are present but the temperature is zero, $\theta_a=\theta_b=0$, the concurrence $C$ is still given by Eq. (\[4\]) and lasts forever; [@san06; @jam06] however $S_+$ does not satisfy Eq. (\[4\]) and, most importantly, the Bell inequality is no longer violated after a finite time.[@jam06] Finally, in the presence of energy relaxation at non-zero temperature (at least for one qubit) the entanglement also vanishes after a finite time, as seen from Eq. (\[3\]), in which $\lim_{t\rightarrow\infty} \rho_{11}\rho_{44} \neq 0$. Let us consider in more detail the case when both dephasing and energy relaxation are present, but the temperature is zero and $T_1^a=T_1^b\equiv T_1$. Then Eq. (\[5\]) for $S_+$ becomes very simple since $\rho_{11}=0$ and $\rho_{44}=1-\exp (-t/T_1)$. The time dependence $S_+(t)$ consists of three regions: at small $t$ it is always determined by the second term [@note-beta] in Eq. (\[5\]), then after some time $t_1$ the first term becomes dominant, while after a later time $t_2$ the second term becomes dominant again. Notice that in the second region $S_+=4\sqrt{2}|\rho_{23}|=2\sqrt{2}C$, so such a state provides the minimal $S_+$ for a given concurrence $C$. [@ver02; @note] The time $\tau_B$ after which the Bell inequality is no longer violated \[$S_+(\tau_B)=2$\] falls either into the first or second region, because $S_+(t_2)<2$ \[it is interesting to note that in the third region $S_+(t)$ passes through a minimum and then increases up to $S_+ \rightarrow 2$ at $t\rightarrow \infty$\]. The time $\tau_B$ can be easily calculated if $S_+(t_1)>2$, so that $\tau_B$ falls into the second region and therefore $$\tau_B= (T_2/2) \ln (\sqrt{2}\sin 2\beta).
\label{tau-B-an}$$ This case is realized when pure dephasing is relatively weak: $T_1/T_2\le\ln(\sqrt{2} \sin 2\beta)/[2\ln(4-2\sqrt{2})]$; since $T_1/T_2\geq 1/2$, it also requires $\sin 2\beta \ge 2\sqrt{2}-2$. \[For $T_1/T_2=1/2$ Eq. (\[tau-B-an\]) has been obtained in Ref..\] Notice that $\tau_B$ in Eq. (\[tau-B-an\]) corresponds to the condition $C=1/\sqrt{2}$, while in general $\tau_B$ corresponds to $C\leq 1/\sqrt{2}$ because of the inequality[@ver02] $S_+\geq 2\sqrt{2}C$. ![The two-qubit entanglement duration $\tau_E$ in units of the dephasing time $T_2$ for the maximally entangled initial state ($\beta=\pi/4$) and several values of the temperature $\theta$. Dashed lines correspond to Eq. (\[tau-E\]). []{data-label="f1"}](bellent_f1){width="7.8cm"} Now let us focus on calculating the duration $\tau_E$ of entanglement survival, the duration $\tau_B$ of the Bell inequality violation, and their ratio $\tau_E/\tau_B$ at non-zero temperature. For simplicity we limit ourselves to the case of the maximally entangled initial state ($\beta =\pi /4$), and we also assume equal energy relaxation times, level splittings, and temperatures for both qubits: $T_1^a=T_1^b\equiv T_1$, $\Delta_a=\Delta_b\equiv \Delta$, and $\theta_a=\theta_b\equiv \theta$ (we do not need to assume equal dephasing, since it can be characterized by only one parameter $T_2$). As follows from Eq. , the entanglement duration $\tau_E$ can be calculated numerically using the equation $|\rho_{23}|=\sqrt{\rho_{11}\rho_{44}}$. Figure \[f1\] shows $\tau_E$ (normalized by $T_2$) as a function of the ratio $T_1/T_2$ for several values of the normalized inverse temperature $\Delta/\theta$. As we see, in a typical experimental regime [@ste06] when $\Delta /\theta \sim 10$, the ratio $\tau_E/T_2$ does not depend much on $T_1/T_2$ when $T_1$ is larger but comparable to $T_2$ (which is also typical experimentally).
In other words, $\tau_E$ is approximately proportional to $T_2$, and in this regime $\tau_E$ also has a roughly inverse dependence on temperature \[see Eq. (\[tau-E\]) below\]. Analytical formulas for $\tau_E$ can be easily obtained in the limiting cases. In the absence of pure dephasing ($T_1/T_2=1/2$) and at low temperature ($\theta \ll \Delta$) we find $\tau_E/T_2 \approx \Delta/2\theta -\ln(2\sqrt{2}+2)/2 \approx \Delta/2\theta -0.79$, while at high temperature ($\theta \gg \Delta$) we have $\tau_E/T_2 \approx \ln(\sqrt{2}+1)/2 \approx 0.44$. In the case of strong dephasing ($T_1/T_2\gg 1$) we find (neglecting some corrections) $\tau_E/T_2\simeq\Delta/(4\theta )+\ln(T_1/T_2)/2$. However, these asymptotic formulas are not very relevant to a typical experimental situation with phase qubits,[@ste06] in which $T_1\agt T_2$. As another way to approximate $\tau_E$ we have chosen the value at the minimum of the curves in Fig. \[f1\]; this minimum occurs at ratios $T_1/T_2$ somewhat close to the experimental values, and the result is naturally not very sensitive to $T_1/T_2$ over a significantly broad range. For sufficiently small temperatures ($\Delta/\theta >2$) we have obtained the approximation $(\tau_E/T_2)_{\rm min} \approx \Delta/4\theta +\ln(3^{3/4}/2)\approx\Delta/4\theta +0.13$ and found that the minimum occurs at $T_1/T_2 \approx (\tau_E/T_2)_{\rm min}/\ln3$. So, as the crudest approximation in the experimentally-relevant regime ($\theta / \Delta \sim 10^{-1}$, $T_1/T_2\agt 1$), the two-qubit entanglement lasts for (see dashed lines in Fig. \[f1\]) $$\tau_E \simeq T_2 \Delta /4\theta . \label{tau-E}$$ ![The duration $\tau_B$ of the Bell inequality violation (assuming $\beta =\pi/4$) for $\Delta/\theta =15$ (solid line) and $\Delta/\theta =0$ (dotted line). The dashed line: $\tau_B/T_2=\ln[T_1/(4\tau_B)]/4$. []{data-label="f3"}](bellent_f3){width="7.9cm"} The duration $\tau_B$ of the Bell inequality violation is calculated using Eq.  as $S_+(\tau_B)=2$.
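The root $S_+(\tau_B)=2$ can be found by simple bisection. The sketch below treats the zero-temperature case with $T_1^a=T_1^b\equiv T_1$ discussed above, where $\rho_{11}=0$, $\rho_{44}=1-e^{-t/T_1}$ and $\rho_{23}=e^{-2t/T_2}\sin(2\beta)/2$, and checks the weak-dephasing closed form of Eq. (\[tau-B-an\]) (time in units of $T_2$; parameter values illustrative):

```python
import numpy as np

def s_plus(t, T1, T2, beta):
    """Eq. (5) for the decayed odd state at zero temperature, equal T1's:
    rho11 = 0, rho44 = 1 - exp(-t/T1), rho23 = exp(-2t/T2)*sin(2*beta)/2."""
    r23 = np.exp(-2 * t / T2) * np.sin(2 * beta) / 2
    tzz = 1 - 2 * np.exp(-t / T1)     # rho11 + rho44 - rho22 - rho33
    return 2 * max(2 * np.sqrt(2) * r23, np.sqrt(4 * r23**2 + tzz**2))

def tau_b(T1, T2, beta):
    """Bisection for S_+(tau_B) = 2 (S_+ > 2 before tau_B, < 2 after)."""
    lo, hi = 0.0, 10 * T2
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if s_plus(mid, T1, T2, beta) > 2 else (lo, mid)
    return lo

beta, T1, T2 = np.pi / 4, 0.6, 1.0    # weak-pure-dephasing regime (illustrative)
tau_numeric = tau_b(T1, T2, beta)
tau_closed = (T2 / 2) * np.log(np.sqrt(2) * np.sin(2 * beta))   # Eq. (tau-B-an)
```

In this regime the numerical root coincides with the closed form, since the crossing of $S_+=2$ occurs while the first term of Eq. (\[5\]) dominates.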
Solid and dotted lines in Fig. \[f3\] show numerical results for $\tau_B$ (in units of $T_2$) as a function of the ratio $T_1/T_2$ for low and high temperatures: $\Delta/\theta =15$ and 0. The curves are almost indistinguishable, which means that $\tau_B$ is practically independent of the temperature for fixed $T_1$ and $T_2$. Notice that each curve consists of a constant (horizontal) part and an increasing part, which correspond to the two terms in Eq. . It can be shown that at zero temperature the horizontal part is realized at $T_1/T_2\le\ln2/[4\ln(4-2\sqrt{2})]\approx 1.1$, while at high temperature ($\theta \gg \Delta$) it is realized at $ T_1/T_2\le 1$. The horizontal part corresponds to the first term in Eq.  dominating at $\tau_B$: $S_+=2\sqrt{2}\exp(-2 t/T_2)$, so at sufficiently weak pure dephasing we have $\tau_B/T_2=\ln2/4\approx 0.17$ \[see also Eq. (\[tau-B-an\])\]. In the opposite case of strong pure dephasing ($T_1/T_2\gg 1$) the duration $\tau_B$ is the solution of the equation $\tau_B/T_2=\ln[T_1/(4\tau_B)]/4$ (dashed line in Fig. \[f3\]), so roughly $\tau_B/T_2\simeq\ln(T_1/T_2)/4$ (dot-dashed line in Fig. \[f3\]). Combining these results, we get a crude estimate: $$\tau_B \simeq T_2 \max\{0.17, \, 0.25 \ln (T_1/T_2)\}. \label{tau-B}$$ ![The ratio $\tau_E/\tau_B$ for the maximally entangled initial state and several values of the temperature $\theta$. []{data-label="f4"}](bellent_f4){width="7.9cm"} Figure \[f4\] shows the ratio $\tau_E/\tau_B$ of the survival durations of entanglement and the Bell inequality violation. We see that the ratio $\tau_E/\tau_B$ increases with decreasing temperature and decreasing pure-dephasing contribution, which are both the desired experimental regimes. (This rule does not work in the experimentally irrelevant regime $\theta \gg \Delta$ and $T_1<T_2$.) Notice that the kinks on the curves correspond to the change of the dominating term in Eq. .
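Plugging the phase-qubit parameters quoted below ($\Delta/2\pi\hbar\simeq 6$ GHz, $\theta\simeq 50$ mK, $T_1\simeq 450$ ns, $T_2\simeq 300$ ns [@ste06]) into the crude estimates (\[tau-E\]) and (\[tau-B\]) reproduces the order of magnitude of the survival times; the more precise values given in the text come from the full numerics.

```python
import numpy as np

h_pl = 6.62607015e-34        # Planck constant, J*s
k_B = 1.380649e-23           # Boltzmann constant, J/K

Delta = h_pl * 6e9           # level splitting: Delta/(2*pi*hbar) = 6 GHz
theta = k_B * 0.050          # temperature 50 mK, in energy units
T1, T2 = 450e-9, 300e-9      # relaxation and dephasing times, s

ratio = Delta / theta                             # Delta/theta ~ 6
tau_E = T2 * ratio / 4                            # Eq. (tau-E): ~ 0.43 us
tau_B = T2 * max(0.17, 0.25 * np.log(T1 / T2))    # Eq. (tau-B): ~ 51 ns
```

With these inputs $\tau_E/\tau_B$ comes out close to the factor of $\sim 8$ quoted in the text.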
In the absence of pure dephasing ($T_1/T_2=1/2$) the low-temperature result ($\theta \ll \Delta$) is $\tau_E/\tau_B\approx (2/\ln2) [\Delta/\theta-\ln(2\sqrt{2}+2)]$, while at $\theta \gg \Delta$ the ratio is $\tau_E/\tau_B\approx 2\ln(\sqrt{2}+1)/\ln2\approx 2.5$. In the limit of strong pure dephasing ($T_1/T_2\gg 1$) the asymptotic result is $\tau_E/\tau_B\approx 2+(\Delta/\theta)/\ln(T_1/T_2)$ (as we see, $\tau_E > 2\tau_B$ for any parameters). In the experimentally relevant regime when $\theta /\Delta \sim 10^{-1}$ and $T_1/T_2\agt 1$, the ratio can be obtained from Eqs. (\[tau-E\]) and (\[tau-B\]), giving a crude estimate $\tau_E/\tau_B \simeq (\Delta /\theta)\min \{1.5,\, 1/\ln(T_1/T_2)\}$. For an experimental estimate let us choose parameters typical for the best present-day experiments with superconducting phase qubits: [@ste06] $\Delta/2\pi\hbar\simeq6$ GHz, $\theta\simeq 50$ mK, $T_1\simeq 450$ ns, $T_2\simeq300$ ns. Then $\Delta/\theta\simeq 6$, $T_1/T_2\simeq 1.5$, and we obtain $\tau_E \simeq 470$ ns, $\tau_B \simeq 60$ ns, and $\tau_E/\tau_B\simeq 7.7$. In conclusion, we have found that in the Hilbert-Schmidt metric only 1.085% of entangled states violate the Bell inequality, thus explaining why entanglement can last for a significantly longer time ($\tau_E$) than the Bell inequality violation ($\tau_B$). Using the technique of Kraus operators, we have considered local decoherence due to dephasing and energy relaxation at finite temperature, and for this model calculated $\tau_E$, $\tau_B$, and their ratio $\tau_E/\tau_B$. The work was supported by NSA and DTO under ARO grant W911NF-04-1-0204. [99]{} M. A. Nielsen and I. L. Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge Univ. Press, 2000). J. S. Bell, Physics [**1**]{}, 195 (1964). J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969). V. Capasso, D. Fortunato, and F. Selleri, Int. J. Theor. Phys. [**7**]{}, 319 (1973); N. Gisin, Phys. Lett.
A [**154**]{}, 201 (1991). R. F. Werner, Phys. Rev. A [**40**]{}, 4277 (1989). S. Popescu, Phys. Rev. Lett. [**72**]{}, 797 (1994); ibid. [**74**]{}, 2619 (1995); N. Gisin, Phys. Lett. A [**210**]{}, 151 (1996). M. Steffen, M. Ansmann, R. C. Bialczak, N. Katz, E. Lucero, R. McDermott, M. Neeley, E. M. Weig, A. N. Cleland, and J. M. Martinis, Science [**313**]{}, 1423 (2006); M. Ansmann et al., Bulletin of APS [**52**]{}, Abstract L33.00005 (2007). W. K. Wootters, Phys. Rev. Lett. [**80**]{}, 2245 (1998). C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wooters, Phys. Rev. A [**54**]{}, 3824 (1996). R. Horodecki, P. Horodecki, M. Horodecki, Phys. Lett. A [**200**]{}, 340 (1995). F. Verstraete and M. M. Wolf, Phys. Rev. Lett. [**89**]{}, 170401 (2002). P. Samuelsson, E. V. Sukhorukov, and M. Büttiker, Phys. Rev. Lett. [**91**]{}, 157002 (2003). C. W. J. Beenakker, C. Emary, M. Kindermann, and J. L. van Velsen, Phys. Rev. Lett. [**91**]{}, 147901 (2003). L. Jakóbczyk and A. Jamróz, Phys. Lett. A [**333**]{}, 35 (2004); ibid. [**318**]{}, 318 (2003). S.-B. Li and J.-B. Xu, Phys. Rev. A [**72**]{}, 022332 (2005). A. Jamróz, J. Phys. A [**39**]{}, 7727 (2006). G. Burkard and D. Loss, Phys. Rev. Lett. [**91**]{}, 087903 (2003). T. Yu and J. H. Eberly, Phys. Rev. Lett. [**93**]{}, 140404 (2004). D. Tolkunov, V. Privman, and P. K. Aravind, Phys. Rev. A [**71**]{}, 060308(R) (2005). T. Yu and J. H. Eberly, Phys. Rev. Lett. [**97**]{}, 140403 (2006). Z. Gedik, Solid State Comm. [**138**]{}, 82 (2006). M. F. Santos, P. Milman, L. Davidovich, and N. Zagury, Phys. Rev. A [**73**]{}, 040305(R) (2006). L. F. Wei, Y.-X. Liu, M. J. Storcz, and F. Nori, Phys. Rev. A [**73**]{}, 052307 (2006). K. Życzkowski and H.-J. Sommers, J. Phys. A [**34**]{}, 7111 (2001). 
Entanglement is checked by the fast method based on the sign of the determinant of the partially transposed state [@per96] $\tilde{\rho}$, using the fact that for an entangled state $\rho$ all eigenvalues of $\tilde{\rho}$ are non-zero, and exactly one of them is negative. [@san98] A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996); M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996). A. Sanpera, R. Tarrach, and G. Vidal, Phys. Rev. A [**58**]{}, 826 (1998). P. B. Slater, Phys. Rev. A [**71**]{}, 052319 (2005). C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, [*Atom-Photon Interactions*]{} (Wiley, N.Y., 1992), Ch. IV. A. G. Kofman and A. N. Korotkov, arXiv:0707.0036. Notice that when $T_1^a \neq T_1^b$, the second term in Eq.(\[5\]) is maximized for a non-maximally entangled state, $\beta \neq \pi/4$, though the benefit is not significant if we need $S_+\agt 2.2$. The statement in Ref.  that any mixed state with $S_+=2\sqrt{2}C>2$ is maximally entangled, is incorrect (here maximum entanglement means that $C$ cannot be increased by any two-qubit unitary transformation). As a counterexample, consider the states $\rho = f |\Psi \rangle \langle \Psi | +(1-f) |00 \rangle \langle 00 |$, produced from the initial state due to zero-temperature energy relaxation ($T_2=2T_1$, $\theta=0$, $f=e^{-t/T_1}$). Any two such states with the same $f$ but different initial parameter $\beta$ can obviously be connected by a unitary transformation (involving only the subspace spanned by $|01\rangle $ and $|10\rangle$), while they have different concurrence $C$ given by Eq. (\[4\]). Finally, as follows from our analysis, there is a finite range of parameters $f$ and $\beta$, in which $S_+=2\sqrt{2}C$; in this range the concurrence can still be varied by unitary transformations varying $\beta$, contradicting the statement of Ref. .
--- abstract: 'The spin wave spectra of multiferroic BiFeO$_3$ films is calculated using a phenomenological Landau theory that includes magnetostatic effects. The lowest frequency magnon dispersion is shown to be quite sensitive to the angle between spin wave propagation vector and the Néel moment. Since electrical switching of the Néel moment has recently been demonstrated in this material, the sensitivity of the magnon dispersion permits direct electrical switching of spin wave propagation. This effect can be used to construct spin wave logical gates without current pulses, potentially allowing reduced power dissipation per logical operation.' author: - Rogerio de Sousa - 'Joel E. Moore' title: 'Electrical control of magnon propagation in multiferroic BiFeO$_3$ films' --- One of the challenges of current research in microelectronic devices is the development of a fast logic switch with minimal power dissipation per cycle. Devices based on spin wave interference [@kostylev05; @khitun05] may provide an interesting alternative to conventional semiconductor gates by minimizing the need for current pulses. Recently, a spin wave NOT gate was demonstrated experimentally [@kostylev05]. The device consisted of a current-controlled phase shifter made by a ferromagnetic (FM) film on top of a copper wire. The application of a current along the wire creates a local magnetic field on the film, leading to a phase shift of its spin waves. In this letter we predict an effect that allows the design of similar spin wave devices without the need for external current pulses or applied time-dependent magnetic fields. We show that the dispersion of the lowest frequency spin-wave branch of a canted antiferromagnet depends strongly on the direction of spin wave propagation. This occurs because of the long-ranged (dipolar) interactions of the magnetic excitations, which creates a gap for spin waves propagating with non-zero projection along the Néel axis. 
This effect allows electrical control of spin waves in multiferroic materials that possess simultaneous ferroelectric (FE) and canted antiferromagnetic (AFM) order. Our model is applicable to the prominent room temperature multiferroic BiFeO$_3$ (BFO) [@wang03]. BFO films have homogeneous AFM order [@bai05; @bea07], in contrast to the inhomogeneous (cycloidal) AFM order present in bulk BFO [@sosnowska82]. The canted AFM order in BFO films is constrained to lie in the plane perpendicular to the FE polarization $\bm{P}$. Recently, Zhao [*et al.*]{} [@zhao06] demonstrated room temperature switching of the Néel moment $\bm{L}=\bm{M}_1-\bm{M}_2$ in BFO films after the orientation of the ferroelectric moment was changed electrically. As we show here, spin wave propagation along $\bm{P}$ has a high group velocity ($\sim 10^5$ cm/s), in contrast to spin wave propagation along $\bm{L}$, which has zero group velocity at $\bm{k}=0$. Hence switching $\bm{P}$ for a fixed spin wave propagation direction allows electrical control of the spin wave dispersion, which, assuming some loss rate, will effectively stop long-wavelength spin waves such as those created in [@khitun05]. Although a theory of AFM resonance for canted magnets was developed some time ago[@herrmann63; @tilley82], we are not aware of calculations of spin wave dispersion including magnetostatic effects. The electromagnon spectrum for a ferromagnet with quadratic magnetoelectric coupling was discussed without magnetostatic effects in Ref. , and with magnetostatic effects in Ref. . Recently we developed a theory of spin wave dispersion in bulk BFO, a cycloidal (inhomogeneous) multiferroic [@desousa07]. The lowest frequency spin wave mode was shown to depend sensitively on the $\bm{P}$ orientation because of the inhomogeneous nature of the antiferromagnetic order.
Interestingly, we show here that BFO films with a homogeneous order display a similar effect, albeit due to a completely different physical reason: the magnetostatic effect. Our calculation is based on a dynamical Ginzburg-Landau theory for the coupled magnetic and ferroelectric orders. We assume a model free energy given by $$\begin{aligned} F &=& \frac{a P_{z}^{2}}{2} + \frac{u P_{z}^{4}}{4}+ \frac{a_{\perp}(P_{x}^{2}+P_{y}^{2})}{2}-\bm{P}\cdot \bm{E}\nonumber\\ &&+ \sum_{j=1,2}\left[ \frac{r\bm{M}_{j}^{2}}{2}+\frac{G\bm{M}_{j}^{4}}{4}+ \frac{\alpha \sum_i\left(\nabla M_{ji}\right)^{2}}{2}\right] \nonumber\\&&+\left(J_0+\eta P^2\right)\bm{M}_{1}\cdot \bm{M}_{2}+ d\bm{P} \cdot \bm{M}_{1}\times \bm{M}_{2}. \label{f}\end{aligned}$$ Here $\bm{M}_j$ is the magnetization of one of the two sublattices $j=1,2$, and $\bm{P}$ is a ferroelectric polarization. The coordinate system is such that $\hat{\bm{z}}$ points along one of the cubic (111) directions in BFO. The exchange interaction $J=\left(J_0+\eta P^2\right)$ is assumed to have a quadratic dependence on $P$ due to magnetostriction. The last contribution to Eq. (\[f\]) is a Dzyaloshinskii-Moriya (DM) interaction, with a DM vector given by $d\bm{P}$. Note that this changes sign under inversion symmetry, hence Eq. (\[f\]) is invariant under spatial inversion at a point in between the two sublattices. ![Spin and polarization waves in a canted multiferroic, such as a BiFeO$_3$ film. The sublattice magnetizations $\bm{M}_1$, $\bm{M}_2$ lie in the plane perpendicular to the FE polarization $\bm{P}$. Fluctuations $\delta\bm{P}$ denote polar phonons associated to vibrations of the FE moment. (a) Depicts the low frequency (soft) spin wave mode. (b) High-frequency (gapped) mode. The dots in the circle denote the position of the spins one quarter cycle later. The soft mode leaves the canting angle $\beta$ invariant, while the gapped mode modulates $\beta$. 
(c) Coordinate system.[]{data-label="fig1"}](canted_waves){width="3in"} The design of multiferroic materials with enhanced couplings of this type was recently discussed [@fennie07]. Although BiFeO$_3$ has no inversion center, its crystal structure is quite close to an inversion-symmetric one, and the above free energy is derived by assuming that both the DM vector and polarization $\bm{P}$ are associated with the same distortion of the lattice. An alternative model for BiFeO$_3$ assumes the DM vector to be independent of $\bm{P}$ [@ederer05], i.e., requires Eq. (\[f\]) to be invariant under spatial inversion at a point on top of one of the magnetic ions. Later we will discuss the implications of this alternative assumption for the electromagnon spectra, and show how optical experiments may determine which model is appropriate. The free energy is minimized by a homogeneous ferroelectric and antiferromagnetic state, with FE moment (at $\bm{E}=0$) given by $\bm{P}=P_0\hat{\bm{z}}$, with $P_{0}^{2}=\frac{-a}{u}+{\cal O}(d^3)$. The magnetic moments are perpendicular to $\bm{P}$, $$\begin{aligned} \bm{M}_{01}&=&M_0 \left(\sin{\beta} \hat{\bm{x}} +\cos{\beta}\hat{\bm{y}}\right),\\ \bm{M}_{02}&=&-M_0 \left(-\sin{\beta} \hat{\bm{x}} +\cos{\beta}\hat{\bm{y}}\right),\end{aligned}$$ with canting angle $\beta$ and magnetization $M_0$ determined by $\tan{\beta}=(dP_0)/(\tilde{J}+J)$, and $M_{0}^{2}=(\tilde{J}-r)/G$, with $\tilde{J}^{2}=(dP_0)^{2}+J^{2}$. Below the Curie and Néel temperatures we have $a<0$ and $J>-r>0$, respectively. Small oscillations away from the ground state are described by the Landau-Lifshitz equations, $$\frac{\partial \bm{M}_i}{\partial t}=\gamma \bm{M}_{i}\times \frac{\delta F}{\delta \bm{M}_{i}},\label{ll}$$ where $\gamma$ is the gyromagnetic ratio. A corresponding set of equations is written for $\bm{P}$ in order to describe the high frequency optical phonon spectra. 
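As a quick sanity check on the canted ground state, one can verify numerically that the quoted canting angle is the stationary point of the angular part of the free energy. The sketch below uses arbitrary dimensionless couplings (assumed illustrative values, not fitted BFO parameters):

```python
import math

# Minimizing the beta-dependent part of Eq. (f),
#   E(beta) = -J*cos(2*beta) - d*P0*sin(2*beta)   (per M0^2),
# gives tan(2*beta) = d*P0/J, which is equivalent to the quoted
# tan(beta) = d*P0/(Jtilde + J) with Jtilde = sqrt((d*P0)^2 + J^2).
J, dP0 = 1.0, 0.1                   # assumed dimensionless couplings
Jtilde = math.hypot(dP0, J)
beta = math.atan2(dP0, Jtilde + J)  # canting angle from the quoted formula

# dE/dbeta evaluated at this angle should vanish:
residual = 2.0 * J * math.sin(2 * beta) - 2.0 * dP0 * math.cos(2 * beta)
```

The half-angle identity connects the two forms: with $\tan\beta = dP_0/(\tilde{J}+J)$ one recovers $\tan 2\beta = dP_0/J$ exactly.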
Keeping only the lowest order in the deviations $\delta \bm{M}_i$ and $\delta\bm{P}$, and focusing on the low frequency magnetic oscillations, we seek plane wave solutions of the type $$\bm{M}_i=\bm{M}_{0i}+\delta \bm{M}_i \textrm{e}^{i(\bm{k}\cdot \bm{r}-\omega t)}, \; \bm{P}=P_0 \hat{\bm{z}}+\delta\bm{P}\textrm{e}^{i(\bm{k}\cdot \bm{r}-\omega t)}. \label{pw}$$ From Eq. (\[ll\]) we see that $\delta\bm{M}_i$ must be perpendicular to $\bm{M}_i$. Hence we may reduce the number of variables by using a parametrization for $\delta\bm{M}_{i}$ shown in Fig. 1(c), with further definitions $Y=y_1+y_2$, $Z=z_1+z_2$, $y=y_1-y_2$, $z=z_1-z_2$. From Maxwell’s equations we see that any macroscopic wave producing nonzero fluctuations of $\delta\bm{M}=\delta\bm{M}_1+\delta\bm{M}_{2}$ must induce an AC magnetic $\bm{h}$ field. In the magnetostatic approximation this is obtained from $\nabla\cdot \bm{h}=-4\pi\nabla\cdot\delta \bm{M}$ and $\nabla\times \bm{h}\approx 0$. The latter assumes the time variations are negligible in Maxwell’s equations, which is a good approximation for spin waves provided $k\gg \omega_{\rm{AFM}}/c$, with $c$ the speed of light. For a canted AFM this is a good approximation provided the domain sizes are smaller than a few centimeters. The self-induced field is therefore $$\bm{h}=-4\pi \left(\delta \bm{M}\cdot \hat{\bm{n}}\right)\hat{\bm{n}}, \label{sif}$$ where $\hat{\bm{n}}$ is the propagation direction for the spin waves, $\bm{k}=k\hat{\bm{n}}$. The self-induced field contributes a term $2\pi (\delta \bm{M}\cdot \hat{\bm{n}})^2$ to the free energy, tending to increase the spin wave frequencies whenever the quantity $\delta\bm{M} = (-\cos{(\beta)} y, \sin{(\beta)}Y, Z)$ has a finite projection along $\hat{\bm{n}}$. In the magnetostatic approximation the linearized equations of motion are obtained by substituting Eqs. (\[pw\])-(\[sif\]) into Eq. (\[ll\]), and using the explicit expressions for $\tan{\beta}$ and $M_0$. 
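For a plane wave, the magnetostatic conditions reduce to the algebraic constraints $\bm{k}\cdot(\bm{h}+4\pi\delta\bm{M})=0$ and $\bm{k}\times\bm{h}=0$, both of which are satisfied by Eq. (\[sif\]). A minimal numerical check with an arbitrary (illustrative) fluctuation vector:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 1.0, 0.0)          # propagation direction, k = k*n
dM = (0.3, -0.2, 0.5)        # arbitrary magnetization fluctuation (illustrative)
h = tuple(-4.0 * math.pi * dot(dM, n) * ni for ni in n)   # Eq. (sif)

# k.(h + 4*pi*dM) = 0 expresses div(h) = -4*pi div(dM) for plane waves;
# k x h = 0 holds because h is parallel to n:
div_condition = dot(n, tuple(hi + 4.0 * math.pi * mi for hi, mi in zip(h, dM)))
curl_condition = cross(n, h)
```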
After some algebra the Landau-Lifshitz equations become $$\begin{aligned} \!\!\!\!\!\!\!\!\!\!-i\tilde{\omega}Y +(\tilde{J}+J+\alpha k^2)Z - 2h_z &=& -2 d' \cos{\beta} \delta P_x, \label{fc}\\ \!\!\!\!\!\!\!\!\!\!\alpha k^2 Y +i\tilde{\omega}Z- 2\sin{\beta} h_y &=&-4\eta' \sin{2\beta} \delta P_z,\label{sw1}\\ \!\!\!\!\!\!\!\!\!\!i\tilde{\omega}z +(2\tilde{J}+\alpha k^2)y +2\cos{\beta}h_x &=& -2d' \cos{2\beta}\delta P_z,\label{sw2}\\ \!\!\!\!\!\!\!\!\!\!(\tilde{J}-J+\alpha k^2) z -i\tilde{\omega}y &=& -2d' \sin{\beta}\delta P_y,\label{sw3}\end{aligned}$$ where we defined $\tilde{\omega}=\omega/(\gamma M_0)$, $d'=dM_0$, and $\eta'=\eta P_0M_0$. Consider the pure spin waves in the limit $\delta\bm{P}\rightarrow 0$. This case may be solved analytically, because the system of four equations decouples into two independent sets of equations on the variables $(Y,Z)$ and $(y,z)$. The former is a low frequency mode, because it corresponds to spin vibrations that leave the canting angle $\beta$ unchanged \[the spins vibrate in phase, see Fig. 1(a)\]. The latter corresponds to spin vibrations a half-cycle out of phase, leading to modulations of $\beta$, and a high frequency gap equal to the DM interaction $dP_0$ \[Fig. 1(b)\]. Neglecting terms of second order in $(dP_0)/J$, we obtain an analytical expression for the low frequency mode, $$\begin{aligned} \tilde{\omega}^{2}(\bm{k})&\approx& 2J \left( 1+\frac{4\pi}{J}n_{z}^{2}\right) \alpha k^{2} + \frac{4\pi (dP_0)^{2}}{J}n_{y}^{2}. \label{soft}\end{aligned}$$ This dispersion is anisotropic with respect to the polarization ($\hat{\bm{z}}$) axis: For $\bm{k}$ in the $x$-$z$ plane, we have a truly gapless mode to all orders in $dP_0/J$, with $\tilde{\omega}\approx \sqrt{2J\alpha}k$. For $\bm{k}$ along $\hat{\bm{y}}$ we find a gap equal to a fraction of the DM interaction, $\approx \sqrt{4\pi/J}(dP_0)$. This gap is a result of the *magnetostatic correction in the presence of DM weak ferromagnetism*. 
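The anisotropy of Eq. (\[soft\]) can be made concrete in a few lines of code; the parameter values below are illustrative dimensionless choices, not the BFO constants used for Fig. 2:

```python
import math

def soft_mode_w2(k, n_y, n_z, J=1.0, alpha=1.0, dP0=0.05):
    """Dimensionless soft-mode dispersion of Eq. (soft):
    w~^2 = 2J (1 + (4 pi/J) n_z^2) alpha k^2 + (4 pi/J) (d P0)^2 n_y^2.
    J, alpha, dP0 are assumed dimensionless values for illustration."""
    return (2.0 * J * (1.0 + (4.0 * math.pi / J) * n_z**2) * alpha * k**2
            + (4.0 * math.pi / J) * (dP0**2) * n_y**2)

# Propagation in the x-z plane (n_y = 0): gapless at k = 0 ...
w2_xz = soft_mode_w2(0.0, n_y=0.0, n_z=1.0)
# ... while propagation along y acquires a gap ~ sqrt(4 pi / J) * d P0:
gap_y = math.sqrt(soft_mode_w2(0.0, n_y=1.0, n_z=0.0))
```

Sweeping the propagation angle between these two limits reproduces the trend of the curves in Fig. 2(a).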
![(a) Low frequency magnetostatic spin wave dispersion for a BiFeO$_3$ film, for propagation angles $\theta=0^{\circ}$ (propagation along the electric polarization direction $\hat{\bm{z}}$), $10^{\circ}$, $30^{\circ}$, $60^{\circ}$, $90^{\circ}$ (propagation along the Néel direction $\hat{\bm{y}}$). The high frequency mode (not shown) has a gap equal to the Dzyaloshinskii-Moriya coupling ($5\times 10^{10}$ rad/s), and is nearly isotropic with respect to the direction of spin wave propagation. (b) Dispersion including electrodynamical effects in the $k<\omega/c$ region. Note the relationship between the magnetostatic gap in (a) and the photon-magnon anticrossing in (b).[]{data-label="fig2"}](spectra_film){width="3in"} The physical origin of the magnetostatic gap is found by noting that $\delta \bm{M}$ for a pure soft mode $Y,Z\neq 0, y,z=0$ as $k\rightarrow 0$ is approximately given by a rigid rotation around the $\hat{\bm{z}}$ axis. In this limit, $\delta \bm{M}$ points exclusively along $\hat{\bm{y}}$, hence only propagation with some projection in this direction leads to a gap. A small anisotropy is also found for the high frequency mode ($y,z\neq 0,Y=Z=0$). For example, when $\hat{\bm{k}}\parallel \hat{\bm{x}}$ the high frequency mode gap increases to $dP_0\sqrt{1+4\pi/J}$. We calculated the coupled spin and polarization wave spectra by solving the full set of Eqs. (\[fc\])-(\[sw3\]) numerically, with parameters extracted from experiment [@wang03; @bai05; @bea07]. The low frequency spin wave branch within the magnetostatic approximation is shown in Fig. 2(a). The inset \[Fig. 2(b)\] shows the low frequency spectra beyond the magnetostatic approximation, including electrodynamical corrections (for numerical convenience the speed of light was rescaled to $10^6$ cm/s). Note the anticrossing of the spin wave modes with the photon dispersion $\omega=ck$, and the orientation dependence of the photon gap. 
As expected, we see that the strict $k\rightarrow 0$ limit has no orientation dependence. We emphasize that the latter low $k$ limit is only observable for domain sizes of one cm or larger. The magnetostatic propagation anisotropy discussed in this work arises precisely because the spin waves travel with finite $k>\omega/c$. Finally, we discuss the selection rules for the excitation or detection of magnon modes using an AC electric field. From inspecting the right hand side of Eqs. (\[fc\]) and (\[sw1\]) we see that the low frequency magnon may be excited electrically by the application of an AC field in the $x$ or $z$ direction. The former has a strong response in the presence of the linear magnetoelectric effect ($d\neq 0$), while the latter has a weak response ($\propto \sin{\beta}$) due to magnetostriction. The high frequency magnon $(y,z)$ has a dielectric response only in the presence of the linear magnetoelectric effect, as seen in Eqs. (\[sw2\]) and (\[sw3\]). This mode responds to electric fields in the $y$-$z$ plane, with the $z$ direction response larger by a factor of $\cos{2\beta}/\sin{\beta}\sim 2J/dP_0\gg 1$. The presence or absence of this electromagnon in an optical probe may be used to discern whether the DM vector is linear in $P$ as proposed e.g. in [@zhdanov06] or if it is independent of $P$ as suggested in [@ederer05]. In conclusion, we predicted a magnetostatic gap anisotropy for the propagation of spin waves in a canted antiferromagnet. This effect may allow the electrical switching of magnons in multiferroic materials such as BiFeO$_3$ films. The authors acknowledge useful conversations with J. Orenstein and R. Ramesh. This work was supported by WIN (RdS) and by NSF DMR-0238760 (JEM). [99]{} M.P. Kostylev, A.A. Serga, T. Schneider, B. Leven, and B. Hillebrands, , 153501 (2005). A. Khitun and K. L. Wang, Superlattices and Microstructures [**38**]{}, 184 (2005). J. Wang, J. B. Neaton, H. Zheng, V. Nagarajan, S. B. Ogale, B. Liu, D. 
Viehland, V. Vaithyanathan, D. G. Schlom, U. V. Waghmare, N. A. Spaldin, K. M. Rabe, M. Wuttig, and R. Ramesh, Science [**299**]{}, 1719 (2003). F. Bai, J. Wang, M. Wuttig, J. Li, and N. Wang, Appl. Phys. Lett. [**86**]{}, 032511 (2005). H. Béa, M. Bibes, S. Petit, J. Kreisel, and A. Barthélémy, Phil. Mag. Lett. [**87**]{}, 165 (2007). I. Sosnowska, T. Peterlin-Neumaier, and E. Steichele, J. Phys. C: Solid State Phys. [**15**]{}, 4835 (1982). T. Zhao, A. Scholl, F. Zavaliche, K. Lee, M. Barry, A. Doran, M. P. Cruz, Y. H. Chu, C. Ederer, N. A. Spaldin, R. R. Das, D. M. Kim, S. H. Baek, C. B. Eom, R. Ramesh, Nature Materials [**5**]{}, 823 (2006). G.F. Herrmann, J. Phys. Chem. Solids [**24**]{}, 597 (1963). D.R. Tilley and J.F. Scott, , 3251 (1982). V.G. Bar’yakhtar and I.E. Chupis, Sov. Phys. Solid State [**10**]{}, 2818 (1969); [*ibid*]{} [**11**]{}, 2628 (1970). G.A. Maugin, , 4608 (1981). R. de Sousa and J.E. Moore (preprint), arXiv:0706.1260 (2007). C. Fennie (preprint), arXiv:0711.1331 (2007). C. Ederer and N.A. Spaldin, , 060401(R) (2005). A.G. Zhdanov, A.K. Zvezdin, A.P. Pyatakov, T.B. Kosykh, and D. Viehland, Phys. Solid State [**48**]{}, 88 (2006).
--- abstract: 'We present a new replay-based method of continual classification learning that we term “conditional replay”, which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term “marginal replay”) that generates samples independently of their class and assigns labels in a separate step. The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and FashionMNIST data, and compare to the regularization-based *elastic weight consolidation* (EWC) method [@Kirkpatrick2016; @shin2017continual].' author: - Timothée Lesort - Alexander Gepperth - Andrei Stoian - David Filliat bibliography: - 'samples.bib' - 'lll.bib' title: '**Marginal Replay vs Conditional Replay for Continual Learning** ' --- Introduction ============ ![Left: The problem setting of continual learning as investigated in this article. DNN models are trained one after the other on a sequence of sub-tasks (of which three are shown here), and are continuously evaluated on a test set consisting of the union of all sub-task test sets. This gives rise to results as shown exemplarily on the right-hand side of the figure, i.e., plots of test set accuracy over time for different models, where boundaries between sub-tasks (5 in this case) are indicated by vertical lines. \[fig:my\_label\] ](Figures/mnist_rotations_all_task_accuracy.png){width="40.00000%"} This contribution is in the context of incremental, continual or lifelong learning, a subject that is gaining increasing attention [@parisi2018continual; @gepperth2016incremental] and for which a variety of different solutions have recently been proposed (see below). 
Briefly put, the problem consists of repeatedly re-training a deep neural network (DNN) model with new sub-tasks, or continual learning tasks (CLTs) (for example, new visual classes), over long time periods, while avoiding the abrupt degradation of previously learned abilities that is known under the term “catastrophic interference” or “catastrophic forgetting” [@gepperthICANN; @french; @gepperth2016incremental]. Please see Fig. \[fig:my\_label\] for a visualization of the problem setting. It has long been known that catastrophic forgetting (CF) is a problem for connectionist models [@french], of which modern DNNs are a specialized instance, but only recently there have been efforts to propose workable solutions to this problem for deep learning models [@lee2017overcoming; @Kirkpatrick2016; @selfless; @DBLP:journals/corr/abs-1805-10784; @3862]. A recent article [@pfuelb2019a] demonstrates empirically that most proposals fail to eliminate CF when common-sense application constraints are imposed (e.g., restricting prior access to data from new sub-tasks, or imposing constant, low memory and execution time requirements). One aspect of the problem seems to be that gradient-based DNN training is greedy, i.e., it tries to optimize all weights in the network to solve the current task only. Previous tasks, which are not represented in the current training data, will naturally be disregarded in this process. While approaches such as [@Kirkpatrick2016; @lee2017overcoming] aim at “protecting” weights that were important for previous tasks, one can approach the problem from the other end and simply include samples from previous tasks in the training process each time a new task is introduced. This is the *generative replay* approach, which is in principle model-agnostic, as it can be performed with a variety of machine learning models such as decision trees, support vector machines (SVMs) or deep neural networks (DNNs). 
It is, however, unfeasible for, e.g., embodied agents or embedded devices performing object recognition to store all samples from all previous sub-tasks. Because of this, generative replay proposes to train an additional machine learning model (the so-called *generator*). Thus, the “essence” of previous tasks comes in the form of trained generator parameters, which usually require far less space than the samples themselves. A downside of this and similar approaches is that the time complexity of adapting to a new task is not constant but depends on the number of preceding tasks that should be replayed. Or, conversely, if continual learning is to be performed at constant time complexity, only a fixed number of samples can be generated, and thus there will be forgetting, although it will not be catastrophic. This article proposes and evaluates a particular method for performing replay using DNNs, termed “conditional replay”, which is similar in spirit to [@shin2017continual] but presents important conceptual improvements (see next section). The main advantage of conditional replay is that samples can be generated conditionally, i.e., based on a provided label. Thus, labels for generated samples need not be inferred in a separate step, as in other replay-based approaches, e.g., [@shin2017continual], which we term *marginal replay* approaches. Since inferring the label of a generated sample inevitably requires the application of a possibly less-than-perfect classifier, avoiding this step conceivably reduces the margin for error in complex continual learning tasks. Contribution {#sec:contr} ------------ The original contributions of this article can be summarized as follows: - **Conditional replay as a method for continual classification learning** We experimentally establish the advantages of conditional replay in the field of continual learning by comparing conditional and marginal replay models on a common set of benchmarks. 
- **Improvement of marginal replay** We furthermore propose an improvement of marginal replay as proposed in [@shin2017continual] by using generative adversarial networks (GANs, see [@goodfellow2014generative]). - **New experimental benchmarks for generative replay strategies** To measure the merit of these proposals, we use two experimental settings that have not been previously considered for benchmarking generative replay: rotations and permutations. In addition, we promote the “10-class-disjoint” task as an important benchmark for continual learning as it is impossible to solve for purely discriminative methods (at no time, samples from different classes are provided for training so no discrimination can happen). - **Comparison of generative replay to EWC** We show the principled advantage that generative replay techniques have with respect to regularization methods like EWC in a “one class per task” setting, which is after all a very common setting in practice and in which discriminatively trained models strongly tend to assign the same class label to every sample regardless of content. [0.15]{} ![image](Samples/rotations/sample_0.png){width="\textwidth"} [0.15]{} ![image](Samples/rotations/sample_1.png){width="\textwidth"} [0.15]{} ![image](Samples/rotations/sample_2.png){width="\textwidth"} [0.15]{} ![image](Samples/rotations/sample_3.png){width="\textwidth"} [0.15]{} ![image](Samples/rotations/sample_4.png){width="\textwidth"} [0.9]{} ![image](Samples/permutations/train_perm.png){width="\textwidth"} \[fig:mnist\_permutation\_train\] Related work ------------ The field of continual learning is growing and has been recently reviewed in, e.g., [@parisi2018continual; @gepperth2016incremental]. 
In the context of neural networks, principal recent approaches include ensemble methods [@ren2017life; @fernando2017pathnet; @mallya2018packnet; @rusu2016progressive; @yoon2017lifelong; @aljundi2017expertGate; @serra2018overcoming; @li2018learning], regularization approaches [@Kirkpatrick2016; @lee2017overcoming; @selfless; @DBLP:journals/corr/abs-1805-10784; @Srivastava2013; @Hinton2012; @aljundi2018memory; @liu2018rotate; @chaudhry2018riemannian; @gepperth2019matrix], dual-memory systems [@kemker2017fearnet; @rebuffi2017icarl; @gepperth2015bio], distillation-based approaches [@shmelkov2017incremental; @li2018learning; @kim2018keep] and generative replay methods [@shin2017continual; @kemker2017fearnet; @lesort2018generative; @kamra2017deep; @wu2018memory]. In the context of single-memory DNN methods, regularization approaches are predominant: whereas it was proposed in [@Goodfellow2013] that the popular Dropout regularization can alleviate catastrophic forgetting, the EWC method [@Kirkpatrick2016] proposes to add a term to the DNN energy function that protects weights that are deemed to be important for the previous sub-task(s). Whether a weight is important or not is determined by approximating and analyzing the Fisher information matrix of the DNN. A somewhat related approach is pursued with the incremental moment matching (IMM, see [@lee2017overcoming]) technique, where weights are transferred between DNNs trained on successive sub-tasks by regularization techniques, and the Fisher information matrix is used to “merge” weights for current and past sub-tasks. Other regularization-oriented approaches are proposed in [@selfless; @Srivastava2013] which focus on enforcing sparsity of neural activities by lateral interactions within a layer, or in [@DBLP:journals/corr/abs-1805-10784]. 
Concerning recent advances in generative replay improving upon [@shin2017continual]: Several works propose the use of generative models in continual learning of classification tasks [@Kamra17; @wu18incremental; @wu2018memory; @Shah18] but their results do not provide a comparison between different types of generative models. [@2018arXiv181209111L] propose a conditional replay mechanism similar to the one investigated here, but their goal is the sequential learning of data generation and not classification tasks. Generally, each approach to continual learning has its advantages and disadvantages: - ensemble methods exhibit little to no interference between present and past knowledge, as usually different networks or sub-networks are allocated to different learning tasks. The problem with this approach is that, on the one hand, model complexity is not constant, and more seriously, that the task from which a sample comes must be known at inference time in order to select the appropriate (sub-)network. - regularization approaches are very diverse: in general, their advantage is simplicity and (often) a constant-time/memory behavior w.r.t. the number of tasks. However, the impact of the regularizer on continual learning performance is difficult to understand, and several parameters need to be tuned whose significance is unclear (i.e., the strengths of the regularization terms). - distillation approaches can achieve very good robustness and continual learning performance, but either require the retention of past samples, or require that samples from past classes occur consistently in the current training data. Also, the strength of the various distillation loss regularizers needs to be tuned, usually by cross-validation. - generative replay and dual-memory systems show very good and robust continual learning performance, although time complexity of learning depends on the number of previous tasks for current generative replay methods. 
In addition, the storage of weights for a sufficiently powerful generator may prove very memory-consuming, so this approach cannot be used in all settings. Methods ======= A basic notion in this article is that of a continual (or sequential) learning task (CLT or SLT, although we will use the abbreviation CLT in this article), denoting a classification problem that is composed of two or more sub-tasks which are presented sequentially to the model in question. Here, the CLTs are constructed from two standard visual classification benchmarks: MNIST and Fashion MNIST, either by dividing available classes into several sub-tasks, or by performing per-sample image processing operations that are identical within, and different between, sub-tasks. All continual learning models are then trained and evaluated in an identical fashion on all CLTs, and performances are compared by a simple visual inspection of classification accuracy plots. Benchmarks ---------- **MNIST**  [@LeCun1998] is a common benchmark for computer vision systems and classification problems. It consists of gray scale 28x28 images of handwritten digits (ten balanced classes representing the digits 0-9). The train, test and validation sets contain 55.000, 10.000 and 5.000 samples, respectively.\ **Fashion MNIST**  [@Xiao2017] consists of grayscale 28x28 images of clothes. We choose this dataset because it claims to be a “more challenging classification task than the simple MNIST digits data [@Xiao2017]” while having the same data dimensions, number of classes, balancing properties and number of samples in train, test and validation sets. Continual learning tasks (CLTs) ------------------------------- All CLTs are constructed from the underlying MNIST and FashionMNIST benchmarks, so the number of samples in train and test sets for each sub-task depend on the precise way of constructing them, see below. 
[0.12]{} ![image](Samples/disjoint/sample_0.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_1.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_2.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_3.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_4.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_5.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_6.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_7.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_8.png){width="\textwidth"} [0.12]{} ![image](Samples/disjoint/sample_9.png){width="\textwidth"} **Rotations** New sub-tasks are generated by choosing a random rotation angle $\beta \in [0,\pi/2]$ and then performing a 2D in-plane rotation on all samples of the original benchmark. As both benchmarks we use contain samples of 28x28 pixels, no information loss is introduced by this procedure. We limit rotation angles to $\pi/2$ because larger rotations could mix MNIST classes like 6 and 9. Each sub-task in rotation-based CLTs contains all 10 classes of the underlying benchmark, leading to 55.000 and 10.000 samples, respectively, in the train and test sets of each sub-task.\ **Permutations** New sub-tasks are generated by defining a random pixel permutation scheme, and then applying it to each data sample of the original benchmark. Each sub-task in permutation-based CLTs contains all 10 classes of the underlying benchmark, leading to 55.000 and 10.000 samples, respectively, in the train and test sets of each sub-task.\ **Disjoint classes** For each benchmark, this CLT has as many sub-tasks as there are classes in the benchmark (10 in this article). Each sub-task contains the samples of a single class, i.e., roughly 6.000 samples in the train set and 1.000 samples in the test set. As the classes are balanced for both benchmarks, this does not unduly favor certain classes. 
This CLT presents a substantial challenge for machine learning methods since a normal DNN would, for each sub-task, learn to map all samples to a single class label irrespective of content. Selective discrimination between any two classes is hard to obtain unless replay is involved, because then a classifier actually “sees” samples from different classes at the same time. Models ------ In this article, we compare a considerable number of deep learning models: unless otherwise stated, we employ the Rectified Linear Unit (ReLU) transfer function, cross-entropy loss for classifier training, and the Adam optimizer. **EWC** We re-implemented the algorithm described in [@Kirkpatrick2016], choosing two hidden layers with 200 neurons each.\ **Marginal replay**  [In the context of classification, the *marginal replay* [@2018arXiv181209111L; @shin2017continual; @wu2018memory] method works as follows: For each sub-task $t$, there is a dataset $D_t$, a classifier $C_t$, a generator $G_t$ and a memory of past samples composed of a generator $G_{t-1}$ and a classifier $C_{t-1}$. The latter two allow the generation of artificial samples $D_{t-1}$ from previous sub-tasks. Then, by training $C_t$ and $G_t$ on $D_t$ and $D_{t-1}$, the model can learn the new sub-task $t$ without forgetting old ones.]{} At the end of the sub-task, $C_t$ and $G_t$ are frozen and replace $C_{t-1}$ and $G_{t-1}$. In the default setting, we use the generator for marginal replay in a way that ensures a balanced distribution of classes from past sub-tasks $D_{t-1}$, see also Fig. \[fig:distribution\]. This is achieved by choosing a predetermined number of samples $N$ to be added for all sub-tasks $t$, and letting the generator produce $tN$ previous samples at sub-task $t$. Thus, the number of generated samples increases linearly over time. 
We choose to evaluate two different models for the generator: WGAN-GP as used in [@shin2017continual] and the original GAN model [@NIPS2014_5423] since it is a competitive baseline [@lesort2018training].\ **Conditional replay**  The conditional replay method is derived from *marginal replay*: instead of saving a classifier and a generator, the algorithm only saves a generator that can generate conditionally (for a certain class). Hence, for each sub-task $t$, there is a dataset $D_t$, a classifier $C_t$ and two generators $G_t$ and $G_{t-1}$. The goal of $G_{t-1}$ is to generate data from all the previous sub-tasks during training on the new sub-task. Since data is generated conditionally, samples automatically have a label and do not require a frozen classifier. We follow the same strategy as for marginal replay (previous paragraph) for choosing the number of generated samples at each sub-task. However, conditional replay does not require this: it can, in principle, keep the number of generated samples constant for each sub-task since it is trivially possible to generate a balanced distribution of $\frac{N}{t}$ samples per class, from $t$ different classes, via conditional sample generation. $C_t$ and $G_t$ learn from generated data $D_{t-1}$ and $D_t$. At the end of a sub-task $t$, $C_t$ is able to classify data from the current and previous sub-tasks, and $G_t$ is able to sample from them as well. We choose to use two different popular conditional models: CGAN described in [@mirza2014conditional] and CVAE [@NIPS2015_5775]. 
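To illustrate the conditional-replay schedule on a toy scale, the sketch below runs it on scalar data; the “generator” simply stores one mean per class and resamples it with noise, standing in for CGAN/CVAE sampling, and all names are illustrative rather than the models actually used in the experiments. The key point is that the class label is chosen *before* sampling, so replayed data carry their labels and are balanced by construction, with no frozen classifier needed:

```python
import random

def train_conditional_replay(tasks, n_new):
    """Toy conditional-replay loop: `tasks` is a list of (label, samples)
    one-class sub-tasks with scalar samples. The toy 'generator' maps each
    class label to a stored mean (a stand-in for a conditional generator)."""
    gen = None                                  # class label -> stored mean
    for t, (label, samples) in enumerate(tasks, start=1):
        replay = []
        if gen:
            per_class = max(1, (t * n_new) // len(gen))
            for c, mu in gen.items():           # pick the class first ...
                replay += [(mu + random.gauss(0.0, 0.01), c)  # ... then sample
                           for _ in range(per_class)]
        data = [(x, label) for x in samples] + replay
        # "retrain" the generator on new + replayed data:
        by_class = {}
        for x, c in data:
            by_class.setdefault(c, []).append(x)
        gen = {c: sum(xs) / len(xs) for c, xs in by_class.items()}
    return gen
```

After several disjoint one-class sub-tasks, the generator still covers every class seen so far, which is exactly the property the replayed classifier relies on in the disjoint CLT.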
[0.45]{} ![image](Figures/mnist_disjoint_all_task_accuracy.png){width="\textwidth"} [0.45]{} ![image](Figures/fashion_disjoint_all_task_accuracy.png){width="\textwidth"} [0.45]{} ![image](Figures/mnist_permutations_all_task_accuracy){width="\textwidth"} [0.45]{} ![image](Figures/fashion_permutations_all_task_accuracy){width="\textwidth"} [0.45]{} ![image](Figures/mnist_rotations_all_task_accuracy){width="\textwidth"} [0.45]{} ![image](Figures/fashion_rotations_all_task_accuracy){width="\textwidth"} [0.45]{} ![image](Balancing/mnist_disjoint_500_all_task_accuracy.png){width="\textwidth"} [0.45]{} ![image](Balancing/fashion_disjoint_500_all_task_accuracy.png){width="\textwidth"} [0.45]{} ![image](Balancing/mnist_disjoint_5000_all_task_accuracy.png){width="\textwidth"} [0.45]{} ![image](Balancing/fashion_disjoint_5000_all_task_accuracy.png){width="\textwidth"} Experiments =========== We conduct experiments using all models and CLTs described in the previous section. Each class (regardless of the CLT) is presented for 25 epochs. Results are presented either based on the time-varying classification accuracy on the *whole* test set, or on the class (from the test set) that was presented first. In the first case, accuracy should ideally increase over time and reach its maximum after the last class has been presented. In the second case, accuracy will typically decrease over time, reflecting that some information about the first class is forgotten. We distinguish two major experimental goals or questions: - Establishing the performance of the newly proposed methods (marginal replay with GAN, conditional replay with CGAN or CVAE) w.r.t. the state of the art. To this end, we conduct experiments that increase the number of generated samples over time in a way that ensures an effectively balanced class distribution (see Fig. \[fig:distribution\]). 
We do this both for marginal and conditional replay in order to ensure a fair comparison, although technically conditional replay can generate a balanced distribution even with a constant number of generated samples. - Demonstrating the advantages of conditional w.r.t. marginal replay strategies, especially when only a few samples can be generated, thus obtaining a skewed class distribution for marginal replay (see Fig. \[fig:distribution\]). Results shedding light on the first question are presented in Fig. \[fig:all\_task\_accuracy\] (showing classification accuracy on the whole test set over time, see Fig. \[fig:first\] for accuracy on the first sub-task), whereas the second question is addressed in Fig. \[fig:bal\] for the disjoint CLT only due to space limitations. Results and discussion ====================== From the experiments described in the previous section, we can state the following principal findings:\ ![Why marginal replay must linearly increase the number of generated samples: distribution of classes produced by the generator of a marginal replay strategy after sequential training of 10 sub-tasks (of 1 class each). This essentially corresponds to the “disjoint” type of CLTs. Shown are three cases: “*balanced*: $tN$” (blue bars) where $tN$ samples are generated for each sub-task $t$, “unbalanced: $N$” (orange bars) where the number of generated samples is constant and equal to the number of newly trained samples $N$ for each sub-task, and “unbalanced: $0.1 tN$” where $0.1tN$ samples are generated. We observe that, in order to ensure a balanced distribution of classes, the number of generated samples must be re-scaled, or, in other words, must increase linearly with the number of sub-tasks. []{data-label="fig:distribution"}](distrBoth.png){width="44.00000%"} **Replay methods outperform EWC** As can be observed from Fig. 
\[fig:all\_task\_accuracy\], the novel methods we propose (marginal replay with GAN and WGAN-GP, conditional replay with CGAN and conditional replay with CVAE) outperform EWC on all CLTs, sometimes by a large margin. Particular attention should be given to the performance of EWC: while generally acceptable for rotation and permutation CLTs, it completely fails for the disjoint CLT. This is due to the fact that there is only one class in each sub-task, making EWC map all samples to the currently presented class label regardless of input, since no replay is available to include samples from previous sub-tasks (as outlined before in Sec. \[sec:contr\]).\ **Marginal replay with GAN outperforms WGAN-GP** The clear advantage of GAN over WGAN-GP is the higher stability of the generative models. This is not only observable in Fig. \[fig:all\_task\_accuracy\], but also when measuring performance on the first sub-task only during the course of continual learning (see Fig. \[fig:first\]).\ **Conditional replay can be run at constant time complexity** A very important point in favour of conditional replay is run-time complexity, as expressed by the number of samples that need to be generated each time a new sub-task is trained. Since the generators in marginal replay strategies generate samples regardless of class, the distribution of classes will be proportional to the distribution of classes during the last training of the generator, which leads to an unbalanced class distribution over time, with the oldest classes being strongly under-represented (see Fig. \[fig:distribution\]). For marginal replay, this is avoided by increasing the number of generated samples over time, which restores a balanced class distribution (see also Fig. \[fig:distribution\]) at the cost of a vastly larger number of samples.
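The sample-count schedules compared above are easy to reproduce. The following sketch is our own illustration, not the authors' code (all names are ours): it models the generator's class mix after each retraining step, assuming replayed samples inherit the previous mix while the $N$ new samples all belong to the current class.

```python
# Illustrative model (ours) of the class mix in marginal replay.  At
# sub-task t the generator is retrained on N real samples of the new class
# plus g(t) replayed samples that reproduce the previous generator's mix.

def class_distribution(num_tasks, N, schedule):
    """Final class proportions after sequentially training all sub-tasks."""
    dist = {0: 1.0}              # after sub-task 0, one class is known
    for t in range(1, num_tasks):
        g = schedule(t, N)       # number of generated (replayed) samples
        total = g + N
        dist = {c: p * g / total for c, p in dist.items()}
        dist[t] = N / total      # the N new samples are all class t
    return dist

balanced = class_distribution(10, 100, lambda t, N: t * N)   # "balanced: tN"
constant = class_distribution(10, 100, lambda t, N: N)       # "unbalanced: N"
print(round(balanced[0], 3))   # oldest class keeps 1/10 of the mass
print(round(constant[0], 4))   # oldest class decays geometrically
```

Under the $tN$ schedule every class ends at exactly $1/10$ of the mass, whereas with a constant budget the oldest class shrinks to $2^{-9}\approx0.002$, matching the shape of the unbalanced bars in Fig. \[fig:distribution\].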
Conditional replay, on the other hand, can selectively generate samples from a defined class, thus constructing a class-balanced dataset without needing to increase the number of generated samples over time. In the interest of accuracy, it can of course make sense to increase the number of generated samples over time, just as for marginal replay. This, however, is a deliberate choice and not something required by conditional replay itself.\ **Marginal replay outperforms conditional replay when many samples can be generated** From Fig. \[fig:all\_task\_accuracy\], it can be observed that marginal replay outperforms conditional replay by a small margin. This comes at the price of having to generate a large number of samples, which will become infeasible if many classes are involved in the retraining.\ **Conditional replay is superior when few samples are generated** The results of Fig. \[fig:bal\] show that conditional replay is superior to marginal replay when generating fewer samples at each sub-task (more precisely: $0.1tN$ samples instead of $tN$, for sub-task $t$ and number of new samples per sub-task $N$). This can be understood quite easily: since we generate only $0.1tN$ samples instead of $tN$ samples at each sub-task, marginal replay produces an unbalanced class distribution (see Fig. \[fig:distribution\]), which strongly impairs classification performance.
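The per-class querying described here can be sketched in a few lines (our own illustration; `fake_generator` stands in for a trained conditional model such as a CGAN or CVAE decoder):

```python
import random

def fake_generator(label):
    """Stand-in for a trained conditional generator G(z, label)."""
    return (random.random(), label)

def balanced_replay(past_classes, budget):
    """Draw `budget` samples spread evenly over all past classes."""
    per_class = budget // len(past_classes)
    return [fake_generator(c)
            for c in past_classes
            for _ in range(per_class)]

replay = balanced_replay(past_classes=list(range(5)), budget=1000)
labels = [lbl for _, lbl in replay]
print(labels.count(0) == labels.count(4))  # classes equally represented
```

Because the class label is an input to the generator, the budget can stay constant across sub-tasks while the replay set remains balanced — the property exploited in the constant-time-complexity argument above.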
This is a principal advantage that conditional replay has over marginal replay: generating balanced class distributions while having much more control over the number of generated samples.\ (Fig. \[fig:first\]: accuracy on the first sub-task over time for MNIST and FashionMNIST under the disjoint, permutation and rotation CLTs.) Conclusions =========== **Summary** We have proposed several ways of performing continual learning with replay-based models and empirically demonstrated (on novel benchmarks) their merit w.r.t. the state of the art, represented by the EWC method. A principal conclusion of this article is that conditional replay methods show strong promise because they have competitive performance, and they impose fewer restrictions on their use in applications. Most notably, they can be used at constant time complexity, meaning that the number of generated samples does not need to increase over time, which would be problematic in applications with many sub-tasks and real-time constraints.\ **Concerning the benchmarks** While one might argue that MNIST and FashionMNIST are too simple for a meaningful evaluation, this holds only for non-continual learning scenarios. In fact, recent articles [@pfuelb2019a] show that MNIST-related CLTs are still a major obstacle for most current approaches to continual learning under realistic conditions. So, while we agree that MNIST and FashionMNIST are no longer suitable benchmarks in general, we must stress the difficulty of MNIST-related CLTs in continual learning, which makes these benchmarks very suitable indeed in this particular context.
The use of intrinsically more complex benchmarks, such as CIFAR, SVHN or ImageNet, is at present not feasible, since generative methods are not yet good enough for replaying these data [@2018arXiv181209111L]. **Next steps** Future work will include a closer study of conditional replay: in particular, we would like to better understand why it exhibits better performance than marginal replay in cases where the number of generated samples is restricted to be low. In addition, it would be interesting to study the continual learning behavior of conditional replay models when a fixed number of generated samples is imposed at each sub-task, for various CLTs. The latter topic is interesting because the success of replay-based continual learning methods in applications will depend on whether the number of generated samples (and thereby time and memory complexity) can be reduced to manageable levels.\ **Observations** An interesting point is that disjoint-type CLTs pose enormous problems to conventional machine learning architectures, and therefore represent a very useful benchmark for continual learning algorithms. If each of a CLT’s sub-tasks contains a single visual class, training them one after the other will induce no between-class discrimination at all, since every training step just “sees” a single class. Replay-based methods nicely bridge this gap, enabling continual learning while preserving between-class discrimination. To our mind, any application-relevant algorithm for continual learning therefore must include some form of experience replay.\ **Outlook** Ultimately, the goal of our research is to come up with replay-based models where the effort spent on replaying past knowledge is small compared to the effort of training with new samples, which will require machine learning models that are, intrinsically, less prone to catastrophic forgetting than DNNs are.
[**[Chromospheric activity of ROSAT discovered weak-lined T Tauri stars]{}**]{} [*[ D. Montes$^{1,2}$, L.W. Ramsey$^{1}$ ]{}*]{} $^1$ [The Pennsylvania State University, Department of Astronomy and Astrophysics, 525 Davey Laboratory, University Park, PA 16802, USA]{}\ $^2$ [Departamento de Astrofísica, Facultad de Físicas, Universidad Complutense de Madrid, E-28040 Madrid, Spain]{} To be published in ASP Conf. Ser., Solar and Stellar Activity: Similarities and Differences (meeting dedicated to Brendan Byrne, Armagh 2-4th September 1998) C.J. Butler and J.G. Doyle, eds ------------------------------------------------------------------------ [**Abstract**]{} We have started a high resolution optical observation program dedicated to the study of chromospheric activity in weak-lined T Tauri stars (WTTS) recently discovered by the ROSAT All-Sky Survey (RASS). It is our purpose to quantify the phenomenology of the chromospheric activity of each star, determining stellar surface fluxes in the most important chromospheric activity indicators (Ca [ii]{} H & K, H$\beta$, H$\alpha$, Ca [ii]{} IRT) as well as obtaining the Li [i]{} abundance, a better determination of the stellar parameters, the spectral type, and possible binarity. With this information we can study in detail the flux-flux and rotation-activity relations for this kind of object and compare them with the corresponding relations in the well studied RS CVn systems. A large number of WTTS have been discovered by the RASS in and around different star formation clouds. Whether these stars are really WTTS, or post-TTS, or even young main sequence stars is a matter of ongoing debate. However, we have centered our study only on objects for which very recent studies, of Li [i]{} abundance (greater than in Pleiades stars of the same spectral type) or radio properties, clearly confirmed their pre-main sequence (PMS) nature.
In this contribution we present preliminary results of our January 1998 high resolution echelle spectroscopic observations at the 2.1m telescope of the McDonald Observatory. We have analysed, using the spectral subtraction technique, the H$\alpha$ and Ca [ii]{} IRT lines of six WTTS (RXJ0312.8-0414NW, SE; RXJ0333.1+1036; RXJ0348.5+0832; RXJ0512.0+1020; RXJ0444.9+2717) located in and around the Taurus-Auriga molecular clouds. A broad and variable double-peaked H$\alpha$ emission is observed in RXJ0444.9+2717. Emission above the continuum in the H$\alpha$ and Ca [ii]{} IRT lines is detected in RXJ0333.1+1036, and a filling-in of these lines is present in the rest of the stars. Our spectral type and Li [i]{} EW determinations confirm the PMS nature of these objects. ------------------------------------------------------------------------ ------------------------------------------------------------------------ [**Introduction**]{} Weak-lined T Tauri stars (WTTS) are low-mass pre-main sequence (PMS) stars with H$\alpha$ equivalent widths $\leq$ 10 [Å]{} in which no signs of accretion are observed. The emission spectrum of these stars is not affected by the complications of the star-disk interaction, which often masks the underlying absorption lines and also extinguishes the stellar light in classical T Tauri stars (CTTS). The WTTS are thus ideal targets to study the behavior of surface activity in the PMS stage of stellar evolution. While there are a large number of studies at UV, X-ray and radio wavelengths, little research has been directed towards the study of the chromospheric activity using optical observations. Those which have been done are based on low resolution spectroscopic observations. Only some recent higher resolution studies centered on bona-fide WTTS in Taurus are available (see Feigelson et al. 1994; Welty 1995; Welty & Ramsey 1995, 1998; Poncet et al. 1998; Montes & Miranda 1999).
In order to improve the knowledge of the WTTS chromospheres, high resolution optical observations are needed. The WTTS discovered very recently by the ROSAT All-Sky Survey (RASS) are good targets to accomplish these objectives. A large number of them have been found far away from the star formation clouds (Neuhäuser et al. 1995; Alcalá et al. 1995, 1996; Wichmann et al. 1996; Magazzù et al. 1997; Krautter et al. 1997). Whether these stars are really WTTS, or post-TTS, or even young main sequence stars is a matter of ongoing debate (Feigelson 1996, Briceño et al. 1997, Favata et al. 1997). However, we will study only those in the Taurus-Auriga molecular cloud for which very recent studies clearly confirmed their PMS nature. In this contribution we present preliminary results of our high resolution echelle spectroscopic observations of RX J0312.8-0414NW, SE; RX J0333.1+1036; RX J0348.5+0832; RX J0512.0+1020; and RX J0444.9+2717. ------------------------------------------------------------------------ ------------------------------------------------------------------------ [**Observations**]{} The spectroscopic observations were obtained during a 10-night run, 12-21 January 1998, using the 2.1m telescope at McDonald Observatory and the Sandiford Cassegrain Echelle Spectrograph (McCarthy et al. 1993). This instrument is a prism cross-dispersed echelle mounted at the Cassegrain focus, and it is used with a 1200$\times$400 Reticon CCD. The spectrograph setup was chosen to cover the H$\alpha$ (6563 Å) and Ca [ii]{} IRT (8498, 8542, 8662 Å) lines. The wavelength coverage is about 6400-8800 Å and the reciprocal dispersion ranges from 0.06 to 0.08 Å/pixel. The spectral resolution, determined by the FWHM of the arc comparison lines, ranges from 0.13 to 0.20 Å (resolving power R=$\lambda$/$\Delta\lambda$ of 50000 to 31000) in the H$\alpha$ line region.
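The quoted resolving powers follow directly from $R=\lambda/\Delta\lambda$ evaluated at H$\alpha$ with the FWHM values above; a quick check (our own snippet, not part of the original paper):

```python
# R = lambda / FWHM, evaluated at H-alpha for the two extreme FWHM values
# quoted above (0.13 and 0.20 Angstrom).

H_ALPHA = 6563.0  # Angstrom

def resolving_power(wavelength, fwhm):
    return wavelength / fwhm

print(round(resolving_power(H_ALPHA, 0.13)))  # about 50000
print(round(resolving_power(H_ALPHA, 0.20)))  # about 33000
```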
On one of the nights we changed the spectrograph setup to include the He [i]{} D$_{3}$ (5876 Å) and Na [i]{} D$_{1}$ and D$_{2}$ (5896, 5890 Å) lines, with a wavelength coverage of 5600-7000 Å. The spectra have been extracted using the standard reduction procedures in the IRAF package (bias subtraction, flat-field division, and optimal extraction of the spectra). The wavelength calibration was obtained by taking spectra of a Th-Ar lamp. Finally, the spectra have been normalized by a low-order polynomial fit to the observed continuum. The chromospheric contribution in these features is determined using the spectral subtraction technique (Huenemoerder & Ramsey 1987; Montes et al. 1995; 1997). The synthesized spectrum was constructed using the program STARMOD developed at Penn State (Barden 1985). ------------------------------------------------------------------------ ------------------------------------------------------------------------ [**Results**]{} We have analysed the H$\alpha$ and Ca [ii]{} IRT lines of six WTTS (RX J0312.8-0414NW, SE; RX J0333.1+1036; RX J0348.5+0832; RX J0512.0+1020; RX J0444.9+2717) located in and around the Taurus-Auriga molecular clouds. These targets were selected from two sources: \(1) From the ROSAT-detected late-type stars south of Taurus (Neuhäuser et al. 1995; Magazzù et al. 1997, hereafter M97) we selected the stars that the spectroscopic studies of M97 and Neuhäuser et al. (1997, hereafter N97) clearly identified as WTTS from their greater Li abundance than Pleiades stars of the same spectral type. Some of them have been classified by these authors as single- and double-lined spectroscopic binaries (SB1 and SB2) and others are visual binaries (Sterzik et al. 1997, hereafter S97). \(2) From the list of Wichmann et al. (1996) (hereafter W96) of new WTTS in Taurus, we selected the stars in which radio emission was detected by Carkner et al. (1997, hereafter C97), supporting their identification as genuine WTTS rather than ZAMS stars.
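The spectral subtraction technique mentioned above can be sketched as follows. This is our own simplified illustration, not STARMOD itself: a reference spectrum of an inactive star of the same spectral type is rotationally broadened to the target's $v\sin i$ and subtracted from the observed spectrum, so that positive residuals in the line cores measure the chromospheric emission.

```python
import numpy as np

def rotational_kernel(velocity_grid, vsini, eps=0.6):
    """Classical rotational broadening profile with linear limb darkening."""
    x = velocity_grid / vsini
    inside = np.abs(x) < 1.0
    g = np.zeros_like(x)
    g[inside] = (2 * (1 - eps) * np.sqrt(1 - x[inside] ** 2)
                 + 0.5 * np.pi * eps * (1 - x[inside] ** 2))
    return g / g.sum()          # normalize so the continuum is preserved

def subtracted_spectrum(observed, reference, velocity_grid, vsini):
    """Observed minus broadened reference; positive residuals = emission."""
    synthetic = np.convolve(reference,
                            rotational_kernel(velocity_grid, vsini),
                            mode="same")
    return observed - synthetic
```

With `vsini=65.0` (the broadening quoted below for RX J0444.9+2717), the residual profile in the H$\alpha$ core would correspond to the excess chromospheric emission.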
Representative spectra of these stars are plotted in Fig. 1 (H$\alpha$), Fig. 2 (Li [i]{} 6708 Å), and Fig. 3 (Ca [ii]{} IRT). A K1V reference star is also plotted for comparison. The observed and subtracted spectra for the case of RX J0444.9+2717 are plotted in Fig. 4. ------------------------------------------------------------------------ ------------------------------------------------------------------------ [**RX J0312.8-0414**]{} RX J0312.8-0414 is a visual binary with components NW and SE separated by 14” (M97, S97). The NW component is a G0V with [*v*]{}sin[*i*]{} = 33 km s$^{-1}$ and is a SB2. The SE component is a G8V with [*v*]{}sin[*i*]{} = 11 km s$^{-1}$. Both components exhibit H$\alpha$ absorption, with EW(H$\alpha$) of 3.5 and 2.5 Å, respectively (M97; N97). Our spectra exhibit a strong Li [i]{} 6708 Å line confirming the PMS nature of these objects. However, the level of chromospheric activity is very low: only a small filling-in of the H$\alpha$ and Ca [ii]{} IRT lines is detected, in agreement with the earlier spectral types of both stars. [**RX J0333.1+1036**]{} This star is classified as a confirmed PMS star by M97 and N97 on the basis of its Li [i]{} abundance; however, C97 detected no radio emission. M97 give a spectral type of K3 and observed the H$\alpha$ line in emission with an EW of -0.8 Å. N97 measured a [*v*]{}sin[*i*]{} of 20 km s$^{-1}$. Emission above the continuum in the H$\alpha$ and Ca [ii]{} IRT lines is detected in our five spectra from January 14 to January 20, 1998, with small variations from night to night. [**RX J0348.5+0832**]{} This PMS is a rapidly-rotating star ([*v*]{}sin[*i*]{} = 127 km s$^{-1}$, N97) of spectral type G7 with a small emission in the H$\alpha$ line (EW = -0.1 Å, M97). In our five spectra (from 01/12/98 to 01/18/98) we observe the H$\alpha$ line always in absorption, but filled in. The Ca [ii]{} IRT lines are also filled in by chromospheric emission.
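The equivalent widths quoted for these stars follow the usual convention $EW=\int(1-F_\lambda/F_c)\,d\lambda$ on the continuum-normalized spectra, so emission lines give negative values. A minimal sketch of the measurement (our own illustration):

```python
def equivalent_width(wavelengths, flux):
    """Trapezoidal integral of (1 - flux) over a continuum-normalized
    spectrum; negative results indicate net emission above the continuum."""
    ew = 0.0
    for i in range(len(wavelengths) - 1):
        dlam = wavelengths[i + 1] - wavelengths[i]
        ew += 0.5 * ((1.0 - flux[i]) + (1.0 - flux[i + 1])) * dlam
    return ew
```

Applied to a subtracted profile rather than the observed one, the same integral yields the excess chromospheric emission EW.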
[**RX J0512.0+1020**]{} M97 give a spectral type of K2 for this star and observed a small emission in the H$\alpha$ line (EW = -0.1 Å). N97 measured a rotational velocity of 57 km s$^{-1}$. In our three spectra (from 01/14/98 to 01/17/98) we observe a variable filling-in of the H$\alpha$ and Ca [ii]{} IRT lines. The H$\alpha$ line shows emission in the blue wing in one of the spectra. ------------------------------------------------------------------------ ------------------------------------------------------------------------ [**RX J0444.9+2717**]{} This is a K1 star with H$\alpha$ emission above the continuum (EW = -2.1 Å), classified as a PMS star by W96 on the basis of its Li [i]{} abundance. The detection of radio emission by C97 confirms its PMS nature. Kohler & Leinert (1998) found an IR companion with a separation of 1.754” and a brightness ratio at K of 0.102. We have eight spectra of this star available (from 01/12/98 to 01/20/98). The observed spectra are well matched using a K1V reference star with a rotational broadening of 65 km s$^{-1}$. Some of the more intense photospheric lines exhibit a flat-bottomed core (i.e. the core is noticeably filled in with respect to the reference profile), as is observed in other rapidly-rotating and spotted stars. A broad and variable double-peaked H$\alpha$ emission above the continuum is observed (see Fig. 4). The H$\alpha$ EW in the observed spectra changes from -1.2 Å to -2.6 Å. The Ca [ii]{} IRT lines exhibit a strong filling-in. ------------------------------------------------------------------------ Alcalá, J. M., et al. 1995, A&AS, 114, 109 Alcalá, J. M., et al. 1996, A&AS, 119, 7 Briceño, C., Hartmann, L. W., Stauffer, J. R., et al. 1997, AJ, 113, 740 Barden, S. C. 1985, ApJ, 295, 162 Carkner, L., Mamajek, E., Feigelson, E. D., et al. 1997, ApJ, 490, 735 Favata, F., Micela, G., & Sciortino, S. 1997, A&A, 326, 647 Feigelson, E. D., et al. 1994, ApJ, 432, 373 Feigelson, E. D. 1996, ApJ, 468, 306 Huenemoerder, D. P., & Ramsey, L. W.
1987, ApJ, 319, 392 Kohler, R., & Leinert, C. 1998, A&A, 331, 977 Krautter, J., et al. 1997, A&AS, 123, 329 Magazzù, A., Martín, E. L., Sterzik, M. F., et al. 1997, A&AS, 124, 449 McCarthy, J. K., Sandiford, B. A., Boyd, D., & Booth, J. 1993, PASP, 105, 881 Montes, D., Fernández-Figueroa, M. J., De Castro, E., & Cornide, M. 1995, A&A, 294, 165 Montes, D., Fernández-Figueroa, M. J., De Castro, E., & Sanz-Forcada, J. 1997, A&AS, 125, 263 Montes, D., & Miranda, L. F. 1999, in preparation Neuhäuser, R., Sterzik, M. F., Torres, G., & Martín, E. L. 1995, A&A, 299, L13 Neuhäuser, R., Torres, G., Sterzik, M. F., & Randich, S. 1997, A&A, 325, 647 Poncet, A., Montes, D., Fernández-Figueroa, M. J., & Miranda, L. F. 1998, in ASP Conf. Ser. 155, Cool Stars, Stellar Systems, and the Sun, 10th Cambridge Workshop, eds. R. A. Donahue & J. A. Bookbinder (San Francisco: ASP), CD-1772 Sterzik, M. F., Durisen, R. H., Brandner, W., et al. 1997, AJ, 114, 1673 Welty, A. D. 1995, AJ, 110, 776 Welty, A. D., & Ramsey, L. W. 1995, AJ, 110, 336 Welty, A. D., & Ramsey, L. W. 1998, AJ, in press Wichmann, R., Krautter, J., Schmitt, J. H. M. M., et al. 1996, A&A, 312, 439
--- abstract: 'Given a saddle fixed point of a surface diffeomorphism, its stable and unstable curves $W^S$ and $W^U$ often form a homoclinic tangle. Given such a tangle, we use topological methods to find periodic points of the diffeomorphism, using only a subset of the tangle with finitely many points of intersection, which we call a [*trellis*]{}. We typically obtain exponential growth of periodic orbits, symbolic dynamics and a strictly positive lower bound for topological entropy. For a simple example occurring in the Hénon family, we show that the topological entropy is at least $0.527$.' address: 'Department of Mathematics, University of California, Berkeley, California 94704' author: - Pieter Collins title: Dynamics Forced by Surface Trellises --- [^1] Introduction {#sec:Introduction} ============ Let $f:{\mathbb{R}}^2{\longrightarrow}{\mathbb{R}}^2$ be a diffeomorphism. A fixed point $p$ of $f$ is a *hyperbolic fixed point* if the eigenvalues of $Df(p)$ have modulus $\neq 1$. By the Stable Manifold Theorem, the stable and unstable sets of $p$ are injectively immersed manifolds, and if $p$ is a saddle point, these manifolds are curves. If these curves intersect at a point $q$ distinct from $p$, there must be infinitely many intersections, and the stable and unstable curves then form a complicated set called a *homoclinic tangle*. Homoclinic tangles have been studied extensively, dating back to Poincaré and Birkhoff. The main result, due in its modern form to Smale, is that a diffeomorphism with a transverse homoclinic point has a horseshoe in some iterate. While this has been generalised to topologically transverse intersections and quadratic tangencies, little progress has been made in determining more about the actual dynamics forced by a homoclinic tangle. Since all interesting homoclinic tangles have infinitely many intersection points, we cannot compute them in practice. 
The purpose of this paper is to show that we can obtain interesting information about the dynamics of a system by considering a portion of a homoclinic tangle with only finitely many intersection points. We call these objects *trellises*. We will consider systems on compact surfaces with boundary. Given a trellis for a system, we find lower bounds for the number of periodic orbits of a given period, and the location of these orbits in terms of the complement of the trellis. In many cases, we can find a finite type shift which gives a good symbolic description of the system. The growth rate of the number of periodic points is the same as the entropy of the shift, which is a lower bound for the topological entropy of $f$. All but finitely many periodic points of the shift are realised by the original map. Since all the tools we use are topological, we do not need any differentiability requirements, and we can even weaken the hypothesis that $f$ is invertible. Further, the methods work equally well for heteroclinic tangles. We will refer to both homoclinic and heteroclinic tangles as *tangles*. Note that our terminology differs from that of Easton [@Easton86], who uses the word trellis for what we call a tangle. Algorithms exist for computing approximations to stable and unstable manifolds for surface diffeomorphisms. Since transverse intersections of these curves are persistent under perturbations, and trellises contain finitely many intersections, we can often compute trellises precisely. This allows us to obtain rigorous results about real systems. As an example, we find symbolic dynamics for the Hénon map $(x,y)\mapsto(r-x^2+cy,x)$ with parameter values $c=-\frac{4}{5}$ and $r=\frac{3}{2}$, and show that it has topological entropy at least $0.527$. In [Section \[sec:Periodic\]]{} we state the definitions and theorems from relative periodic point theory we need to study trellises.
Proofs and further discussion of the results in this section can be found in [@CollinsPPa]. In [Section \[sec:Trellis\]]{} we give a formal definition of trellises, and details of the operations we need to study them. For a trellis $T$, we first cut along the unstable curves of $T$ to obtain a topological pair ${{{{\mathcal{C}}}T}}$ consisting of a surface and a subset corresponding to the stable curves. We then homotopy-retract ${{{{\mathcal{C}}}T}}$ onto a graph ${\mathcal{G}T}$. If $T$ is a trellis for a map $f$, then we obtain maps ${{{{\mathcal{C}}}\!{f}}}$ on ${{{{\mathcal{C}}}T}}$ and ${\mathcal{G}f}$ on ${\mathcal{G}T}$. We can then use Nielsen theory to show that periodic orbits for the graph map correspond to periodic orbits for the original map $f$. If the trellis $T$ has transverse intersections and is a subset of a tangle for a homeomorphism $f$, then the growth rate of the number of periodic points of $f$ so found is a lower bound for the topological entropy of $f$. In [Section \[sec:Examples\]]{} we give a number of examples showing how we can use these methods to obtain interesting results about the dynamics of maps. Relative Periodic Point Theory {#sec:Periodic} ============================== In this section we give, without proofs, a brief summary of the definitions and theorems for the relative fixed point theory developed in [@CollinsPPa]. The results are based on standard fixed point theory, a good introduction to which can be found in Brown [@Brown71]. There are two basic types of theory, Lefschetz theory and Nielsen theory. Both these are homotopy-invariant, and allow for comparison of maps on different spaces. The Lefschetz theory finds periodic points by looking at cohomology actions on $H^*(X,Y)$, and is most useful when no a priori information about periodic points is available. 
The computations involved are similar to those for the cohomological Conley index of Szymczak [@Szymczak95FUNDM], and were motivated by this theory, though some of the topology is complicated since our regions may not have disjoint closures. The Nielsen theory determines when two periodic points can bifurcate with each other. It is most useful when we can explicitly find periodic points for one map in a homotopy class, since we can then decide whether these points exist for other maps in the homotopy class. When studying trellises, the strongest results are obtained by applying Nielsen theory to maps of divided graphs. Throughout this section, all topological spaces will be assumed to be compact absolute neighbourhood retracts. All cohomology groups will be taken over ${\mathbb{Q}}$. Topological Pairs, Regions and Itineraries {#sub:toppair} ------------------------------------------ In this section we define a number of terms which provide a framework for describing dynamics. A *topological pair* is a pair $(X,Y)$ where $X$ is a topological space and $Y$ is a closed subset of $X$. If $(X,Y)$ is a topological pair, we will write $Y^C$ for $X\setminus Y$, the complement of $Y$ in $X$. A *map of pairs* $f:(X_1,Y_1){\longrightarrow}(X_2,Y_2)$ is a continuous function $f:X_1{\longrightarrow}X_2$ such that $f(Y_1)\subset Y_2$. A map of pairs $f:(X_1,Y_1){\longrightarrow}(X_2,Y_2)$ is *exact* if $f^{-1}(Y_2)\subset Y_1$, or, equivalently, if $f(Y_1^C)\subset Y_2^C$. Let $f_0,f_1:(A,B){\longrightarrow}(X,Y)$. A *homotopy* from $f_0$ to $f_1$ in the category of topological pairs is a family of maps $f_t:(A,B){\longrightarrow}(X,Y)$ for $0\le t\le1$ such that the function $F:A\times I{\longrightarrow}X$ defined by $F(a,t)=f_t(a)$ is continuous. We write $f_t:f_0{\sim}f_1$ if $f_0$ is homotopic to $f_1$ via the homotopy $f_t$. ${\sim}$ induces an equivalence relation on maps of pairs, and we write $[f]$ for the equivalence class of $f$.
A homotopy $f_t$ is a *strong homotopy* if $f_t(a)=f_0(a)$ whenever $f_1(a)=f_0(a)$, and an *exact homotopy* if each map $f_t$ is exact. A *region* ${{R}}$ of a topological pair $(X,Y)$ is an open subset of $X\setminus Y$ such that ${{R}}\cup Y$ is closed in $X$. A *regional space* is a triple $(X,Y;{{\mathbf{R}}})$ where $(X,Y)$ is a topological pair, and ${{\mathbf{R}}}$ is a set of mutually disjoint regions. Note that we do not require $\bigcup{{\mathbf{R}}}$, the union of the regions in ${{\mathbf{R}}}$, to cover $Y^C$. If $(X_1,Y_1;{{\mathbf{R}}}_1)$ and $(X_2,Y_2;{{\mathbf{R}}}_2)$ are regional spaces, a map $f:(X_1,Y_1;{{\mathbf{R}}}_1){\longrightarrow}(X_2,Y_2;{{\mathbf{R}}}_2)$ is *region-preserving* if there is a function ${{f_{{\mathbf{R}}}}}:{{\mathbf{R}}}_1{\longrightarrow}{{\mathbf{R}}}_2$ such that for all regions ${{R}}_1\in{{\mathbf{R}}}_1$, $f({{R}}_1)\subset {{f_{{\mathbf{R}}}}}({{R}}_1)$, and for all regions ${{R}}_2\in{{\mathbf{R}}}_2$, $f^{-1}({{R}}_2)\subset\bigcup{{\mathbf{R}}}_1$. A *dynamical system* on a regional space $(X,Y;{{\mathbf{R}}})$ is a self-map $f$ of $(X,Y)$. If $f$ and $g$ are dynamical systems on $(X_1,Y_1;{{\mathbf{R}}}_1)$ and $(X_2,Y_2;{{\mathbf{R}}}_2)$ respectively, a region-preserving map $r:(X_1,Y_1;{{\mathbf{R}}}_1){\longrightarrow}(X_2,Y_2;{{\mathbf{R}}}_2)$ is a *morphism* from $f$ onto $g$ if there is a map of pairs $s:(X_2,Y_2){\longrightarrow}(X_1,Y_1)$ such that $r\circ s{\sim}{id}$ and $f{\sim}s\circ g\circ r$. We interpret $X$ as the base space of the system, $Y$ as an invariant set on which the dynamics of $f$ is known, and ${{\mathbf{R}}}$ as the regions on which we are interested in finding symbolic dynamics. We will see that if there is a morphism from $f$ onto $g$, then the symbolic dynamics we can compute for $f$ are more complicated than those for $g$. \[defn:itinerary\] Let $f$ be a dynamical system on $(X,Y;{{\mathbf{R}}})$.
A sequence ${{R}}_0{{R}}_1{{R}}_2\ldots$ of regions in ${{\mathbf{R}}}$ is an *itinerary* for $x\in X$ if $f^i(x)\in {{R}}_i$ for all $i\in{\mathbb{N}}$. Let ${\mathrm{Per}}_n(f)$ be the set of fixed points of $f^n$ (that is, the set of points of not necessarily least period $n$). A word ${{\mathcal{R}}}={{R}}_0{{R}}_1\ldots {{R}}_{n-1}$ on ${{\mathbf{R}}}$ is a *code* for $x\in{\mathrm{Per}}_n(f)$ if $f^i(x)\in {{R}}_i$ for $0\le i<n$. We write ${{{\mathrm{Per}}_{{{\mathcal{R}}}}(f)}}$ for the set of periodic points with code ${{\mathcal{R}}}$, and ${{{\mathrm{Per}}_{{{\mathbf{R}}},n}(f)}}$ for the set of points with codes in ${{\mathbf{R}}}$ of length $n$. Notice that the itinerary is not defined for points which leave $\bigcup{{\mathbf{R}}}$, but since regions are disjoint, it is unique where defined. Relative Lefschetz Theory {#sec:lefschetz} ------------------------- Since $X$ and $Y$ are ANRs, we can use the strong excision property to define a *cohomology projection*. Let ${{R}}$ be a region of $(X,Y)$. Let $j_1:({{R}}\cup Y,Y)\hookrightarrow(X,Y)$, $j_2:(X,Y)\hookrightarrow(X,X\setminus {{R}})$ and $j_3:({{R}}\cup Y,Y)\hookrightarrow(X,X\setminus {{R}})$ be inclusions. $j_3$ is (weakly) excisive, so induces isomorphisms on cohomology. The *cohomology projection onto ${{R}}$* is $\pi_{{R}}^*=j_2^*\circ(j_3^*)^{-1}\circ j_1^*$. Using the cohomology projection, we can restrict the cohomology action of a dynamical system $f$ on $(X,Y;{{\mathbf{R}}})$ to each region. Given a word ${{\mathcal{R}}}$ on ${{\mathbf{R}}}$, we can obtain a kind of restricted cohomology action of $f^n$. Let $f$ be a semidynamical system on $(X,Y;{{\mathbf{R}}})$. For all ${{R}}\in{{\mathbf{R}}}$, let $f_{{R}}^*=\pi_{{R}}^*\circ f^*$. For all words ${{\mathcal{R}}}$ on ${{\mathbf{R}}}$ of length $n$, let $f_{{\mathcal{R}}}^*=f_{{{R}}_0}^*\circ f_{{{R}}_1}^*\circ\cdots\circ f_{{{R}}_{n-1}}^*$. The Lefschetz number of $f^*_{{\mathcal{R}}}$ is defined as follows.
\[defn:lefschetznumber\] The *Lefschetz number* of $f_{{\mathcal{R}}}^*$ is $L(f^*_{{\mathcal{R}}})=\sum_{i=0}^\infty (-1)^i {\mathrm{Tr}}(f^{(i)}_{{\mathcal{R}}})$. Using this, we can deduce the existence of periodic points with a given code. \[thm:rellef\] Let $f$ be a semidynamical system on $(X,Y;{{\mathbf{R}}})$. Suppose ${{\mathcal{R}}}$ is a word of length $n$ on ${{\mathbf{R}}}$, and $L(f^*_{{\mathcal{R}}})\neq 0$. Then there is a period-$n$ point $x$ such that $x$ is the limit of a sequence $(x_i)$ such that $f^j(x_i)\in {{R}}_{j{\;\mathrm{mod}\;}n}$ for all $j<i$. We write ${{\widehat{{\mathrm{Per}}}_{{{\mathcal{R}}}}(f)}}$ for the set of periodic points defined above. Note that if $x\in{{\widehat{{\mathrm{Per}}}_{{{\mathcal{R}}}}(f)}}$, then $f^j(x)\in{\mathrm{cl}({{R}}_{j{\;\mathrm{mod}\;}n})}$ for all $j$. We give a result showing how we can compare systems on different spaces. \[thm:lefschetzfunctorial\] Let $f$ and $g$ be dynamical systems on $(X_1,Y_1;{{\mathbf{R}}}_1)$ and $(X_2,Y_2;{{\mathbf{R}}}_2)$ respectively, and $r$ a morphism from $f$ onto $g$. Then $$\sum_{{{\mathcal{R}}}_1\in{{r_{{\mathbf{R}}}}}^{-1}({{\mathcal{R}}}_2)}L(f^*_{{{\mathcal{R}}}_1})=L(g^*_{{{\mathcal{R}}}_2})$$ Relative Nielsen Theory {#sec:nielsen} ----------------------- Throughout this section, by *curve* we mean a map $\alpha:(I,J){\longrightarrow}(X,Y)$, where $I$ is the unit interval. All homotopies of curves will be relative to endpoints, and we write $\alpha_0{\sim}\alpha_1$ if $\alpha_0$ and $\alpha_1$ are homotopic ${\mathrm{rel}}$ endpoints. Let $f$ be a dynamical system on $(X,Y;{{\mathbf{R}}})$, and $n\in{\mathbb{N}}$. Suppose $x_1,x_2\in{\mathrm{Per}}_n(f)$. We say $x_1$ is *Nielsen equivalent* to $x_2$, denoted $x_1{\simeq}_f x_2$, if there is a subset $J$ of $I$ and exact curves $\alpha_j:(I,J){{\longrightarrow}}(X,Y)$ from $f^j(x_1)$ to $f^j(x_2)$ for $j=0\ldots n-1$ such that $\alpha_{j+1\;{\;\mathrm{mod}\;}\;n}{\sim}f\circ\alpha_j$ for all $j$.
The family $(\alpha_j)$ is a *relating family*. If $x\in{\mathrm{Per}}_n(f)$, then $x$ is *Nielsen related to $Y$*, denoted $x{\simeq}_f Y$, if there is a relating family $(\alpha_j)$ for $x{\simeq}_f x$ consisting of exact curves $(I,J){{\longrightarrow}}(X,Y)$ for which $J\neq\emptyset$. If $x\not{\simeq}_f Y$, then we say $x$ is *Nielsen separated from $Y$*. Clearly ${\simeq}_f$ is an equivalence relation. Equivalence classes of ${\mathrm{Per}}_n(f)$ are called *$n$-Nielsen classes*. We will drop the subscript $f$ where this will cause no confusion. We have the following important lemma. \[lem:yrelated\] If $x_1{\simeq}x_2$, then $x_1$ is Nielsen related to $Y$ if and only if $x_2$ is Nielsen related to $Y$. If $x_1{\simeq}x_2$, and $x_1\in{{{\mathrm{Per}}_{{{\mathcal{R}}}}(f)}}$, then $x_2\in{{{\mathrm{Per}}_{{{\mathcal{R}}}}(f)}}$ or $x_1,x_2{\simeq}Y$. We can therefore speak of a Nielsen *class* $Q$ being *Nielsen related to $Y$* or *Nielsen separated from $Y$*. If $Q$ is Nielsen separated from $Y$, then all points of $Q$ have the same code, which we call the *code for $Q$*. We let ${N_{{{\mathcal{R}}}}(f)}$ be the number of essential Nielsen classes with code ${{\mathcal{R}}}$, and ${N_{n}(f)}$ the number of Nielsen classes with codes ${{\mathcal{R}}}$ of length $n$. \[thm:simopen\] Suppose $Q$ is a Nielsen class of $f$. Then $Q$ is open in ${\mathrm{Per}}_n(f)$. We can therefore define the index of a Nielsen class $Q$, denoted ${\mathrm{Ind}}(X,Q;f)$ or simply ${\mathrm{Ind}}(Q)$, to be the Lefschetz index ${\mathrm{Ind}}(X,U;f)$, where $U$ is an open neighbourhood of $Q$ containing no other fixed points in its closure. A Nielsen class $Q$ is *essential* if ${\mathrm{Ind}}(X,Q;f)\neq0$. We let ${N_{n}(f)}$ be the number of essential Nielsen classes separated from $Y$. We let ${\bar{N}_{n}(f)}$ be the total number of essential Nielsen classes, and ${N^Y_{n}(f)}$ the number of Nielsen classes related to $Y$.
${N^Y_{n}(f)}$ may be greater or less than the number of Nielsen classes of $f|_Y$. The following result is a localisation result for Nielsen theory. \[thm:mapchange\] Suppose $f$ and $g$ agree on $\bigcup{{\mathbf{R}}}$. Then ${N_{{{\mathcal{R}}}}(f)}={N_{{{\mathcal{R}}}}(g)}$ for all words ${{\mathcal{R}}}$ on ${{\mathbf{R}}}$. If there is a morphism from $f$ to $g$, then $f$ has more Nielsen classes than $g$ in the following sense. \[thm:nielsenfunctorial\] Let $f$ and $g$ be dynamical systems on $(X_1,Y_1;{{\mathbf{R}}}_1)$ and $(X_2,Y_2;{{\mathbf{R}}}_2)$ respectively, and $r$ a morphism from $f$ onto $g$. Then $$\sum_{{{\mathcal{R}}}_1\in{{r_{{\mathbf{R}}}}}^{-1}({{\mathcal{R}}}_2)}{N_{{{\mathcal{R}}}_1}(f)}\ge{N_{{{\mathcal{R}}}_2}(g)}$$ We have the following trivial corollary. \[cor:nielhomotopic\] If $g$ is homotopic to $f$, then ${N_{{{\mathcal{R}}}}(g)}={N_{{{\mathcal{R}}}}(f)}$ for all words ${{\mathcal{R}}}$, and $g$ has at least ${N_{n}(f)}$ points of period $n$.

Entropy {#sec:entropy}
-------

There are several ways of defining topological entropy. We will use the following definition based on $({\mathcal{U}},n,f)$-separated sets. Let ${\mathcal{U}}$ be an open cover of $X$. Points $x_1,x_2\in X$ are *$({\mathcal{U}},n,f)$-close* if for all $i<n$ there exists $U_i\in{\mathcal{U}}$ such that $f^i(x_1),f^i(x_2)\in U_i$. Points $x_1,x_2$ *$({\mathcal{U}},f)$-shadow* each other if they are $({\mathcal{U}},n,f)$-close for all $n$. A set $S$ is $({\mathcal{U}},n,f)$-separated if no two points of $S$ are $({\mathcal{U}},n,f)$-close. Let $s({\mathcal{U}},n,f)$ be the maximum cardinality of a $({\mathcal{U}},n,f)$-separated set. Then the *topological entropy* of $f$, written ${h_\mathit{top}}(f)$, is given by $${h_\mathit{top}}(f)=\sup_{\mathcal{U}}\limsup_{n{\longrightarrow}\infty}\frac{\log s({\mathcal{U}},n,f)}{n}$$ We have a classical result that ${h_\mathit{top}}(f)\ge\limsup_{n{\longrightarrow}\infty}\frac{\log N(f^n)}{n}=N_\infty(f)$.
(See Katok and Hasselblatt [@KatokHasselblatt95]). In other words, the growth rate of the number of essential fixed-point classes of $f^n$ is a lower bound for the topological entropy of $f$. For the relative case, we define the *asymptotic Nielsen number* $N_\infty(f)=\limsup_{n{\longrightarrow}\infty}\frac{\log N_n(f)}{n}$. We would like to show again that ${h_\mathit{top}}(f)\ge N_\infty(f)$. Unfortunately, problems can occur near $Y$, so we introduce an additional hypothesis. \[defn:expansiveperiodicity\] Let $f$ be a dynamical system on a regional space $(X,Y;{{\mathbf{R}}})$. We say $f$ has [*expansive periodicity near $Y$*]{} if there is a neighbourhood $U_0$ of $Y$ and an open cover ${\mathcal{U}}$ of $X$ such that whenever $x_1,x_2\in{{{\mathrm{Per}}_{{{\mathbf{R}}},n}(f)}}\cap U_0$ are Nielsen separated from $Y$, then either $f^i(x_1)$ and $f^i(x_2)$ are ${\mathcal{U}}$-separated for some $i$, or every curve from $x_1$ to $x_2$ in $U_0$ is homotopic to a curve from $x_1$ to $x_2$ which does not intersect $Y$. We can show that expansive periodicity near $Y$ is enough to show that the topological entropy is at least the asymptotic Nielsen number. \[thm:epte\] Let $f$ be a dynamical system on $(X,Y;{{\mathbf{R}}})$ with expansive periodicity near $Y$. Then ${h_\mathit{top}}(f)\ge {N_{\infty}(f)}$.

Trellises {#sec:Trellis}
=========

We now give a formal definition of trellises and two important classes of topological pairs. We also describe some important operations on these objects.

Trellises {#sec:trellis}
---------

A trellis $T$ in a surface with boundary $M$ is a collection $(T^P,T^V,T^U,T^S)$ of subsets of $M\setminus\partial M$ with the following properties.

1. $T^P$ is finite.
2. $T^U$ and $T^S$ are embedded copies of $T^P\times I$ such that each component of $T^U$ and of $T^S$ contains exactly one point of $T^P$.
3. $T^V=T^U\cap T^S$ is finite.

We write $T=(T^P,T^V,T^U,T^S)$.
We will write ${{U/S}}$ for a statement which holds for both the stable ($S$) and unstable ($U$) case. A trellis is *transverse* if intersections of $T^S$ and $T^U$ are topologically transverse. A *segment* is an interval in $T^U$ or $T^S$. Segments may be open or closed subsets of $T^{{U/S}}$, or neither. If $q_1$ and $q_2$ lie in the same component of $T^{{U/S}}$, we have an *open segment* $T^{{U/S}}(q_1,q_2)$ and a *closed segment* $T^{{U/S}}[q_1,q_2]$ between $q_1$ and $q_2$. An *initial segment* has endpoints $p$ and $q$ where $p\in T^P$. A *minimal segment* has endpoints $q_1,q_2\in T^V$, and $T^{{U/S}}(q_1,q_2)$ contains no vertices. A *maximal segment* has endpoints $q_1,q_2\in T^V$, such that $T^{{U/S}}[q_1,q_2]$ contains all vertices in that component of $T^{{U/S}}$. The *ends* of $T^{{U/S}}$ are the subsets of $T^{{U/S}}$ not contained in any maximal segment. For our purposes, only the maximal segments of $T^{{U/S}}$ are important, and so we will sometimes remove the ends of $T^{{U/S}}$ without explicitly mentioning this. We now define a natural class of maps between trellises: If $T_1$ is a trellis in $M_1$, $T_2$ is a trellis in $M_2$ and $h:M_1{\longrightarrow}M_2$ is a map, we say $h$ is a *trellis map* from $T_1$ to $T_2$ if

1. $h$ maps $T^P_1$ bijectively onto $T^P_2$.
2. $h(T^S_1)\subset T^S_2$.
3. $h^{-1}(T^U_2)\subset T^U_1$.

Two trellis maps $f_0,f_1$ from $T_1$ to $T_2$ are *homotopic* if there is a homotopy $f_t:f_0{\sim}f_1$ such that each $f_t$ is a trellis map. The most important trellis maps are those from a trellis $T$ to itself. If $f:M{\longrightarrow}M$ is such a trellis map, we say *$T$ is a trellis for $f$*. Clearly, if $f$ is a diffeomorphism with saddle periodic points $T^P$, and stable and unstable curves $T^S$ and $T^U$ with intersection $T^V$, then $(T^P,T^V,T^U,T^S)$ is a trellis for $f$.
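In practice, such a trellis can be approximated numerically from a concrete diffeomorphism. The sketch below (an illustration, not the construction used in this paper) grows a piece of the unstable curve $T^U$ of a saddle fixed point of the Hénon map by iterating a short segment along the unstable eigenvector; the standard parametrization $(x,y)\mapsto(a-x^2+by,x)$ with $a=1.4$, $b=0.3$ is an assumption here, distinct from the $(c,r)$ parameter form referred to later in the paper.

```python
import numpy as np

# Standard Henon map (x, y) -> (a - x^2 + b*y, x); this parametrization and the
# classical values a = 1.4, b = 0.3 are illustrative assumptions.
A, B = 1.4, 0.3

def henon(p):
    x, y = p
    return np.array([A - x * x + B * y, x])

def henon_inv(p):
    # Inverse map; iterating it would grow the stable curve T^S instead.
    x, y = p
    return np.array([y, (x - A + y * y) / B])

# Saddle fixed point: x = a - x^2 + b*x, i.e. x^2 + (1 - b)x - a = 0.
x0 = (-(1 - B) + np.sqrt((1 - B) ** 2 + 4 * A)) / 2
p0 = np.array([x0, x0])

# Jacobian at the fixed point; its eigenvectors span the local
# stable/unstable directions (|lambda_u| > 1 > |lambda_s| for a saddle).
J = np.array([[-2 * x0, B], [1.0, 0.0]])
evals, evecs = np.linalg.eig(J)
u_dir = evecs[:, np.argmax(np.abs(evals))]

# Grow a branch of T^U by iterating a tiny fundamental segment of the
# unstable eigendirection; columns of `unstable` are points on the curve.
seg = p0[:, None] + u_dir[:, None] * np.linspace(1e-6, 1e-5, 200)
branch = [seg]
for _ in range(8):
    seg = np.apply_along_axis(henon, 0, seg)
    branch.append(seg)
unstable = np.hstack(branch)
```

Intersecting the curve computed this way with a similarly grown stable branch would yield the vertex set $T^V$ of a finite trellis.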
We use the more general definition of trellis map to keep a formalism for comparing trellis maps for different trellises; in particular, we have a category of trellises and trellis maps.

Combinatorics of trellises {#sec:combinatorics}
--------------------------

Often the best way of describing a trellis is simply to draw it. However, it is also useful to have a combinatorial way of describing it. We shall only consider the simplest case, namely that of a trellis for a homoclinic tangle on a sphere with transverse intersections. In this case, $T=(T^P,T^V,T^U,T^S)$, where $T^P$ is a one-point set $\{p\}$, and $T^U$ and $T^S$ are embedded intervals. We need to choose orientations for $T^U$ and $T^S$. We now assign coordinates to each point of $T^V$. The *unstable coordinate* of $q\in T^V$, denoted $n_U(q)$, is $n$ if $q$ is the $n^\mathrm{th}$ point of $T^V$ in the positive direction from $p$ along $T^U$, and $-n$ if $q$ is the $n^\mathrm{th}$ point of $T^V$ in the negative direction from $p$. We define the *stable coordinate* $n_S(q)$ in a similar way. Merely giving the unstable and stable coordinates of points of $T^V$ is not enough to give a good description of a trellis. We also need to specify the *orientation* of the crossing of $T^U$ with $T^S$. The orientation at $q$, written ${\mathcal{O}}(q)$, is positive ($+$) if $T^U$ and $T^S$ intersect with the same orientation as they do at $p$, and negative ($-$) if they intersect with the opposite orientation. We can define a trellis up to ambient isomorphism just by giving $(n_U,n_S,{\mathcal{O}})$ for all points $q\in T^V$. This description will be called the $(U,S,{\mathcal{O}})$-coordinate description of $T$.

Cutting {#sec:cutting}
-------

Suppose $f:M{\longrightarrow}M$ has trellis $T$. We would like to obtain a map of pairs from $f$ which captures the action of $f$ on $T$. The process by which we do this is *cutting* along the unstable curves $T^U$. Let $M$ be a surface.
An embedded curve $\alpha$ is a *cutting curve* if $\alpha\cap\partial M\subset\partial\alpha$. A finite set of mutually disjoint cutting curves is a *cutting set*. A surface ${{{{\mathcal{C}}}_{\alpha}M}}$ is obtained by *cutting $M$ along $\alpha$* if there are curves $\alpha_1,\alpha_2:I{\longrightarrow}{{{{\mathcal{C}}}_{\alpha}M}}$ in the boundary of ${{{{\mathcal{C}}}_{\alpha}M}}$ which are disjoint except that we allow $\alpha_1(0)=\alpha_2(0)$ or $\alpha_1(1)=\alpha_2(1)$ (or both), and a map $q_\alpha:{{{{\mathcal{C}}}_{\alpha}M}}{\longrightarrow}M$ such that $q_\alpha$ is the quotient map for the relation $\alpha_1(t)\sim\alpha_2(t)$, and $\alpha(t)=q_\alpha(\alpha_1(t))=q_\alpha(\alpha_2(t))$. The quotient map $q_\alpha$ is called the *gluing map*. If $A$ is a cutting set, we can cut along all curves simultaneously to obtain a surface ${{{{\mathcal{C}}}_{A}M}}$ and gluing map $q_A$. It is a straightforward, though messy, exercise to show that cutting surfaces are unique up to homeomorphism. Cutting is shown pictorially in [Figure \[fig:cutting\]]{}. ![Cutting along curves[]{data-label="fig:cutting"}](pjcollins-cutting.eps) The gluing map takes ${{{{\mathcal{C}}}_{A}M}}\setminus q_A^{-1}(A)$ homeomorphically onto $M\setminus A$. If $x\in A$ then typically $x$ has two preimages under $q_A$, and a neighbourhood $U$ such that $q_A^{-1}(U)$ is homeomorphic to two disjoint copies of the upper-half plane $H$ (and $q_A^{-1}(x)$ lies on the boundaries of these half-planes). However, if for some arc $\alpha$, $x\in \partial\alpha\setminus\partial M$, then $x$ has a neighbourhood $U$ such that $q_A^{-1}(U)$ is homeomorphic to a single half-plane. We extend cutting to topological pairs as follows. If $(M,B)$ is a topological pair, and $A$ is a collection of cutting curves, then ${{{{\mathcal{C}}}_{A}(M,B)}}$ is the pair $({{{{\mathcal{C}}}_{A}M}},q_A^{-1}(B))$. 
Given a function $f:M_1\rightarrow M_2$, and cutting sets $A_1$ for $M_1$ and $A_2$ for $M_2$, we would like to know when we can find a map ${{{{\mathcal{C}}}\!{f}}}:{{{{\mathcal{C}}}_{A_1}M_1}}{\longrightarrow}{{{{\mathcal{C}}}_{A_2}M_2}}$ such that $q_{A_2}\circ{{{{\mathcal{C}}}\!{f}}}=f\circ q_{A_1}$. The following lemma gives such a condition. Suppose $M_1$ and $M_2$ are surfaces, $A_1$ and $A_2$ are cutting sets in $M_1$ and $M_2$ respectively, and $f:M_1{\longrightarrow}M_2$ is a map such that $f^{-1}(A_2)\subset A_1$. Then there is a map ${{{{\mathcal{C}}}\!{f}}}:{{{{\mathcal{C}}}_{A_1}M_1}}{\longrightarrow}{{{{\mathcal{C}}}_{A_2}M_2}}$ such that $q_{A_2}\circ{{{{\mathcal{C}}}\!{f}}}=f\circ q_{A_1}$. Further, if $f(B_1)\subset B_2$, then ${{{{\mathcal{C}}}\!{f}}}(q_{A_1}^{-1}(B_1))\subset q_{A_2}^{-1}(B_2)$. If $q_{A_1}(x)\in f^{-1}(A_2^C)$, then we can take ${{{{\mathcal{C}}}\!{f}}}(x)=q_{A_2}^{-1}(f(q_{A_1}(x)))$. If $f(q_{A_1}(x))$ lies at a point of $A_2$ with one preimage, take ${{{{\mathcal{C}}}\!{f}}}(x)=q_{A_2}^{-1}(f(q_{A_1}(x)))$. Otherwise, let $V$ be a neighbourhood of $f(q_{A_1}(x))$ such that $q_{A_2}^{-1}(V)$ consists of two disjoint copies of $H$. Let $\hat{U}$ be a semicircular neighbourhood of $x$ such that $q_{A_1}$ maps $\hat{U}$ homeomorphically onto $U$, a subset of $f^{-1}(V)$. Let $W=U\setminus A_1$. $W$ is connected, so $f(W)$ is connected, and since $f(W)\subset A_2^C$, $q_{A_2}^{-1}(f(W))$ is connected, so lies in one of the components of $q_{A_2}^{-1}(V)$. Take ${{{{\mathcal{C}}}\!{f}}}(x)$ to be the preimage of $f(q_{A_1}(x))$ under $q_{A_2}$ in this component. Clearly the map so defined is continuous at $x$, and ${{{{\mathcal{C}}}\!{f}}}({{{{\mathcal{C}}}_{A_1}B_1}})\subset{{{{\mathcal{C}}}_{A_2}B_2}}$. Now suppose $T=(T^P,T^V,T^U,T^S)$ is a trellis for a map $f$ on $M$. We can cut along $T^U$ to obtain a surface ${{{{\mathcal{C}}}_{T^U}M}}$.
We can also take the preimage of $T^S$ under the gluing map, and obtain a pair ${{{{\mathcal{C}}}T}}=({{{{\mathcal{C}}}_{T^U}M}},q_{T^U}^{-1}(T^S))$. For convenience, we will often write ${{{{\mathcal{C}}}T}}=(X_T,Y_T)$. An example of the cutting procedure is shown in [Figure \[fig:cuttrellis\]]{}.

![Cutting along the unstable segment[]{data-label="fig:cuttrellis"}](pjcollins-cuttrellis.eps)

Since $f^{-1}(T^U)\subset T^U$, we have a map ${{{{\mathcal{C}}}\!{f}}}:{{{{\mathcal{C}}}_{T^U}M}}{\longrightarrow}{{{{\mathcal{C}}}_{T^U}M}}$, and since $f(T^S)\subset T^S$, ${{{{\mathcal{C}}}\!{f}}}$ is a map of pairs ${{{{\mathcal{C}}}\!{f}}}:{{{{\mathcal{C}}}T}}{\longrightarrow}{{{{\mathcal{C}}}T}}$. More generally, if $f:M_1{\longrightarrow}M_2$ is a trellis map from $T_1$ to $T_2$, then we can define ${{{{\mathcal{C}}}\!{f}}}:{{{{\mathcal{C}}}T_1}}{\longrightarrow}{{{{\mathcal{C}}}T_2}}$. Since ${{{{\mathcal{C}}}\!{(f\circ g)}}}={{{{\mathcal{C}}}\!{f}}}\circ{{{{\mathcal{C}}}\!{g}}}$, cutting induces a functor from the trellis category to that of topological pairs. We now give some trivial, but fundamentally important properties of the $T^U$-cutting projection $q_{T^U}$. \[prop:cuttingproperties\]

1. $q_{T^U}$ maps regions of $(M,T^U\cup T^S)$ bijectively with regions of ${{{{\mathcal{C}}}T}}$.
2. $f$ has the same periodic orbits as ${{\mathcal{C}}}{f}$, except perhaps for those lying on $T^U$.
3. $q_{T^U}$ is a finite-to-one semiconjugacy, and so ${h_\mathit{top}}(f)={h_\mathit{top}}({{\mathcal{C}}}{f})$.

Cross-Cut Surfaces and Divided Graphs {#sec:collapse}
-------------------------------------

The relationship between graph maps and surface homeomorphisms has been studied in detail, particularly with regard to Thurston’s train tracks and the classification of surface diffeomorphisms.
More recently, Bestvina and Handel [@BestvinaHandell95], Franks and Misiurewicz [@FranksMisiurewicz93] and Los [@Los93] produced algorithms for computing the dynamics of isotopy classes of homeomorphisms relative to a finite invariant set. When studying trellises, we will need to consider *divided graphs*, where we have an invariant subset of the vertex set. The regions of a divided graph obtained from a trellis are typically very simple (often trees with two or three vertices), making these graphs particularly easy to study. A *cross-cut surface* is a topological pair $(M,A)$, where $M$ is a surface with nonempty boundary, and $A$ is a finite union of disjoint embedded intervals $\alpha$ such that $\alpha\cap\partial M=\partial\alpha$. $A$ is a *cross-cutting set* and curves $\alpha\in A$ are *cross-cuts*. When cutting along $T^U$, all minimal segments of $T^S$ lift to cross-cuts of ${{{{\mathcal{C}}}_{T^U}M}}$. If $T$ is a transverse trellis, the endpoints of these lifts are disjoint, so ${{{{\mathcal{C}}}T}}$ is a cross-cut surface. The main property of cross-cut surfaces is that they fibre nicely over graphs. A divided graph is a topological pair $(G,W)$, where $G$ is a graph (simplicial 1-complex) and $W$ is a subset of ${\mathrm{Ver}}(G)$, the vertex set of $G$. We now show that for any pair $(M,A)$ where $M$ is a surface and $A$ consists of nicely embedded curves, there is an exact, homotopy invertible map $r$ to a divided graph. \[thm:collapse\] Let $M$ be a surface such that $H_2(M,\emptyset)=0$, and $A\subset M$ a set of embedded compact intervals such that $A\cap\partial M$ has only a finite number of components. Then there is a divided graph $(G,W)$ and an exact map $(M,A){{\longrightarrow}}(G,W)$ with a homotopy inverse. If $(M,A)$ is a cross-cut surface, then the homotopy inverse can be made an embedding and all homotopies exact. Let $(X,W)$ be the quotient space obtained by collapsing each component of $A$ to a point, and $q$ the quotient map.
Clearly $q$ is exact, and since neighbourhoods of $A$ are topological discs, $q$ has a homotopy inverse $j$. Further, if $A$ consists of cross-cuts, this homotopy inverse can be made an embedding, as shown in [Figure \[fig:localretract\]]{}.

![Exact deformation retract of a cross-cut to a point[]{data-label="fig:localretract"}](pjcollins-localretract.eps)

Choose a simplicial subdivision of $X$, such that no simplex contains more than one point of $W$. Since $X$ is the quotient of a surface by the curves $A$, each 1-simplex of $X$ is contained in no more than two 2-simplexes of $X$. Then any two vertices lying in the same component of $X\setminus W$ can be joined by an edge-path which does not touch $W$. Let $Y$ be a minimal 1-complex with the property that any two vertices in the same component of $X\setminus W$ lie in the same component of $Y$. By the minimality of $Y$, each component of $Y$ is contractible, so $H_2(X,Y\cup W)=0$. Hence there exists an edge $e$ such that $e\not\in Y$ and $e$ is an edge of exactly one 2-simplex $s$ of $X$. Let $X_1$ be the simplicial complex formed by removing $e$ and $s$ from $X$. There is a strong deformation retract $r_1:X{{\longrightarrow}}X_1$ such that $r_1(s\cup e)\subset \partial s\setminus e$, and both $r_1$ and the corresponding inclusion $i_1$ are exact. By iterating this procedure to remove one simplex at a time, we obtain the graph $(G,W)$. Since the homotopy inverse for $q$ can be made an exact embedding if $A$ consists of cross-cuts, and each inclusion is an exact embedding, we obtain the required homotopy inverse in the case where $A$ consists of cross-cuts. Thus there are maps $r:(M,A){{\longrightarrow}}(G,W)$ and $s:(G,W){{\longrightarrow}}(M,A)$ such that $r\circ s={id}$ and $s\circ r{\sim}{id}$. If ${{\mathbf{R}}}$ is a set of disjoint regions of $(M,A)$, and ${{\mathbf{R}}}_G=\{r(R):R\in{{\mathbf{R}}}\}$, then $r$ is a region-preserving map $(M,A;{{\mathbf{R}}}){{\longrightarrow}}(G,W;{{\mathbf{R}}}_G)$.
Suppose $f$ is a dynamical system on $(M,A;{{\mathbf{R}}})$. Let $g=r\circ f\circ s$. Clearly $r$ is a morphism from $f$ to $g$, so we can study the dynamics of $f$ by studying the dynamics of $g$ using relative Nielsen theory. If $A$ consists of cross-cuts, then since $s\circ g\circ r=s\circ r\circ f\circ s\circ r{\sim}{id}\circ f\circ{id}=f$, the map $s$ is also a morphism from $g$ to $f$. In this case, the Nielsen classes of $f$ and $g$ are equivalent. In the ideal situation, we can find a divided graph ${\mathcal{G}T}$ and a map ${\mathcal{G}f}$ such that all periodic points of ${\mathcal{G}f}$ persist under homotopy.

Graph Maps {#sec:graph}
----------

Under certain conditions, all, or at least all but finitely many, of the periodic points of a system on a graph are unremovable under homotopy. If there is a morphism from a dynamical system on some other space to such a map, we obtain a lot of information about the periodic points of this system. One particularly appealing feature of maps on graphs is that we can easily describe homotopy classes combinatorially using simplicial maps. Let $G$ be a graph, $\tilde{G}$ a subdivision of $G$, and $g:\tilde{G}{\longrightarrow}G$ a simplicial map. We call such a map $g$ a *graph map*. Let $e$ be an edge of $G$, such that $e=\tilde{e}_1\tilde{e}_2\ldots\tilde{e}_m$, where the $\tilde{e}_i$ are edges of $\tilde{G}$. Then we write $g(e)=g(\tilde{e}_1)g(\tilde{e}_2)\ldots g(\tilde{e}_m)=e_1e_2\ldots e_n$, the *edge-path action* of $g$. If $e_{i+1}=\bar{e}_i$ for some $i$, then we say that $g$ *folds* the edge $e$. Thus, graph maps either map an edge $e$ to a vertex, or stretch it in a piecewise-linear way over an edge-path $e_1e_2\ldots e_n$ so that the only points of local non-injectivity on $e$ are isolated preimages of vertices. Dynamics of graph maps can be represented by the *transition matrix*. \[defn:transitionmatrix\] Let $g$ be a graph map of $G$ and let $e_1,\ldots,e_m$ be the edges of $G$.
Let $A$ be the $m\times m$ matrix with $i,j$-th element $a_{ij}$ equal to the number of times $g$ maps edge $e_i$ across $e_j$. $A$ is the *transition matrix* for $g$. If $A$ is the transition matrix for $g$, then we can show that $A^n$ is the transition matrix for $g^n$. $(A^n)_{ij}$ measures the number of times $g^n$ maps edge $e_i$ across $e_j$. There must be one periodic point of $g$ of period $n$ in $e_i$ for each time $g^n$ maps $e_i$ across $e_i$ (except in the degenerate case where $g^n(e_i)=e_i$, where all points are periodic by linearity). Thus there are $(A^n)_{ii}$ period-$n$ points of $g$ in $e_i$. Naively, one would expect ${\mathrm{Tr}}(A^n)=\sum_{i=1}^m(A^n)_{ii}$ to give the total number of points of period $n$ for $g$. Unfortunately, periodic points in ${\mathrm{Ver}}(G)$ may be counted several times, or not at all. However, the error between ${\mathrm{Tr}}(A^n)$ and ${\#}{{\mathrm{Per}}_n(g)}$ is bounded by a constant $c$ independent of $n$. It is well known that the topological entropy of $g$ is given by the growth rate of the number of periodic points of $g$, $\limsup_{n\rightarrow\infty}\frac{1}{n}\log{\mathrm{Tr}}(A^n)$, and is equal to $\log\lambda_{\max}(A)$, the logarithm of the Perron-Frobenius eigenvalue of $A$. $A$ determines a graph with $a_{ij}$ edges from vertex $i$ to vertex $j$, and the dynamics of $g$ are represented by the edge shift on this graph. Now suppose $(G,W)$ is a divided graph, ${{\mathbf{R}}}$ is a set of disjoint regions, and $g$ is a graph map of $(G,W)$. We can extend the definition of transition matrices to take into account the regions in ${{\mathbf{R}}}$ as follows: \[defn:transitionmatrices\] For all regions $R\in{{\mathbf{R}}}$, define an $m\times m$ matrix $P_R$ by $(P_R)_{ii}=1$ if edge $e_i\in R$ and $(P_R)_{ij}=0$ otherwise. Let $A_R=P_R A$, and $A_{{\mathbf{R}}}=\sum_{R\in{{\mathbf{R}}}}A_R$.
If ${{\mathcal{R}}}$ is a word on ${{\mathbf{R}}}$ of length $n$, let $A_{{\mathcal{R}}}=A_{R_0}A_{R_1}\cdots A_{R_{n-1}}$, the *transition matrix for the code ${{\mathcal{R}}}$*. When writing $A_{{\mathbf{R}}}$ we will typically drop rows and columns corresponding to edges not in $\bigcup{{\mathbf{R}}}$, and draw a horizontal line between rows corresponding to edges in different regions. ${\mathrm{Tr}}(A_{{\mathcal{R}}})$ gives the number of points of period $n$ for $g$ with code ${{\mathcal{R}}}$ (except for small errors occurring at vertices). It is easy to check that $$\sum_{{{\mathcal{R}}}\in W^n({{\mathbf{R}}})}{\mathrm{Tr}}(A_{{\mathcal{R}}})={\mathrm{Tr}}(A_{{\mathbf{R}}}^n)\le{\mathrm{Tr}}(A^n)$$ where $W^n({{\mathbf{R}}})$ is the set of words on ${{\mathbf{R}}}$ of length $n$. Again, ${\mathrm{Tr}}(A_{{\mathbf{R}}}^n)$ counts the number of points in ${{{\mathrm{Per}}_{{{\mathbf{R}}},n}(g)}}$, up to an error which is constant in $n$. We have shown that the periodic points of graph maps are easy to calculate. We now define a class of graph maps, called *tight graph maps*, which have minimal dynamics in the homotopy class. \[defn:tight\] A graph map $g:(G,W){\longrightarrow}(G,W)$ is ${{\mathbf{R}}}$-*tight* if for all regions $R\in{{\mathbf{R}}}$, for all edges $e$ in $R$, $g(e)$ does not fold, and if $e_1$ and $e_2$ are distinct edges from the same vertex $v$ in $R\setminus W$, then $g(e_1)$ and $g(e_2)$ have different initial edges. Not every map of a divided graph is homotopic to a tight graph map, but all the maps of cross-cut surfaces we study retract via exact homotopies onto a tight graph map, and we conjecture that this is true in general. The fundamental theorem on tight graph maps is that the periodic points lie in different Nielsen classes, and that, typically, these Nielsen classes are essential. \[thm:rexpand\] Suppose $g$ is ${{\mathbf{R}}}$-tight and $x_1,x_2\in{{{\mathrm{Per}}_{{{\mathbf{R}}},n}(g)}}$.
Then either $x_1$ and $x_2$ lie in different Nielsen classes, or there is an edge-path joining $x_1$ to $x_2$ which is fixed by $g^n$. Further, if $x\in{{{\mathrm{Per}}_{{{\mathbf{R}}},n}(g)}}$, then either ${\mathrm{Ind}}(x;g)\neq0$ or $x\in{\mathrm{Ver}}(G)$. Suppose $x_1\neq x_2$ are Nielsen-equivalent, and $\alpha_j:(I,J){{\longrightarrow}}(G,W)$ is a relating family for $x_1{\simeq}x_2$. Suppose $J\neq\emptyset$. Let $s=\inf J$ and $y=\alpha_0(s)$. Since $x_1\not\in W$, $s>0$. Let $\beta_j:(I,\{1\}){{\longrightarrow}}(G,W)$ be given by $\beta_j(t)=\alpha_j(st)$. Then $(\beta_j)$ is a relating family for $x_1{\simeq}y$, and further, there are regions $R_j\in{{\mathbf{R}}}$ such that $\beta_j(I)\subset R_j$. If $J=\emptyset$, then we let $\beta_j=\alpha_j$, so again there are regions $R_j\in{{\mathbf{R}}}$ such that $\beta_j(I)\subset R_j$. By homotoping if necessary to remove any folds, we can assume that all curves $\beta_j$ are locally injective. Since $g$ is ${{\mathbf{R}}}$-tight, $g\circ\beta_j$ is locally injective, so, up to parameterisation, $g\circ\beta_j=\beta_{j+1{\;\mathrm{mod}\;}n}$. Hence $g^n(\beta_0(I))=\beta_0(I)$, so $g^n\circ\beta_0{\sim}\beta_0$. Thus $g^n\circ\beta_0=\beta_0$, and so all points of $\beta_0(I)$ are fixed by $g^n$. If $x$ is an isolated repelling fixed point of the graph map $g^n$ and $x$ does not lie on a vertex of $G$, then ${\mathrm{Ind}}(G,x;g^n)=\pm1$.

Entropy of Trellis Maps {#sec:trellisentropy}
-----------------------

We now show that we can find a lower bound for the entropy of a trellis map in terms of the asymptotic Nielsen number. By [Theorem \[thm:epte\]]{}, we need only show that ${{{{\mathcal{C}}}\!{f}}}$ has expansive periodicity near $Y_T$. \[thm:trellisentropy\] If $f$ is a homeomorphism with trellis $T$ such that $T^P$ consists of hyperbolic periodic points, then ${{{{\mathcal{C}}}\!{f}}}:(X_T,Y_T){\longrightarrow}(X_T,Y_T)$ has expansive periodicity near $Y_T$.
Since $Y_T$ is the inverse image under the gluing map of a submanifold of the stable manifold of $f$, $Y_T$ has a neighbourhood $W$ for which every point of $W\setminus Y_T$ eventually leaves $W$. Since $Y_T$ is a union of disjoint copies of an interval with endpoints in $\partial X_T$, we can find neighbourhoods $V_1$, $V_2$ and $V_3$ of $Y_T$, each of which deformation retracts onto $Y_T$, such that ${\mathrm{cl}(V_1)}\subset V_2$, ${\mathrm{cl}(V_2)}\subset V_3$, ${{{{\mathcal{C}}}\!{f}}}(V_1)\subset V_2$, and every point of $V_1\setminus Y_T$ eventually leaves $V_1$. Choose an open cover ${\mathcal{U}}$ containing the components of $V_1$ and $V_2\setminus Y_T$, and such that for all other $U\in{\mathcal{U}}$, $U\cap V_1=\emptyset$ and $U$ intersects at most one component of $V_2\setminus Y_T$ (this is where we need ${\mathrm{cl}(V_2)}\subset V_3$). Let $U_0=V_1$. We claim that ${\mathcal{U}}$ and $U_0$ are the required open cover and neighbourhood of $Y_T$. First notice that if $x_1$ and $x_2$ lie in the same component of $V_1$, but different components of $V_1\setminus Y_T$ (equivalently, every path from $x_1$ to $x_2$ in $V_1$ crosses $Y_T$), then $f(x_1)$ and $f(x_2)$ lie in different components of $V_2$. Suppose $x_1,x_2\in U_0\setminus Y_T$, and $f^j(x_1)$ and $f^j(x_2)$ are ${\mathcal{U}}$-close for all $j$. Then there exists a least $i$ such that either $f^i(x_1)$ or $f^i(x_2)$ is not in $U_0=V_1$. By minimality of $i$, $f^i(x_1),f^i(x_2)\in V_2$. Since $f^i(x_1)$ and $f^i(x_2)$ are ${\mathcal{U}}$-close, they must lie in the same component of $V_2\setminus Y_T$. This means that $x_1$ and $x_2$ lie in the same component of $V_1\setminus Y_T$, and since components of $V_1$ are simply connected, every path in $V_1$ from $x_1$ to $x_2$ is homotopic to one which does not intersect $Y_T$. We can use this to show that the entropy of a map with trellis $T$ is at least the asymptotic Nielsen number of ${{{{\mathcal{C}}}\!{f}}}$.
\[cor:trellisentropy\] If $f$ is a homeomorphism with transverse trellis $T$ such that $T^P$ consists of hyperbolic periodic points, then ${h_\mathit{top}}(f)\ge N_\infty({{{{\mathcal{C}}}\!{f}}})$. ${h_\mathit{top}}(f)={h_\mathit{top}}({{{{\mathcal{C}}}\!{f}}})$ since the gluing map is a finite-to-one surjective semiconjugacy, and ${h_\mathit{top}}({{{{\mathcal{C}}}\!{f}}})\ge N_\infty({{{{\mathcal{C}}}\!{f}}})$ by [Theorem \[thm:trellisentropy\]]{} and [Theorem \[thm:epte\]]{}. If the homeomorphism $f$ for the trellis $T$ is clear, we will sometimes call $N_\infty({{\mathcal{C}}}{f})$ the *entropy of $T$*.

Examples {#sec:Examples}
========

First we give a familiar example, the Smale horseshoe map. Recall that the Smale horseshoe map $f:S^2{\longrightarrow}S^2$ maps the stadium-shaped area of [Figure \[fig:smale\]]{} into itself as shown, mapping the square $S$ linearly across itself with uniform expansion in the horizontal direction and contraction in the vertical direction.

![Smale horseshoe Map[]{data-label="fig:smale"}](pjcollins-smale.eps)

$f$ maps the semicircular region $D_1$ into itself so that all points in $D_1$ are attracted to a fixed point, and maps $D_2$ into $D_1$. Outside the stadium, $f$ has a single repelling fixed point. There is a hyperbolic saddle point in $S$, and the stable and unstable curves form a homoclinic tangle. The *horseshoe trellis* $T_2$ is the subset of the tangle shown in [Figure \[fig:xsmale\](a)]{}. Except for two fixed points outside $S$, the nonwandering set $\Lambda$ of $f$ lies in the regions $R_1$ and $R_2$.
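The codes and itineraries of Section \[sec:lefschetz\] can be made concrete on a piecewise-linear model of the horseshoe. The sketch below is illustrative only: the expansion/contraction factor $3$ and the orientation-reversing right-hand branch are modelling assumptions, not data taken from $f$.

```python
import numpy as np

# Piecewise-linear model of the horseshoe on the unit square; the factor 3
# and the orientation-reversing right branch are modelling assumptions.
def horseshoe(p):
    x, y = p
    if x <= 1 / 3:                    # left vertical strip -> region R1
        return np.array([3 * x, y / 3])
    elif x >= 2 / 3:                  # right strip, orientation reversed -> R2
        return np.array([3 - 3 * x, 1 - y / 3])
    raise ValueError("point leaves R1 and R2 (it escapes to the attracting disc)")

def itinerary(p, n):
    """Code R_0 R_1 ... R_{n-1} of p with respect to the two vertical strips."""
    codes = []
    for _ in range(n):
        codes.append("R1" if p[0] <= 1 / 3 else "R2")
        p = horseshoe(p)
    return codes

# Solving the two linear branches gives a period-2 orbit with code R1 R2:
# x = 3/10 maps to 9/10 and back, so (3/10, 9/10) <-> (9/10, 3/10).
p = np.array([0.3, 0.9])
```

In this model every finite word on $\{R_1,R_2\}$ is realised by a periodic orbit, matching the shift on two symbols forced by the horseshoe trellis.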
![(a) Horseshoe trellis $T_2$, (b) Cut surface ${{{{\mathcal{C}}}T_2}}$, (c) Embedded graph ${\mathcal{G}T_2}\subset{{{{\mathcal{C}}}T_2}}$, (d) Graph ${\mathcal{G}T_2}$ with edges labeled[]{data-label="fig:xsmale"}](pjcollins-xsmale.eps)

The $(U,S,{\mathcal{O}})$-coordinates for the vertices are $$(0,0,+),(1,7,-),(2,4,+),(3,3,-),(4,2,+),(5,5,-),(6,6,+),(7,1,-)$$ To study the dynamics, we first cut along the unstable set $T_2^U$ of the trellis (dropping the ends) as shown in [Figure \[fig:xsmale\](b)]{}. This gives us a topological pair ${{{{\mathcal{C}}}T_2}}=(X_{T_2},Y_{T_2})$, where $X_{T_2}$ is the surface obtained by the cutting and $Y_{T_2}$ is the subset of $X_{T_2}$ corresponding to the stable set $T_2^S$ of the trellis. $f$ naturally induces a map ${{\mathcal{C}}}{f}$ of ${{{{\mathcal{C}}}T_2}}$. Let $G_{T_2}$ be the graph embedded in ${{{{\mathcal{C}}}T_2}}$ as shown in [Figure \[fig:xsmale\](c)]{}. Letting $W_{T_2}=G_{T_2}\cap Y_{T_2}$, we obtain a topological pair ${\mathcal{G}T_2}=(G_{T_2},W_{T_2})$ onto which we can deformation retract $(X_{T_2},Y_{T_2})$. This collapsing induces a map ${\mathcal{G}f}$ on ${\mathcal{G}T_2}$. Just by knowing the action of $f$ on $T_2^S$, we can deduce the action of ${\mathcal{G}f}$ on $W_{T_2}$. In this case we have $$p_0,p_3,p_4\mapsto p_0,\ p_1,p_2,p_5\mapsto p_3 {\mathrm{\ and\ }}p_6\mapsto p_4$$ Since ${\mathcal{G}T_2}$ is a tree, this determines the homotopy class of ${\mathcal{G}f}$ as a self-map of ${\mathcal{G}T_2}$ completely. A tight graph map in the homotopy class of ${\mathcal{G}f}$ maps the arcs corresponding to regions $R_1$ and $R_2$ across each other. Using the labeling of [Figure \[fig:xsmale\](d)]{}, we have $$a\mapsto abc {\mathrm{\ and\ }}c\mapsto\bar{c}\bar{b}\bar{a}$$ Thus ${\mathcal{G}T_2}$ must have a subset on which ${\mathcal{G}f}$ is conjugate to the one-sided shift on two symbols. Therefore, the trellis forces dynamics conjugate to the shift on two symbols.
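The region-restricted transition matrix of this tight map makes the two-symbol shift explicit: the edge $a\subset R_1$ crosses $a$ and $c$ once each, and likewise for $c\subset R_2$. A quick numerical check (numpy assumed):

```python
import numpy as np

# Transition matrix restricted to the edges {a, c} (a in R1, c in R2):
# a -> a b c crosses a once and c once; c -> c~ b~ a~ does the same.
A = np.array([[1, 1],
              [1, 1]])

# Tr(A^n) counts the period-n points with codes on {R1, R2}
# (up to errors at vertices); here it is exactly 2^n, the count
# for the full shift on two symbols.
counts = [int(np.trace(np.linalg.matrix_power(A, n))) for n in range(1, 6)]

# Entropy bound: the logarithm of the Perron-Frobenius eigenvalue of A.
entropy = np.log(max(abs(np.linalg.eigvals(A))))
```

Here `counts` is `[2, 4, 8, 16, 32]` and `entropy` is $\log 2$, the bound stated next.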
In particular, any map with the same trellis as the Smale horseshoe $f$ must have entropy ${h_\mathit{top}}\ge\log2$. \[example:iterate\] Again consider the horseshoe trellis $T_2$, and let $f$ be the second iterate of the horseshoe map. One might expect the homotopy class of $f$ to have *more* entropy than that of the horseshoe map itself. However, ${\mathcal{G}f}$ maps all points $p_0\ldots p_6$ to $p_0$ so is homotopic to a constant map. Thus we obtain no information about the dynamics. We can find diffeomorphisms homotopic to $f$ with this trellis and arbitrarily small entropy. \[example:trivial\] Consider the trellis $T_1$ of [Figure \[fig:xtype1\](a)]{}, which is a subset of the horseshoe trellis, and let $f$ be the horseshoe map. Cutting along the unstable manifolds we obtain the surface ${{{{\mathcal{C}}}T_1}}$ shown in [Figure \[fig:xtype1\](b)]{}. ![(a) Trellis $T_1$, (b) Surface ${{{{\mathcal{C}}}T_1}}$[]{data-label="fig:xtype1"}](pjcollins-xtype1.eps) The components $Y_0$, $Y_1$ and $Y_2$ of $Y_{T_1}$ all map to $Y_0$ under ${{\mathcal{C}}}{f}$, so ${{\mathcal{C}}}{f}$ is homotopic to a constant. Therefore, our topological methods give no interesting dynamics. An even more extreme example is given by the trellis $T_0$ of [Figure \[fig:xtype0\](a)]{}. Cutting along the unstable manifolds we obtain the surface ${{{{\mathcal{C}}}T_0}}$ of [Figure \[fig:xtype0\](b)]{}. ![Trellis $T_0$ and cut surface ${{{{\mathcal{C}}}T_0}}$[]{data-label="fig:xtype0"}](pjcollins-xtype0.eps) All maps on ${{{{\mathcal{C}}}T_0}}$ are homotopic to a constant, so again, applying our topological methods to any map with this trellis yields no information. In each of these cases, we know that if $f$ is a diffeomorphism with this trellis, ${h_\mathit{top}}(f)>0$. However, we can find diffeomorphisms with arbitrarily small entropy. The type-$3$ trellis $T_3$ is the simplest nontrivial trellis other than the horseshoe.
This trellis occurs in the Hénon map for a range of parameter values; a particular case is shown in [Figure \[fig:henon\]]{}. ![Trellis in the Hénon map with parameters $c=-\frac{4}{5}$ and $r=\frac{3}{2}$[]{data-label="fig:henon"}](pjcollins-henon.eps) This figure was drawn using the DsTool implementation of the algorithm of Krauskopf and Osinga [@KrauskopfOsinga98]. The trellis is shown in [Figure \[fig:xtype3\](a)]{}. ![(a) Type $3$ trellis $T_3$, (b) Graph ${\mathcal{G}T_3}$[]{data-label="fig:xtype3"}](pjcollins-xtype3.eps) The $(U,S,{\mathcal{O}})$-coordinates for the vertices are $$\begin{aligned} (0,0,+),(1,9,-),(2,6,+),(3,5,-),(4,4,+),(5,3,-),\\ (6,2,+),(7,7,-),(8,8,+),(9,1,-)\end{aligned}$$ and the vertices map $(1,9,-)\mapsto(3,5,-)\mapsto(5,3,-)\mapsto(9,1,-)$. Cutting along the unstable manifold, we obtain the surface ${{{{\mathcal{C}}}T_3}}$ and the embedded graph ${\mathcal{G}T_3}$ as shown in [Figure \[fig:xtype3\](b)]{}. The action on the distinguished vertex set is $$p_0,p_4,p_5\mapsto p_0,\;p_1,p_2,p_6\mapsto p_3,\;p_3\mapsto p_4,\;p_7\mapsto p_5 {\mathrm{\ and\ }}p_8\mapsto p_7$$ The graph is a tree, and the regions $R_1$ and $R_2$ are expanding under the tight map $$a\mapsto abc_1\bar{c}_2, \; b\mapsto\cdot, \; c_1\mapsto c_2, \; c_2\mapsto c_3 {\mathrm{\ and\ }}c_3\mapsto abc_1$$ This gives transition matrix (on $\{a,c_1,c_2,c_3\}$) $$A=\left( \begin{array}{cccc} 1&1&1&0\\ \hline 0&0&1&0\\ 0&0&0&1\\ 1&1&0&0\\ \end{array} \right)$$ The horizontal line in the matrix separates the rows corresponding to edges of ${{R}}_1$ from edges of $R_2$. The edge shift for this transition matrix is given in [Figure \[fig:shift3\]]{}, and since $a\subset R_1$ and $c_1,c_2,c_3\subset R_2$ we obtain a sofic shift on regions.
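The spectral data of this transition matrix can be computed numerically. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Transition matrix for the type-3 trellis on the edges {a, c1, c2, c3}
A = np.array([[1, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)

lam_max = max(abs(np.linalg.eigvals(A)))  # Perron-Frobenius eigenvalue
h_bound = float(np.log(lam_max))          # entropy lower bound, ~0.53

# lam_max is a root of the nontrivial factor x^3 - x^2 - 2 of the
# characteristic polynomial
residual = lam_max**3 - lam_max**2 - 2
```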
![Shift for $T_3$[]{data-label="fig:shift3"}](pjcollins-shift3.eps) The characteristic polynomial of $A$ is $\lambda(\lambda^3-\lambda^2-2)$, and the Perron-Frobenius eigenvalue $\lambda_{\max}$ of $A$ therefore satisfies $\lambda_{\max}^3-\lambda_{\max}^2-2=0$. The value of $\lambda_{\max}$ is approximately $1.696$, giving a lower bound of $0.528$ for the topological entropy. The horseshoe trellis and type-$3$ trellis are part of a family of [*simple trellises*]{}. The general type-$n$ trellis has vertices with coordinates $$\begin{aligned} (0,0,+),(1,2n+3,-),(2,2n,+),(3,2n-1,-),(4,2n-2,+)\ldots(2n-1,3,-)\\ (2n,2,+),(2n+1,2n+1,-),(2n+2,2n+2,+),(2n+3,1,-)\end{aligned}$$ We consider trellis maps taking $(1,2n+3,-)$ to $(3,2n-1,-)$. The graph ${\mathcal{G}T_n}$, shown in [Figure \[fig:xtypen\]]{}, has two expanding regions $R_1$ and $R_2$ under the tight map. ![Graph for $T_n$[]{data-label="fig:xtypen"}](pjcollins-xtypen.eps) $R_1$ has a single edge $a$, and $R_2$ has edges $c_1,c_2\ldots c_n$ which map: $$\begin{aligned} a & \mapsto & abc_1\bar{c}_2\\ c_i & \mapsto & \begin{cases} c_{i+1} & {\mathrm{if\ }}i<n \\ abc_1 & {\mathrm{if\ }}i=n\\ \end{cases}\end{aligned}$$ where $b$ is an edge from the end of $a$ to the beginning of $c_1$. The transition matrix (on $\{a,c_1,c_2,\ldots,c_n\}$) is $$A=\left( \begin{array}{rrrrcrr} 1&1&1&0&\ldots&0\\ \hline 0&0&1&0&\ldots&0\\ 0&0&0&1&\ldots&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&0&\ldots&1\\ 1&1&0&0&\ldots&0\\ \end{array} \right)$$ The characteristic polynomial of this matrix is $\lambda(\lambda^n-\lambda^{n-1}-2)$, from which we can find the entropy of the system. In particular $\lambda_{\max}{\longrightarrow}1$ as $n{\longrightarrow}\infty$, so ${h_\mathit{top}}{\longrightarrow}0$. Consider the trellis $T_D$ shown in [Figure \[fig:xpuncture\](a)]{}.
![(a) Trellis $T_D$ (with puncture points), (b) Graph ${\mathcal{G}T_D}$, (c) Graph ${\mathcal{G}T_{pA}}$[]{data-label="fig:xpuncture"}](pjcollins-xpuncture.eps) The $(U,S,{\mathcal{O}})$-coordinates for the vertices are $$\begin{aligned} (0,0,+),(1,10,-),(2,7,+),(3,4,-),(4,3,+),(5,8,-),(6,9,+), \\ (7,2,-),(8,5,+),(9,6,-),(10,1,+)\end{aligned}$$ The graph ${\mathcal{G}T_D}$ is shown in [Figure \[fig:xpuncture\](b)]{}, and the tight graph map is $$\begin{array}{llll} a_1\mapsto a_1a_2a_3 & a_2\mapsto\cdot & a_3\mapsto{\bar{a}}_3{\bar{a}}_2{\bar{a}}_1 & \\ b_1\mapsto\cdot & & & \\ c_1\mapsto c_1c_2c_3 & c_2\mapsto\cdot & c_3\mapsto{\bar{c}}_3{\bar{c}}_2{\bar{c}}_1 & c_4\mapsto a_1a_2a_3 \end{array}$$ This map has entropy ${h_\mathit{top}}=\log2$. Now suppose the trellis is embedded in a surface with three holes positioned at the stars in [Figure \[fig:xpuncture\](a)]{}. The graph of the punctured trellis is shown in [Figure \[fig:xpuncture\](c)]{}. The tight map is $$\begin{array}{lllll} a_1\mapsto a_1a_2a_3 & a_2\mapsto a_4 & a_3\mapsto{\bar{a}}_3{\bar{a}}_2{\bar{a}}_1 & a_4\mapsto b_1b_2{\bar{b}}_1 & \\ b_1\mapsto c_1c_2c_3 & b_2\mapsto c_4c_5{\bar{c}}_4 & & & \\ c_1\mapsto c_1c_2c_3 & c_2\mapsto c_4c_5{\bar{c}}_4 & c_3\mapsto{\bar{c}}_3{\bar{c}}_2{\bar{c}}_1 & c_4\mapsto a_1a_2a_3 & c_5\mapsto a_4 \end{array}$$ Since the map does not fold the edge paths $a_1a_2a_3$ and $c_1c_2c_3c_4$, the dynamics of this map are the same as that of $a\mapsto a{\bar{a}}b$, $b\mapsto c$ and $c\mapsto c{\bar{c}}a$. From this we can show that the characteristic polynomial of the transition matrix has a factor $\lambda^2-3\lambda+1$, from which we obtain entropy ${h_\mathit{top}}(f)\ge{h_\mathit{top}}(g_T)=\log(\frac{3+\sqrt{5}}{2})$. Note that this entropy is larger than that for the trellis in a surface without holes. Collapsing the holes to points, we obtain a periodic orbit of period $3$.
The braid type of this orbit is pseudo-Anosov, and the minimal representative has entropy $\log(\frac{3+\sqrt{5}}{2})$, the same as that computed above. Further, the trellis is exhibited by a blow-up of the pseudo-Anosov homeomorphism. Thus all the dynamics are forced by the isotopy class in the surface. Let $A$ be the matrix $$A=\left( \begin{array}{cc} 2 & 1 \\ 1 & 1 \\ \end{array}\right)$$ The eigenvalues of $A$ are ${\frac{1}{2}}(3\pm\sqrt{5})$ and the eigenvectors are $$v_u=\left( \begin{array}{cc} 1\\ \frac{-1+\sqrt{5}}{2}\end{array}\right) {\ \ \ \ \ \ \ \ \ \ }v_s=\left( \begin{array}{c} -1 \\ \frac{1+\sqrt{5}}{2} \\ \end{array}\right)$$ The trellis $T_A$ of [Figure \[fig:xanosov\](a)]{} occurs in the toral Anosov map with matrix $A$. ![(a) Trellis $T_A$, (b) Graph ${\mathcal{G}T_A}$[]{data-label="fig:xanosov"}](pjcollins-xanosov.eps) The points of intersection have coordinates $$\begin{aligned} q_0 & = & (0,0) \\ q_1 & = & \textstyle{\frac{1}{10}} (-15+7\sqrt{5},25-11\sqrt{5}) \\ q_2 & = & \textstyle{\frac{1}{10}} (- 5+3\sqrt{5},10- 4\sqrt{5}) \\ q_3 & = & \textstyle{\frac{1}{10}} (-10+6\sqrt{5},20- 8\sqrt{5}) \\ q_4 & = & \textstyle{\frac{1}{10}} ( 2\sqrt{5}, 5- \sqrt{5})\end{aligned}$$ and the Anosov map $f$ fixes $q_0$ and maps $q_1\mapsto q_2\mapsto q_4$. The graph ${\mathcal{G}T_A}$ for $T_A$ is shown in [Figure \[fig:xanosov\](b)]{} and has edges which map: $$a_1\mapsto a_1,\;a_2\mapsto ba_2,\;a_3\mapsto ca_3,\;b\mapsto ba_2{\bar{a}}_3{\bar{c}}{\mathrm{\ and\ }}c\mapsto a_1{\bar{a}}_2{\bar{b}}$$ If $\alpha=a_1$, $\beta=ba_2$ and $\gamma=ca_3$, then we have $$\alpha\mapsto\alpha,\;\beta\mapsto\beta\bar{\gamma}\beta {\mathrm{\ and\ }}\gamma\mapsto\alpha\bar{\beta}\gamma$$ Thus the growth rate of the number of periodic points is simply the Perron-Frobenius eigenvalue ${\frac{1}{2}}(3+\sqrt{5})$ of $A$, and all orbits of the Anosov map persist under homotopies preserving the trellis structure.
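The eigendata above, and the growth rate of periodic points, are easy to verify numerically: fixed points of the $n$-th iterate of the toral map are counted by $|\det(A^n-I)|=\lambda_u^n+\lambda_s^n-2$, which grows at the rate $\frac{1}{2}(3+\sqrt{5})$. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])

# Eigenvalues (3 -+ sqrt(5))/2 of the symmetric Anosov matrix,
# returned in ascending order (stable, then unstable)
lam_s, lam_u = np.linalg.eigvalsh(A.astype(float))

# Unstable eigenvector (1, (-1+sqrt(5))/2) quoted in the text
v_u = np.array([1.0, (-1 + np.sqrt(5)) / 2])
residual = A @ v_u - lam_u * v_u

# Fixed points of the n-th iterate of the toral Anosov map:
# #Fix(f^n) = |det(A^n - I)| = lam_u^n + lam_s^n - 2
n = 5
fix_n = round(abs(np.linalg.det(np.linalg.matrix_power(A, n) - np.eye(2))))
```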
The heteroclinic trellis $T_H$ shown in [Figure \[fig:xheteroclinic\](a)]{} occurs in the Smale horseshoe. ![(a) Heteroclinic trellis $T_H$, (b) Surface ${{{{\mathcal{C}}}T_H}}$, (c) Graph ${\mathcal{G}T_H}$[]{data-label="fig:xheteroclinic"}](pjcollins-xheteroclinic.eps) There are two saddle fixed points, $p_0$ and $p_1$. Cutting along the unstable manifold, we obtain the surface ${{{{\mathcal{C}}}T_H}}$ of [Figure \[fig:xheteroclinic\](b)]{}, and we can retract this to the graph ${\mathcal{G}T_H}$ as shown in [Figure \[fig:xheteroclinic\](c)]{}. The action on the distinguished vertex set is: $$p_0,p_4\mapsto p_0,\;p_1,p_5\mapsto p_2,\;p_2\mapsto p_5 {\mathrm{\ and\ }}p_3\mapsto p_4$$ The regions $R_1$, $R_2$, $R_3$ and $R_4$ are expanding under the tight map, for which $$a\mapsto ab, \; b\mapsto c\bar{e}_2e_3d, \; c\mapsto \bar{d} {\mathrm{\ and\ }}d\mapsto ab$$ This gives transition matrix (on $\{a,b,c,d\}$) $$A=\left( \begin{array}{cccc} 1&1&0&0\\ \hline 0&0&1&1\\ 0&0&0&1\\ 1&1&0&0\\ \end{array} \right)$$ The characteristic polynomial for $A$ is $\lambda(\lambda^3-\lambda^2-\lambda-1)$, and the maximum eigenvalue is $\lambda_{\max}\approx 1.839$. $\log\lambda_{\max}\approx 0.609$, so ${h_\mathit{top}}(f)\ge0.609$ for any map with this trellis action. Note that this entropy bound is less than that obtained from the horseshoe trellis $T_2$. \[example:tangential\] Consider the trellis $T_I$ of [Figure \[fig:xtangency\](a)]{} which occurs in bifurcations from the Smale horseshoe and has tangential intersections. ![(a) Trellis with tangencies $T_I$, (b) Surface ${{{{\mathcal{C}}}T_I}}$[]{data-label="fig:xtangency"}](pjcollins-xtangency.eps) Cutting along the unstable manifold, we obtain the surface ${{{{\mathcal{C}}}T_I}}$ shown in [Figure \[fig:xtangency\](b)]{}. This is not a cross-cut surface, and while there is an exact deformation retract from this surface to a divided graph, we shall study the induced map using the Lefschetz theory.
The cohomology action gives $$\alpha\mapsto\alpha+\beta+\gamma, \ \beta\mapsto 0 {\mathrm{\ and\ }}\gamma\mapsto -\alpha-\beta-\gamma$$ Just considering the cohomology action on $\alpha$ and $\gamma$, we have Lefschetz matrices $$A =\left( \begin{array}{cc} 1& 1\\ \hline -1&-1\\ \end{array} \right) {\ \ \ \ \ \ \ \ \ \ }A_{R_1}=\left( \begin{array}{cc} 1& 1\\ \hline 0& 0\\ \end{array} \right) {\ \ \ \ \ \ \ \ \ \ }A_{R_2}=\left( \begin{array}{cc} 0& 0\\ \hline -1&-1\\ \end{array} \right)$$ Thus for any word ${{\mathcal{R}}}$ on $R_1$ and $R_2$, $L(A_{{\mathcal{R}}})=\pm1$ and so ${{\widehat{{\mathrm{Per}}}_{{{\mathcal{R}}}}(f)}}\neq\emptyset$. Again, we have at least $2^n$ points of period $n$ for $f$, and since $R_1$ and $R_2$ are disjoint, we can again deduce that the topological entropy is at least $\log 2$. Further Study {#sec:Further Study} ============= In this paper we describe a general framework for studying maps with tangles. However there are still many unanswered questions and opportunities for further work. One particularly important problem is that of optimality of these methods. This is intimately related to the conditions we place on the map itself. As an example, consider a homoclinic trellis on the sphere with two intersections, and a map $f$ with this trellis. If $f$ is a diffeomorphism, we know that $f$ must have a horseshoe in some iterate, and hence be chaotic and have exponential growth of periodic points. Unfortunately, as previously remarked, we cannot find a lower bound for topological entropy, even though we know it must be strictly positive. Using the pruning theory of de Carvalho [@deCarvalhoPP] we can show that there is a homeomorphism with this trellis with zero entropy. This homeomorphism has stable and unstable curves at the fixed point, but this fixed point is not hyperbolic. Therefore, it is not surprising that our methods do not give periodic orbits when applied in this case.
For many examples, we can show that there is a uniformly hyperbolic diffeomorphism with the given trellis which realises the entropy bound given by the asymptotic Nielsen number. As remarked above, this cannot be true in general, but a nice result would be the following. Let $f$ be a trellis map for the trellis $T$. Then $N_\infty(f)$ is a lower bound for the topological entropy of all maps with trellis $T$ homotopic to $f$. Further, there is a homeomorphism homotopic to $f$ with topological entropy $N_\infty(f)$, and for all $\epsilon>0$ there is a uniformly hyperbolic diffeomorphism homotopic to $f$ such that ${h_\mathit{top}}<N_\infty(f)+\epsilon$. A possible way of constructing these diffeomorphisms is by using a tight graph map. For this method to work, we probably need to show that for any trellis map $f$, there is a tight graph map isomorphic to ${{\mathcal{C}}}{f}$ (for a suitable regional decomposition). Since we cannot in general find a morphism in the category of dynamical systems from a general graph map to a tight one without losing entropy, this could be a tricky problem. Another interesting problem is the case of non-invertible maps. We have shown that there are no major problems unless points not in $T^U$ map over $T^U$, in which case our method breaks down. Sander [@SanderPP] showed that in general, non-invertible maps may have non-trivial tangles but still be non-chaotic. However, we still may be able to deduce chaos in more general situations than those described here. Ultimately, we would like to refine this procedure into an algorithm suitable for implementation on a computer. This requires a way of encoding the important properties of trellises and trellis maps combinatorially.
As we have seen, the $(U,S,{\mathcal{O}})$ coordinate description for the vertices provides a good description of a homoclinic trellis on a sphere; in more complicated cases we have to take into account the homotopy classes of the curves in the surface $M$, and also the way different curves wind round each other. Having obtained a complete description of a single trellis, we would then like to consider bifurcation sequences. This requires an especially good understanding of trellises with tangential intersections. Since Nielsen classes are open in the set of periodic points of a given period, they cannot be removed by sufficiently small perturbations, even if the trellis is destroyed. Therefore, our analysis of the trellis in [Example \[example:tangential\]]{} shows that all periodic horseshoe orbits are present at the bifurcation of the trellis, and therefore, given a sufficiently small perturbation, all such orbits of sufficiently low period remain. However, the possible orderings in which periodic orbits may be destroyed are unknown, though some results have been obtained by Hall [@Hall94]. [Bro71]{} Mladen Bestvina and Michael Handel, *Train-tracks for surface homeomorphisms*, Topology **34** (1995), no. 1, 109–140. Robert Brown, *The [L]{}efschetz fixed point theorem*, Scott, Foresman and Company, 1971. Pieter Collins, *Relative periodic point theory*, Unpublished. André de Carvalho, *Pruning fronts and the formation of horseshoes*, Preprint. Robert Easton, *Trellises formed by stable and unstable manifolds in the plane*, Trans. Amer. Math. Soc. **294** (1986), no. 2, 719–732. John Franks and Michael Misiurewicz, *Cycles for disk homeomorphisms and thick trees*, Nielsen Theory and Dynamical Systems, Contemporary Mathematics, 1993. Toby Hall, *The creation of horseshoes*, Nonlinearity **7** (1994), no. 3, 861–924. Anatole Katok and Boris Hasselblatt, *Introduction to the modern theory of dynamical systems*, Encyclopedia of Mathematics and its Applications, no.
54, Cambridge University Press, 1995. Bernd Krauskopf and Hinke Osinga, *Growing $1$d and quasi-$2$d unstable manifolds of maps*, J. Comput. Phys. **146** (1998), no. 1, 404–419. Jérôme E. Los, *Pseudo-[A]{}nosov maps and invariant train tracks in the disc: A finite algorithm*, Proc. London Math. Soc. (3) **66** (1993), no. 2, 400–430. Evelyn Sander, *Homoclinic tangles for noninvertible maps*, Preprint. Andrzej Szymczak, *The [C]{}onley index for decompositions of isolated invariant sets*, Fund. Math. **148** (1995), no. 1, 71–90. [^1]: The author wishes to thank Morris Hirsch for his advice and suggestions, which were valuable in writing this paper.
--- abstract: | We report the observation of intensity feedback random lasing at 645 nm in Disperse Orange 11 dye-doped PMMA (DO11/PMMA) with dispersed ZrO$_2$ nanoparticles (NPs). The lasing threshold is found to increase with concentration, with the lasing threshold for 0.1 wt% being $75.8 \pm 9.4$ MW/cm$^2$ and the lasing threshold for 0.5 wt% being $121.1 \pm 2.1$ MW/cm$^2$, with the linewidth for both concentrations found to be $\approx 10$ nm. We also consider the material’s photostability and find that it displays fully reversible photodegradation with the photostability and recovery rate being greater than previously observed for DO11/PMMA without NPs. This enhancement in photostability and recovery rate is found to be explicable by the modified correlated chromophore domain model, with the NPs resulting in the domain free energy advantage increasing from 0.29 eV to 0.41 eV. Additionally, the molecular decay and recovery rates are found to be in agreement with previous measurements of DO11/PMMA \[Polymer Chemistry **4**, 4948 (2013)\]. These results present new avenues for the development of robust photodegradation-resistant organic dye-based optical devices. PACS: 42.55.Mv,42.55.Zz,42.70.Hj, 42.70.Jk author: - 'Benjamin R. Anderson$^*$' - Ray Gunawidjaja - Hergen Eilers bibliography: - 'PrimaryDatabase.bib' - 'ASLbib.bib' title: 'Random Lasing and Reversible Photodegradation in Disperse Orange 11 Dye-Doped PMMA with Dispersed ZrO$_2$ Nanoparticles' --- Introduction ============ Lasing in scattering media – known as random lasing (RL) – was first predicted by Letokhov and coworkers [@Letokhov67.01; @Letokhov67.02; @Letokhov66.01] in the late 1960’s and then experimentally observed by Lawandy *et al.* in 1994 [@Lawandy94.01]. RL differs from normal lasing in that random lasers operate without an external cavity, with scattering acting as the feedback mechanism [@Cao03.01; @Wiersma96.01; @Wiersma08.01]. 
Due to different scattering regimes in diffuse media, RL is found to have two distinct spectral classes: intensity feedback random lasing (IFRL) and resonant feedback random lasing (RFRL) [@Cao03.01; @Cao05.01; @Ignesti13.01]. IFRL is characterized by a single narrow emission peak (FWHM on the order of 10’s of nm) and is wholly determined by the diffusive nature of light [@Lawandy94.01; @Burin01.01; @Wiersma96.01; @Pinheiro06.01]. On the other hand, RFRL is characterized by multiple sub-nm width peaks [@Ling01.01; @Cao03.01; @Cao03.02; @Cao05.02; @Cao99.01; @Tureci08.01] with two proposed mechanisms: strong scattering resonances [@Molen07.01] and Anderson localization of light [@Cao00.01]. Based on these proposed mechanisms, models of RFRL have been developed using spin-glass modeling of light [@Angelani06.01], Levy-flight scattering [@Ignesti13.01], condensation of lasing modes [@Conti08.01; @Leonetti13.03], and strongly interacting lossy modes [@Tureci08.01]. Regardless of the microscopic mechanisms of the two regimes, their spectral characteristics can be described macroscopically in terms of active lasing modes, with RFRL representing a few distinct active lasing modes, and IFRL representing multiple overlapping active lasing modes [@Cao03.01; @Cao03.02; @Ling01.01; @Tureci08.01; @Ignesti13.01]. The different modal nature of the two regimes makes each attractive for different applications. Since RFRL has few modes –allowing for the creation of unique spectral signatures – it is attractive in the fields of authentication [@Zurich08.01], biological imaging, emergency beacons [@Hoang10.01; @Cao05.01], and random number generation [@Atsushi08.01; @Murphy08.01; @Mgrdichian08.01]. 
Also, the limited number of active modes in RFRL allows for a high degree of spectral control, as the pump beam can be modulated using a spatial light modulator (SLM) to activate only certain lasing modes, thus controlling the RFRL spectrum [@Leonetti12.01; @Cao05.01; @Leonetti13.02; @Leonetti12.02; @Leonetti12.03; @Andreasen14.01; @Bachelard12.01; @Bachelard14.01]. This spectral control is attractive for implementing optically based physically unclonable functions [@Anderson14.04; @Anderson14.05; @Eilers14.01] and the creation of bright tunable light sources [@Cao05.01]. In the case of IFRL, the many active modes lead to the emission having a low degree of spatial coherence [@Redding11.01], making it an attractive method for high-intensity low-coherence light sources [@Redding12.01]. Such light sources have applications in photodynamic therapy, tumor detection [@Hoang10.01; @Cao05.01], flexible displays, active elements in photonics devices [@Cao05.01], picoprojectors, cinema projectors [@Hecht12.01], and biological imaging [@Redding12.01; @Hecht12.01]. With the wide variety of possible applications for RL, work on RL has focused on discovering materials that have the following three characteristics: they provide desirable RL spectra, are relatively cheap and easy to work with, and are robust enough to use over a reasonable time frame. One such class of materials that fulfills the first two criteria is organic-dye-based materials. Unfortunately, most organic-dye-based systems are found to *irreversibly* photodegrade when exposed to intense radiation [@wood03.01; @taylo05.01; @Avnir84.01; @Knobbe90.01; @Kaminow72.01; @Rabek89.01], thus limiting their usefulness in optical devices. However, in the past two decades it has been discovered that some dye-doped polymers actually photodegrade *reversibly*, with the material self-healing once the illumination is turned off for a period of time.
These materials include Rhodamine B and Pyrromethene dye-doped (poly)methyl-methacrylate (PMMA) optical fibers [@Peng98.01], disperse orange 11 (DO11) dye-doped PMMA [@howel04.01; @howel02.01] and styrene-MMA copolymer [@Hung12.01], anthraquinone-derivative-doped PMMA [@Anderson11.02], 8-hydroxyquinoline (Alq) dye-doped PMMA [@Kobrin04.01], and air force 455 (AF455) dye-doped PMMA [@Zhu07.01]. In all these studies the dye was doped into the polymer without any scattering particles, such that no RL was observed. Given the large number of dye-doped polymers that display self-healing (but not RL) and the desirability of a self-healing organic dye-based random laser, we recently tested random lasers consisting of Rhodamine 6G dye-doped polyurethane with dispersed ZrO$_2$ (R6G+ZrO$_2$/PU) [@Anderson14.04] or Y$_2$O$_3$ (R6G+Y$_2$O$_3$/PU) nanoparticles (NP) for reversible photodegradation [@Anderson15.01; @Anderson15.03]. In those studies we found that R6G+ZrO$_2$/PU and R6G+Y$_2$O$_3$/PU display self-healing after photodegradation, with a recovery efficiency of 100% [@Anderson15.01; @Anderson15.03]. However, we also found that the photodegradation could not be called truly reversible as the RL wavelength and linewidth changed [@Anderson15.01; @Anderson15.03]. While the approach used in our recent study – of testing an already known RL system for self-healing – was successful at producing a self-healing organic dye based random laser, a different approach to the problem is to develop an already known self-healing system into a random laser. To this end we investigate RL in DO11/PMMA with dispersed ZrO$_2$ NPs. 
The choice to use DO11/PMMA is three-fold: (1) DO11 has previously been shown to be suitable as a laser dye [@howel02.01; @howel04.01], (2) the majority of organic-dye based RL studies have focused on Rhodamine dyes and therefore DO11 is a new and unique organic dye in RL studies, with its lasing wavelength being attractive for use with polymer optical fibers [@howel02.01] and (3) DO11/PMMA is the test bed system for self-healing research, with numerous studies performed to understand the phenomenon of self healing in DO11/PMMA. These studies have been performed with different probe techniques including: absorption [@embaye08.01; @Anderson14.02], white light interferometry [@Anderson14.03], fluorescence [@Dhakal12.01; @raminithesis], photoconductivity [@Anderson13.02], transmittance microscopy [@Anderson11.01; @Anderson13.01], and amplified spontaneous emission (ASE) [@howel02.01; @howel04.01; @embaye08.01; @Ramini12.01; @Ramini13.01]. These techniques have been used to characterize the behavior of DO11/PMMA’s photodegradation and recovery under different wavelengths [@Anderson15.04], temperatures [@Ramini12.01; @Ramini13.01; @raminithesis; @andersonthesis], applied electric fields [@Anderson13.01; @andersonthesis; @Anderson14.01], co-polymer compositions [@Hung12.01], thicknesses [@Anderson14.01], concentrations [@Ramini12.01; @Ramini13.01; @raminithesis], and intensities [@Anderson11.01; @Anderson14.01; @Anderson14.02]. Based on all these studies a model has been developed to describe DO11/PMMA’s photodegradation and recovery called the correlated chromophore domain model (CCDM) [@Ramini12.01; @Ramini13.01; @raminithesis; @Anderson14.02; @andersonthesis]. The CCDM posits that dye molecules form linear isodesmic domains along polymer chains with molecular interactions – mediated by the polymer – resulting in increased photostability and self-healing. 
Within the domain model the decay rate $\alpha$ depends inversely on the domain size $N$, as $$\alpha=\frac{\alpha_1}{N},\label{eqn:domdec}$$ and the recovery rate $\beta$ depends linearly on the domain size, $$\beta=\beta_1N,\label{eqn:domrec}$$ where $\alpha_1$ and $\beta_1$ are the unitary domain decay and recovery rates, respectively. While these rates describe the dynamics of a single domain, the macroscopically measured rates result from an ensemble average over the distribution of domains $\Omega(N)$, which depends on the density of dye molecules $\rho$, and the free energy advantage $\lambda$ [@Ramini12.01; @Ramini13.01; @raminithesis; @Anderson14.02; @andersonthesis]. Method ====== In order to produce a suitable DO11 based random lasing material we disperse ZrO$_2$ NPs into DO11 dye-doped PMMA. We begin by first fabricating the ZrO$_2$ NPs using forced hydrolysis followed by calcination at a temperature of 600 $^{\circ}$C for an hour [@Gunawidjaja13.01]. The ZrO$_2$ NPs are then functionalized by dispersing them in a 2.5 vol% solution of 3-(Trimethoxysilyl)propyl methacrylate in toluene, which is subsequently refluxed for 2 h [@Gunawidjaja11.01]. To prepare the dye-doped polymer, we first filter Methyl methacrylate (MMA) through a column of activated basic alumina to remove inhibitor. Next we dissolve 25 wt% PMMA into the inhibitor-free MMA and divide the MMA/PMMA solution into three batches for different dye concentrations. DO11 dye (TCI America, purity $>$98%) is added to the MMA/PMMA solution in concentrations of 0.1 wt%, 0.5 wt%, and 1.0 wt%. The functionalized ZrO$_2$ NPs are then added at a concentration of 10 wt% and the mixture is sonicated until it is homogeneous, at which point 0.25 wt% 2, 2’-azobis(2-methyl-propionitrile) is added and the mixture is further sonicated before being poured onto 1”$\times$1.5” glass slides. The samples are then covered and placed in an oven at 60-65 $^\circ$C for 2 h to cure.
Once prepared, the samples are characterized using SEM, absorption spectroscopy, transmission measurements, and mechanical measurements. The relevant sample parameters are tabulated in Table \[tab:param\]. To measure the sample’s emission we use an intensity controlled random lasing system [@Anderson15.03] shown schematically in Figure \[fig:setup\]. The pump is a Spectra-Physics Quanta Ray Pro Q-switched frequency doubled Nd:YAG laser (532 nm, 10 Hz, 10 ns) with the emission stabilized using a motorized half-waveplate (HWP) and polarizing beamsplitter (PBS) combination with a Thorlabs Si photodiode (PD) providing feedback for the HWP. The stabilized pump beam is focused onto the sample using a spherical lens with a focal length of 50 mm. Once pumped, the sample emits light in the backward direction, which is collimated using the focusing lens and then reflected by a dichroic mirror (DCM) (cutoff wavelength of 550 nm) into an optical fiber connected to a Princeton Instruments Acton 2300i spectrometer with a Pixis 2K CCD detector. For reference the relevant experimental parameters are tabulated in Table \[tab:param\]. ![Schematic of RL setup.[]{data-label="fig:setup"}](RLSetup) Table \[tab:param\]. Sample parameters: $l$$^1$ = $4.10 \pm 0.25$ $\mu$m; $l_a$$^2$ = $657\pm 20$ $\mu$m; $d_{NP}$$^3$ = $195 \pm 32$ nm; $\rho_{NP}$ = $2.26 \times 10^{12}$ cm$^{-3}$; $L \approx 500$ $\mu$m. Experimental parameters: $\lambda_p$ = 532 nm; $r_p$ = 10 Hz; $\Delta t$ = 10 ns; $A$ = $7.85\times10^{-3}$ cm$^2$; $\Delta\lambda$ = 0.27 nm. Results and discussion ====================== Random Lasing Properties ------------------------ We test DO11+ZrO$_2$ for RL using a NP concentration of 10 wt% and three different dye concentrations (0.1 wt%, 0.5 wt%, and 1.0 wt%) and find that the 0.1 wt% and 0.5 wt% samples display RL, while the 1.0 wt% samples are not found to produce RL.
Figure \[fig:RLspec\] shows the emission spectra for the 0.1 wt% sample at several pump energies, with the emission narrowing into a single lasing peak as the pump energy passes the lasing threshold, while Figure \[fig:denscomp\] compares the normalized spectra of the 0.1 wt% sample (pump energy of 10 mJ) and the 1.0 wt% sample (pump energy of 60 mJ). Note that the 1.0 wt% sample is pumped at a level near its ablation threshold (i.e. any higher pump energies result in the material being ablated by a single pulse). From Figure \[fig:denscomp\] we find that at high pump energies the emission from the 0.1 wt% sample is characteristic of IFRL, while the emission from the 1.0 wt% sample is much broader. From the spectral shape of the 1.0 wt% emission, and its peak location of 647.5 nm, we conclude that the emission corresponds to a combination of amplified spontaneous emission (ASE) and fluorescence, as DO11/PMMA’s ASE wavelength is known to be $\approx 650$ nm [@howel02.01; @howel04.01; @embaye08.01]. Since we are primarily concerned with RL in DO11+ZrO$_2$/PMMA, the remainder of this study will focus only on the samples with dye concentrations of 0.1 wt% or 0.5 wt%. ![Random lasing spectra as a function of wavelength for different pump energies for a dye concentration of 0.1 wt%.[]{data-label="fig:RLspec"}](11514spec) ![Comparison of emission from 0.1 wt% sample and 1.0 wt% sample. Note that the 0.1 wt% sample is pumped with an energy of 10 mJ, while the 1.0 wt% sample is pumped with an energy of 60 mJ, which is near the ablation threshold.[]{data-label="fig:denscomp"}](103114BFcomp) We characterize the RL properties of the 0.1 wt% and 0.5 wt% samples by considering three RL features: peak intensity, peak wavelength, and RL linewidth (e.g. FWHM). Figure \[fig:01\] shows the peak intensity and linewidth for the 0.1 wt% sample, while Figure \[fig:05\] shows the same quantities for the 0.5 wt% sample.
From Figure \[fig:01\] we see that the transition to RL for the 0.1 wt% sample is quick, with the linewidth narrowing from 100 nm at 3 mJ to 10 nm at 9 mJ, and the intensity’s slope changing by a factor of $\approx 4.1\times$ above the lasing threshold. While the transition for the 0.1 wt% sample is abrupt, the transition to RL for the 0.5 wt% sample is more gradual. From Figure \[fig:05\] we find that the linewidth changes from 100 nm at 5 mJ to 10 nm at 25 mJ and the intensity’s slope only increases by a factor of $\approx 2.7\times$. This more gradual transition into lasing suggests that there is more competition between ASE and lasing [@Andreasen10.01; @Cao00.02] in the 0.5 wt% sample than in the 0.1 wt% sample.

![Peak intensity and linewidth as a function of pump energy for a sample with a dye concentration of 0.1 wt% and NP concentration of 10 wt%. From both the peak intensity and linewidth we determine a lasing threshold of $75.8 \pm 9.4$ MW/cm$^2$.[]{data-label="fig:01"}](DO11d01Thresh)

![Peak intensity and linewidth as a function of pump energy for a sample with a dye concentration of 0.5 wt% and NP concentration of 10 wt%. From both the peak intensity and linewidth we determine a lasing threshold of $121.1 \pm 2.1$ MW/cm$^2$.[]{data-label="fig:05"}](DO11d05Thresh)

While Figures \[fig:01\] and \[fig:05\] help us understand the underlying spectral properties of the samples’ emission, they can also be used to directly determine each sample’s lasing threshold. Using either the FWHM as a function of pump energy [@Cao03.01] or a bilinear fit to the peak intensity [@Vutha06.01; @Anderson14.04], the lasing threshold of each sample can be calculated, with the 0.1 wt% sample having a lasing threshold of $75.8 \pm 9.4$ MW/cm$^2$ and the 0.5 wt% sample having a threshold of $121.1 \pm 2.1$ MW/cm$^2$. Note that these thresholds are much larger ($\approx 10\times$) than those of similar RL materials based on R6G [@Anderson14.04].
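The bilinear-fit approach to threshold extraction can be sketched numerically. The snippet below fits a continuous piecewise-linear model to synthetic peak-intensity data; the data, the threshold value, and the slope ratio are illustrative stand-ins, not our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def bilinear(E, E_th, m1, m2, b):
    # Continuous piecewise-linear model: slope m1 below the threshold E_th,
    # slope m2 above it.
    return np.where(E < E_th, b + m1 * E, b + m1 * E_th + m2 * (E - E_th))

# Synthetic peak-intensity data (arbitrary units) with a threshold at 6 mJ
# and a ~4x slope increase, loosely mimicking the 0.1 wt% sample.
rng = np.random.default_rng(0)
E = np.linspace(1.0, 15.0, 30)                 # pump energy (mJ)
I_noisy = bilinear(E, 6.0, 1.0, 4.1, 0.5) + rng.normal(0.0, 0.2, E.size)

popt, _ = curve_fit(bilinear, E, I_noisy, p0=[5.0, 1.0, 3.0, 0.0])
E_th, m1, m2, b = popt
print(f"fitted threshold: {E_th:.2f} mJ, slope ratio: {m2 / m1:.2f}")
```

The fitted kink position is the threshold, and the ratio of the two slopes quantifies how abrupt the transition to lasing is.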
These large lasing thresholds are due to DO11 having a smaller gain coefficient than R6G [@howel02.01; @howel04.01; @Mysliwiec09.01], as well as our use of off-resonance pumping ($\lambda_{pump}=532$ nm and $\lambda_{res}=470$ nm). Based on the observed lasing thresholds – and the observation that the 1.0 wt% sample did not lase even with a pump energy of 60 mJ ($I=754$ MW/cm$^2$) – we find that the RL threshold of DO11+ZrO$_2$/PMMA increases with dye concentration, which is counter to measurements in other dyes [@Anderson14.04]. One possible explanation for this effect is the formation of dimers at higher concentrations. From studies of other organic dye materials it is known that dimer formation leads to a redshift in the absorption spectrum of the material [@Toudert13.01; @Gavrilenko06.01; @Arbeloa88.01]. This redshift can result in fluorescence quenching [@Penzkofer86.01; @Penzkofer87.01; @Bojarski96.01; @Setiawan10.01], which decreases the material’s RL gain and thereby increases the RL threshold. Additionally, dimer formation can explain the difference in the shape of the linewidth as a function of pump energy (i.e. the 0.1 wt% sample has a sudden decrease in linewidth while the 0.5 wt% sample has a slow decrease), as a similar effect has been observed in comparisons of RL in Rhodamine B monomers and dimers [@Marinho15.01]. The last RL property we consider is the peak wavelength as a function of pump energy, which is shown in Figure \[fig:wave\] for both the 0.1 wt% and the 0.5 wt% samples. Both samples begin with their peak emission near 625 nm, with the peak wavelength smoothly transitioning into a steady lasing wavelength above the lasing threshold. The 0.1 wt% sample is found to have its RL peak at 645 nm, while the 0.5 wt% sample is found to have its RL peak at 646 nm. These two results, along with the 1.0 wt% sample’s ASE wavelength of 647.5 nm, suggest that as the dye concentration increases the peak emission is redshifted.
This is a known effect caused by increased self-absorption due to the greater dye concentration [@Shuzhen09.01; @Ahmed94.01; @Shank75.01] and subsequent dimer formation [@Toudert13.01; @Gavrilenko06.01; @Arbeloa88.01].

![Peak wavelength as a function of pump energy for the 0.1 wt% and 0.5 wt% dye concentration samples. The low intensity emission is peaked near 625 nm and smoothly transitions with increasing pump energy to be centered at $\approx 645$ nm.[]{data-label="fig:wave"}](DO11wavelength)

Photodegradation and self-healing
---------------------------------

With DO11+ZrO$_2$/PMMA found to lase in the IFRL regime for low dye concentrations, we now turn to the effect of ZrO$_2$ NPs on DO11/PMMA’s ability to self-heal. For these measurements we use a sample with a dye concentration of 0.1 wt% and a NP concentration of 10 wt%. We use a 7 mJ/pulse (time-averaged intensity of $I_{avg}=8.9$ W/cm$^2$) beam for both degrading the sample and measuring the RL spectra. During decay, the beam is always incident on the sample, while during recovery the beam is blocked except when taking measurements of the sample’s RL spectrum. Spectral measurements during recovery involve exposing the sample to three pump pulses to determine the average RL spectrum. These measurements occur every ten minutes during recovery, which equates to a duty cycle of 0.05%. Figure \[fig:dec\] shows the measured RL spectra during decay at several times, with the peak blueshifting and becoming broader. The large background fluorescence in Figure \[fig:dec\] is due to pumping the sample only slightly above its lasing threshold (7 mJ pump, 5.9 mJ threshold).
![Random lasing spectra as a function of wavelength at different times during photodegradation for a pump energy of 7 mJ and a 0.1 wt% dye-concentration sample.[]{data-label="fig:dec"}](11414decspec)

From the spectra recorded during decay and recovery we determine the peak emission intensity as a function of time, shown in Figure \[fig:pint\]. The peak intensity decays to 40% of its initial value during degradation and fully recovers (within uncertainty) after the pump beam is turned off. This observation is consistent with ASE measurements of DO11/PMMA without dispersed NPs [@howel04.01; @howel02.01; @embaye08.01], where the ASE signal is found to fully recover after degradation.

![Peak intensity as a function of time during decay and recovery.[]{data-label="fig:pint"}](11414pint)

While we observe full reversibility for a degree of decay of up to 60%, we also perform decay measurements with extreme degrees of decay ($\approx 95$ %) and observe only partial recovery. This suggests that as the degree of degradation increases past some threshold value, full recovery is lost and the material incurs some irreversible damage. To quantify the threshold degree of degradation at which reversibility is lost, we are planning measurements that systematically vary the degree of degradation and measure the degree of recovery. In addition to performing preliminary measurements of how reversible photodegradation changes with the degree of damage, we also consider how cycling through degradation and recovery affects the material’s self-healing. These measurements have so far consisted of two decay and recovery cycles, with full reversibility observed in both cycles, which is consistent with DO11/PMMA without dispersed NPs [@howel02.01]. Further work is planned to determine how many cycles can be completed before full reversibility is lost.
### Random Lasing Intensity Decay and Recovery

To further characterize the influence of the dispersed NPs on DO11/PMMA’s photodegradation and self-healing, we determine the decay and recovery parameters by fitting the peak intensity as a function of time to a simple model of the RL intensity. Assuming that only undamaged molecules (with fractional number density $n(t)$) participate in RL, the material’s laser gain will be proportional to $n(t)$, leading to a RL intensity of [@Menzel07.01] $$I_{RL}(t)=I_0e^{\sigma n(t)},\label{eqn:int}$$ where $I_0$ is the initial peak intensity and $\sigma$ is the RL cross section. To model the population dynamics of the molecules we use Embaye *et al.*’s two-species non-interacting molecule model, in which undamaged molecules reversibly transition into a damaged state during degradation [@embaye08.01]. In this model the undamaged population’s fractional number density during decay ($t\leq t_D$) is $$n(t)=\frac{\beta}{\beta+\alpha I}+\frac{\alpha I}{\beta+\alpha I}e^{-(\beta+\alpha I)t},$$ and during recovery ($t>t_D$) is $$n(t)=1-[1-n(t_D)]e^{-\beta(t-t_D)},\label{eqn:rec}$$ where $t_D$ is the time at which the pump is turned off, $\alpha$ is the decay parameter, $I$ is the pump intensity, and $\beta$ is the recovery rate. Using Equations \[eqn:int\]–\[eqn:rec\] we can model the RL peak intensity’s decay and recovery as a function of time and extract the relevant dynamical parameters, finding $\alpha=3.16(\pm 0.10) \times 10^{-2}$ cm$^2$W$^{-1}$min$^{-1}$ and $\beta =3.75( \pm 0.18) \times 10^{-2}$ min$^{-1}$. The decay parameter, $\alpha$, is found to be smaller than previously measured values for DO11/PMMA [@howel02.01; @howel04.01; @embaye08.01; @Ramini12.01; @Ramini13.01], which means that the addition of nanoparticles improves the material’s photostability.
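As a sanity check, the two-species model of Equations \[eqn:int\]–\[eqn:rec\] is straightforward to evaluate numerically. The sketch below uses the fitted values of $\alpha$ and $\beta$ quoted above together with the pump intensity $I_{avg}=8.9$ W/cm$^2$; the pump-off time $t_D$ and the values of $I_0$ and $\sigma$ are illustrative placeholders, not the experimental ones.

```python
import numpy as np

ALPHA = 3.16e-2   # decay parameter (cm^2 W^-1 min^-1), fitted value from the text
BETA = 3.75e-2    # recovery rate (min^-1), fitted value from the text
I_PUMP = 8.9      # time-averaged pump intensity (W/cm^2)

def n_undamaged(t, t_D, alpha=ALPHA, beta=BETA, I=I_PUMP):
    """Fractional density of undamaged molecules: exponential approach to
    beta/(beta + alpha*I) during decay (t <= t_D), exponential recovery
    toward 1 afterwards."""
    t = np.asarray(t, dtype=float)
    rate = beta + alpha * I
    n_decay = beta / rate + (alpha * I / rate) * np.exp(-rate * t)
    n_at_tD = beta / rate + (alpha * I / rate) * np.exp(-rate * t_D)
    n_recovery = 1.0 - (1.0 - n_at_tD) * np.exp(-beta * (t - t_D))
    return np.where(t <= t_D, n_decay, n_recovery)

def rl_intensity(t, t_D, I0=1.0, sigma=1.0):
    """RL peak intensity I_0 exp(sigma * n(t)); I0 and sigma are placeholders."""
    return I0 * np.exp(sigma * n_undamaged(t, t_D))

t_D = 120.0                                     # illustrative pump-off time (min)
n0 = float(n_undamaged(0.0, t_D))               # starts fully undamaged
n_end = float(n_undamaged(t_D, t_D))            # depleted population at pump-off
n_back = float(n_undamaged(t_D + 500.0, t_D))   # long after pump-off: ~1 again
```

With these parameters the steady-state undamaged fraction during pumping is $\beta/(\beta+\alpha I)\approx 0.12$, and the population relaxes back to 1 once the pump is blocked, consistent with the full recovery seen in Figure \[fig:pint\].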
Additionally, we find that the recovery rate of DO11+ZrO$_2$/PMMA is larger than any previous measurement [@howel02.01; @howel04.01; @embaye08.01; @Anderson11.01; @Anderson13.01; @Anderson14.01; @Anderson14.02; @Ramini12.01; @Ramini13.01; @Anderson15.04], implying that the addition of ZrO$_2$ NPs aids the recovery process. An explanation for these effects is that the introduction of NPs can change the free energy advantage, $\lambda$, and the density parameter, $\rho$, such that the average domain size is greater with NPs than without [@Ramini12.01; @Ramini13.01; @Anderson14.02]. While a precise determination of the domain parameters is beyond the scope of the current study, we can estimate the modified domain parameters by considering the effect of the average domain size on the recovery rates. Previously it was shown that the average domain size is given by [@Anderson14.02]: $$\langle N \rangle=\frac{\beta_M\Omega_1(\rho,\lambda)(1+z\Omega_1(\rho,\lambda))}{\rho |z\Omega_1(\rho,\lambda)-1|^3}, \label{eqn:N}$$ where $\Omega_1(\rho,\lambda)$ is the density of unitary domains and $z=\exp\{\lambda/kT\}$, with $T$ the temperature and $k$ Boltzmann’s constant. Using the average domain size (Equation \[eqn:N\]) and Equations \[eqn:domdec\] and \[eqn:domrec\], the measured decay and recovery rates can be approximated as $$\begin{aligned} \alpha&\approx\frac{\alpha_1}{\langle N\rangle}, \label{eqn:alp} \\\beta&\approx \beta_1\langle N \rangle,\label{eqn:bet}\end{aligned}$$ where once again $\alpha_1$ and $\beta_1$ are the unitary domain decay and recovery rates, respectively.
Therefore, assuming the unitary domain recovery rate is the same for DO11/PMMA both with and without NPs, we can determine the ratio of average domain sizes by taking the ratio of recovery rates between a sample with NPs and a sample without NPs: $$\begin{aligned} \frac{\beta}{\beta_0}&=\frac{\langle N\rangle}{\langle N_{0}\rangle} \\ &=\frac{\rho_0\Omega_1(\rho,\lambda)}{\rho\Omega_1(\rho_0,\lambda_0)}\frac{1+z\Omega_{1}(\rho,\lambda)}{1+z_0\Omega_1(\rho_0,\lambda_0)}\frac{ |z_0\Omega_1(\rho_0,\lambda_0)-1|^3}{ |z\Omega_1(\rho,\lambda)-1|^3},\label{eqn:ratio}\end{aligned}$$ where the subscript 0 corresponds to the parameters of the system without nanoparticles and no subscript corresponds to the system with nanoparticles. Comparing the recovery rate of DO11+ZrO$_2$/PMMA to that of similarly dye-doped DO11/PMMA without NPs [@Ramini13.01; @raminithesis], we find a ratio of $\beta/\beta_0\approx 12.5$, which means that with the inclusion of NPs the domain size is an order of magnitude larger. Given this large difference, and the linear relationship between domain size and the density parameter, we conclude that the primary influence of the NPs is on the free energy advantage. Assuming that the density parameter is unchanged by the introduction of NPs, we can numerically solve Equation \[eqn:ratio\] for the new free energy advantage and find $\lambda \approx 0.41$ eV, which is 0.12 eV larger than DO11/PMMA’s value of 0.29 eV [@Ramini12.01; @Ramini13.01; @Anderson14.02]. With the new free energy advantage determined, we can estimate the average domain size for our system and the unitary domain decay and recovery rates. Substituting the new free energy advantage into Equation \[eqn:N\] we find that the average domain size of our system is $\langle N \rangle = 375$.
Using this domain size, along with Equations \[eqn:alp\] and \[eqn:bet\], we determine the unitary domain decay rate to be $\alpha_1\approx 11.86 (\pm 0.38)$ cm$^2$W$^{-1}$min$^{-1}$ and the unitary domain recovery rate to be $\beta_1\approx 1.00(\pm0.10)\times 10^{-4}$ min$^{-1}$, both of which are within uncertainty of the measured values for DO11/PMMA [@Ramini13.01]. This agreement of the unitary domain decay and recovery rates between DO11/PMMA and DO11+ZrO$_2$/PMMA implies that our original assumption is correct: the NPs only influence the domain size (via the free energy advantage) and do not influence the underlying molecular interactions leading to reversible photodegradation. Additionally, the success of the CCDM in correctly accounting for the influence of NPs on DO11/PMMA’s decay and recovery is a strong indication that the CCDM is a robust description of reversible photodegradation for DO11 dye-doped polymers, with the addition of NPs resulting in an increased free energy advantage. One possible explanation for this increase in the free energy advantage is that the introduction of NPs affects the local electric field experienced by the dye molecules, thereby influencing the underlying interactions behind the free energy advantage. This effect on the local electric field arises because the introduction of ZrO$_2$ NPs (with a dielectric constant of $\approx 4.88$) into the polymer (with a dielectric constant of $\approx 2.22$) increases the dielectric constant of the dye’s local environment.
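The unitary-domain rates quoted above follow directly from inverting Equations \[eqn:alp\] and \[eqn:bet\], and the arithmetic can be checked in a few lines (measured rates and $\langle N \rangle = 375$ taken from the text):

```python
alpha = 3.16e-2   # measured decay parameter (cm^2 W^-1 min^-1)
beta = 3.75e-2    # measured recovery rate (min^-1)
N_avg = 375       # estimated average domain size

alpha_1 = alpha * N_avg   # unitary domain decay rate, Eq. [eqn:alp] inverted
beta_1 = beta / N_avg     # unitary domain recovery rate, Eq. [eqn:bet] inverted
print(alpha_1, beta_1)    # ~11.85 cm^2 W^-1 min^-1 and ~1.0e-4 min^-1
```

The small difference from the quoted $\alpha_1\approx 11.86$ reflects rounding of the inputs.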
To estimate the magnitude of this effect on the local field factor we recall that for a spherical cavity in a uniform dielectric medium with dielectric constant $\epsilon$, the local field factor is given by [@jacks96.01]: $$L\propto \frac{3\epsilon}{2\epsilon+1}.\label{eqn:LFF}$$ Using the relevant concentrations of NPs, dye, and polymer we determine that the permittivity is approximately 10% larger, which, using Equation \[eqn:LFF\], results in the local field factor becoming 3% larger. Since the dielectric energy of a system depends on the square of the local field factor, we conclude that the dielectric influence of the NPs increases the free energy advantage by 6%, or 0.018 eV, which is too small to account for the total change in energy calculated above. This suggests two different possibilities: (1) the change in the free energy advantage also includes a contribution from another unidentified effect, most likely related to the interactions of dye, polymer, and NPs; or (2) our simplistic treatment of the dielectric constant and local field factor may underestimate the actual enhancement of the local electric field, especially since ZrO$_2$ is a transparent conductive oxide [@Brune98.01; @Naik13.01], which can lead to plasmonic effects that drastically increase the electric field [@Naik13.01].

### Effect of photodegradation and recovery on linewidth and wavelength of random lasing

In addition to considering the decay and recovery dynamics of the RL intensity, we also quantify the changes in the lasing peak location and linewidth during decay and recovery, as shown in Figure \[fig:wave\]. From Figure \[fig:wave\] we find that during degradation the lasing peak blueshifts and the FWHM increases. After the pump beam is blocked (except for measurements during recovery), both the lasing peak and FWHM return to within uncertainty of their initial values, suggesting that the photodegradation process is truly *reversible*.
This result differs from that observed for R6G+NP/PU, where the peak intensity fully recovers, but the lasing wavelength and linewidth are irreversibly changed by photodegradation [@Anderson15.01; @Anderson15.03].

![Peak wavelength and RL linewidth as a function of time during decay and recovery.[]{data-label="fig:wave"}](11414wavelength)

The other difference between the photodegradation and self-healing of DO11+ZrO$_2$/PMMA and R6G+NP/PU, observed in Figure \[fig:wave\], is that the lasing wavelength blueshifts during decay for DO11+ZrO$_2$/PMMA, while it redshifts for R6G+NP/PU. In R6G+NP/PU, the redshifting of the lasing peak during photodegradation and recovery is attributed to photothermal heating and the formation of R6G dimers and trimers [@Anderson15.01; @Anderson15.03]. The observation of the opposite effect in DO11+ZrO$_2$/PMMA suggests that a different mechanism is responsible, the most likely one being related to the observed blueshift in DO11/PMMA’s absorbance spectrum during photodegradation [@embaye08.01; @Anderson13.01; @andersonthesis; @Anderson14.02]. A blueshift in the absorbance spectrum means that shorter wavelengths experience less loss, which results in the RL emission blueshifting such that the gain-to-loss ratio is maximized. This effect is also observed in the ASE spectrum of DO11 in different solvents, where a blueshift in the absorbance peak leads to a blueshift in the emission peak [@howel04.01]. Additionally, this blueshift leads to a larger portion of the emission spectrum being amplified, resulting in the linewidth increasing, which is observed in the RL spectrum and has been observed in the ASE spectrum [@howel04.01].
Conclusions
===========

Based on the observation of reversible photodegradation in DO11/PMMA and the desirability of robust organic-dye based random lasers for a variety of applications (such as speckle-free imaging [@Redding12.01], tunable light sources [@Cao03.01], and optical physically unclonable functions [@Anderson14.04; @Anderson14.05]), we investigate the emission properties of DO11+ZrO$_2$/PMMA under nanosecond optical pumping. We find that for dye concentrations of less than 1.0 wt%, DO11+ZrO$_2$/PMMA lases in the IFRL regime, while for a dye concentration of 1.0 wt% no lasing is observed for pump intensities up to the ablation threshold ($I\approx 754$ MW/cm$^2$). The lasing threshold is found to increase with concentration, with the 0.1 wt% sample having a threshold intensity of $75.8 \pm 9.4$ MW/cm$^2$ and the 0.5 wt% sample having a threshold intensity of $121.1 \pm 2.1$ MW/cm$^2$. Both concentrations are found to have lasing wavelengths near 645 nm with a linewidth of approximately 10 nm. This lasing wavelength region is attractive for use with hydrocarbon-based polymers, as these polymers have an absorption minimum near 650 nm [@howel02.01]. Along with the random lasing properties of DO11+ZrO$_2$/PMMA, we also measure the material’s photodegradation and recovery. We find that DO11+ZrO$_2$/PMMA photodegrades reversibly, with the RL spectra before and after a photodegradation and recovery cycle being identical. During photodegradation the lasing peak is found to blueshift, widen, and decrease in intensity, while during recovery the opposite occurs, with the lasing peak returning to its initial intensity, location, and linewidth. This suggests that the observed degradation is truly reversible, in contrast to measurements of R6G+NP/PU random lasers, where the linewidth and peak wavelength are changed after decay and recovery [@Anderson15.01; @Anderson15.03].
While DO11+ZrO$_2$/PMMA is found to reversibly photodegrade like DO11/PMMA, the introduction of NPs into the dye-doped matrix affects the decay and recovery rates, with DO11+ZrO$_2$/PMMA displaying increased photostability and recovering more quickly than similarly dye-doped DO11/PMMA. These changes are explicable within the CCDM [@raminithesis; @andersonthesis; @Ramini13.01; @Anderson14.02], with the NPs increasing the free energy advantage to an estimated value of 0.41 eV but having little to no effect on the unitary domain decay and recovery rates, which are found to be in agreement with previous measurements of DO11/PMMA without NPs [@Ramini13.01]. While we have provided estimates of the CCDM model parameters for DO11+ZrO$_2$/PMMA, a more thorough study is required to determine their precise values. We are therefore planning measurements of DO11+ZrO$_2$/PMMA’s decay and recovery at different temperatures, applied electric fields, and dye concentrations, which will allow for the calculation of all CCDM parameters [@Ramini12.01; @Ramini13.01; @raminithesis; @Anderson14.02]. Finally, the observation of fully reversible photodegradation in DO11+ZrO$_2$/PMMA has promising prospects for the development of robust, photodegradation-resistant random lasers for real-world applications. We foresee using random lasers based on DO11+ZrO$_2$/PMMA in low duty-cycle applications, such that the effects of photodegradation are mitigated by the material’s self-healing mechanism, thus allowing prolonged use. While we use a very low duty cycle during our recovery measurements – to minimize photodegradation and maximize self-healing – we hypothesize that the material can function without significant photodegradation at higher duty cycles, depending on the pump intensity. Further work is required to determine the actual break-even duty cycle at which photodegradation and self-healing are balanced.
Acknowledgements
================

This work was supported by the Defense Threat Reduction Agency, Award \# HDTRA1-13-1-0050 to Washington State University.
--- author: - | Alessandro Mirone\ European Synchrotron Radiation Facility, BP 220, F-38043 Grenoble Cedex, France title: ' Ground state and excitation spectra of a strongly correlated lattice by the coupled cluster method.' ---

We apply the Coupled Cluster Method to a strongly correlated lattice Hamiltonian and extend the Coupled Cluster linear response method to the calculation of electronic spectra. We do so by finding an approximation to a resolvent operator which describes the spectral response of the Coupled Cluster solution to excitation operators. In our Spectral Coupled Cluster Method the ground and excited states appear as resonances in the spectra, and the resolvent can be iteratively improved in selected spectral regions. We apply our method to a $MnO_2$ plane model which corresponds to previous experimental works.

Introduction
============

The numerical methods for solid state physics span a wide range of techniques which aim to provide approximate solutions to the problem of many-body interactions in correlated systems, the exact solution being unknown. Among these techniques, some of the most notable are: dynamical mean-field theory (DMFT), which uses a self-energy correction term obtained from an Anderson-impurity many-body solution; the GW approximation, which calculates the self-energy while neglecting vertex corrections; the quantum Monte Carlo method; and the Coupled Cluster Method (CCM). The Coupled Cluster method was conceived in the 1950s by Fritz Coester and Hermann Kümmel for nuclear physics, and has since progressively spread to other domains[@reviews]. In quantum chemistry, in particular, CCM is widely regarded as the most reliable choice when high accuracy is needed[@ccmchemistry]. Concerning lattice models of strongly interacting electrons, CCM has recently been applied to spin lattices[@bishop] and to the Hubbard model[@roger].
Although the CCM was initially formulated as a ground state approximation, the recent development of CCM time-dependent linear response [@linearresp] has extended the applicability of CCM to excited states. In particular, Crawford and Ruud have calculated vibrational eigenstate contributions to Raman optical activity[@ramspectra], while Govind et al. have calculated excitonic states in potassium bromide[@excitons]. In this paper we extend the coupled cluster linear response method to the calculation of electronic excitation spectra. To do so, we represent an initial wave-function as the product of the probe operator times the CCM solution, and we develop an original solution method for the resolvent equation within the CCM ansatz. This paper is organised in the following way. In section \[metodo\] we detail the equations. In section \[modello\] we describe the model of a $MnO_2$ plane, derived from previous absorption and scattering x-ray spectroscopy studies, on which we test our method. We discuss the results of our Spectral Coupled Cluster Method in section \[discussione\]. There we also validate the method by comparing it to the exact solution that we can obtain when we restrict the Hilbert space dimension to such an extent that exact diagonalisation is possible.

Method {#metodo}
======

In the Coupled Cluster method[@reviews] one seeks an approximate solution to the eigenproblem $$H \left| \Psi \right> =E \left| \Psi \right>$$ where $H$ is the Hamiltonian in second quantization and is formed by a sum of products of one-particle creation and annihilation operators. The one-particle operators change, between $1$ and $0$, the integer occupation numbers of the one-particle orbitals contained in the model.
The solution is represented, given a reference state $\left| \Phi_0 \right>$, by the exponential ansatz $$\left| \Psi \right>= e^{S} \left| \Phi_0 \right> \simeq e^{S_N} \left| \Phi_0 \right>$$ where $S$ is the ideal exact solution and $S_N$ is a sum, truncated to $N$ terms, of products of electron-hole pair excitations: $$S_N = \sum^N_{i=1}{ t_i ~ Symm { \left\{ \prod_{k=1}^{n_i} c^\dag_{\alpha_{i,k}} c^\dag_{a_{i,k}} \right\} }} \label{Ssum}$$ In this formula $N$ is the number of degrees of freedom of the ansatz; the larger this number, the more accurate the representation. The $t_i$’s are free coefficients that must be obtained from the CCM equations below. Each term in the sum is a product of electron(hole)-creation operators $c^\dag$ and is determined by a choice of indexes $\alpha_{i,k}$ ($a_{i,k}$), with the Greek (Latin) letter $\alpha$ ($a$) ranging over empty (occupied) orbitals. One can consider the reference state as the [*vacuum*]{} state, so that each term $i$ in the sum $S_N$ creates, from vacuum, an excited state populated by $n_i$ particles (holes and/or electrons). The $Symm$ operator makes the ansatz symmetric under the Hamiltonian symmetry subgroup which transforms, up to a factor, the reference state $\left| \Phi_0 \right>$ into itself. In the Coupled Cluster method, the rationale for the exponential ansatz resides in its size-extensivity property. This means that for a system composed of two non-correlated parts, $A$ and $B$, the coupled cluster ansatz operator can be factorized as the product of two operators, $e^{S^{A+B}}= e^{S^{A}}e^{S^{B}}$. This simple factorisability relation has deep consequences[@reviews], one of the most important being that, in a system with periodic translational symmetry, the calculation complexity for a given accuracy does not depend on the system size.
The CCM equations are obtained by substituting $\left| \Psi \right>$ in the eigen-equation with its ansatz and multiplying on the left by $e^{-S_N}$, the inverse of the ansatz operator. One obtains for the eigenvalue $$E = \left< \Phi_0 \right| e^{-S_N} H e^{S_N} \left| \Phi_0 \right>$$ while the free parameters are obtained by setting the eigen-equation residue to zero in the space of excited states which enter the $S_N$ sum: $$0 = \left< \Phi_0 \right|\left( \prod_{k=1}^{n_i} c_{a_{i,k}} c_{\alpha_{i,k}} \right) e^{-S_N} H e^{S_N} \left| \Phi_0 \right> ~ \forall i \in [1,N] \label{ccmeqs}$$ The Coupled Cluster Method expands these equations by means of the Hausdorff expansion formula, which for two arbitrary operators $A$ and $B$ states that: $$e^{-A}B e^{A}= B+[B,A]+\frac{1}{2!}[[B,A],A]+\dots+\frac{1}{n!}[\dots[[B,A],A],\dots,A]+\dots \label{Hausdorff}$$ The numerical applicability of CCM relies on the fact that, when $A$ is replaced by $S_N$ and $B$ by $H$, only the first five terms of the series can be non-zero. This can be demonstrated by considering that $S_N$ is formed by creation operators only, and that the interactions contained in $H$ are composed of products of up to four single-particle operators for the Coulomb interaction. For each term of the expansion, every $S_N$ entering the commutators must have at least one one-particle creation operator contracted with one annihilation operator of $H$ for the term not to be identically zero. Equation \[ccmeqs\] gives $N$ polynomial equations from which we can determine the $N$ unknowns $t_i$. These equations have order up to four in the $t_i$ variables, because this is the maximum order in $S_N$ of the non-zero terms of the Hausdorff expansion. The number of solutions of a system of polynomial equations explodes exponentially with the number of equations, and it is not possible, except for small systems, to explore systematically the whole solution space.
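The termination of the Hausdorff series can be illustrated with plain matrices: if $A$ is nilpotent (as a finite product-of-creation-operators sum is on a finite orbital set), the nested-commutator series reproduces $e^{-A}Be^{A}$ exactly after a few terms. This is a toy numerical check of the algebraic mechanism, not the operator algebra itself.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
# Strictly upper-triangular A is nilpotent (here A^4 = 0), mimicking the
# creation-only structure of S_N that makes the expansion terminate.
A = np.triu(rng.normal(size=(4, 4)), k=1)

exact = expm(-A) @ B @ expm(A)

# Hausdorff series: B + [B,A] + (1/2!)[[B,A],A] + ...
series = B.copy()
term = B.copy()
for k in range(1, 8):                 # beyond k = 6 every term vanishes here
    term = (term @ A - A @ term) / k  # next nested commutator, divided by k!
    series = series + term

print(np.allclose(exact, series))
```

Because $A^4 = 0$, every nested commutator with seven or more factors of $A$ contains at least four of them on one side of $B$ and vanishes, so the truncated series agrees with the exact similarity transform to machine precision.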
To solve the equations we instead use Newton’s method to follow the solution, increasing iteratively the number of free parameters and using as a starting point, for the $N$-parameter problem, the $N-1$ parameters found at the previous iteration plus a random choice for the $N^{th}$ one. The accuracy of the CCM solution increases with $N$. At each iteration the new $(N+1)^{th}$ term is constructed, in equation \[Ssum\], by assigning its order $n_{(N+1)}$ and by choosing the concerned electron and hole orbitals, which are expressed by the sets of indexes $\alpha_{(N+1),k}$ and $a_{(N+1),k}$, with $k$ ranging from $1$ to $n_{(N+1)}$. We denote the ensemble of all possible choices by the symbol $$\left\{ \left( n^{\prime}_\zeta ,\alpha^{\prime}_{\zeta,1}...,a^{\prime}_{\zeta,1}... \right) \right\}$$ where the possible choices satisfy the condition $$0 \neq \left< \Phi_0 \right|\left( \prod_{\zeta=1}^{n^{\prime}_\zeta} c_{a^{\prime}_{\zeta,k}} c_{\alpha^{\prime}_{\zeta,k}} \right) e^{-S} H e^{S} \left| \Phi_0 \right> \label{residue}$$ The simplest choice consists in selecting the $\zeta$ which gives the largest residue in equation \[residue\]. Once we have obtained the CCM ground state and its ground energy $E$, we are interested in the transition probability for a time-dependent perturbation $exp(i \omega_D t) D$, where $D$ is an arbitrary product of $c^\dag$ operators. The transition rate is given by the Fermi golden rule, which states that the probability for the absorption of an energy quantum $\hbar \omega_D = \hbar \omega -E$, with $\hbar \omega$ being the final state energy, is proportional to: $$\rho_D(\omega,\gamma) = \Im m \frac{ \left< \Phi_0 \right|e^{S^\dag} D^\dag (H-\omega-i \gamma)^{-1} D e^{S} \left| \Phi_0 \right>} { \left< \Phi_0 \right|e^{S^\dag} e^{S} \left| \Phi_0 \right> }$$ where $\gamma$ is a small line width.
In order to calculate the above expression we have to solve two problems: find an approximate solution $R$ for the resolvent equation $$(H-\omega-i \gamma) |R> = D e^{S} \left| \Phi_0 \right>$$ and calculate the scalar product. We represent an approximate solution for the resolvent by introducing the approximating operator $R_{D,\omega,\gamma}$ and the following ansatz, which is similar to the ansatz for $S$ with the difference that it contains both annihilation and creation operators and that, in order to access the whole spectrum, no symmetrization is done: $$\begin{aligned} |R> =& R_{D,\omega,\gamma} D e^{S} \left| \Phi_0 \right> \label{rsolution}\\ R_{D,\omega,\gamma} =& r^0_{D,\omega,\gamma} + \sum^{N^r}_{i=1}{ r^i_{D,\omega,\gamma} \prod_{k=1}^{n^r_i} \hat c_{j_{i,k}} } \label{ransatz}\end{aligned}$$ In this expression the $r^i$ are free parameters and we have introduced the notation $\hat c$ to represent in a compact way both creation and annihilation operators. Denoting by $N_{orbs}$ the total number of represented orbitals (occupied and empty), the definition of the $\hat c$ operator is: $$\hat c_j= \left\{ \begin{matrix} c^\dag_j & j \in [1,N_{orbs}] \\ c_{j-N_{orbs}} & j \in [N_{orbs}+1,2*N_{orbs}] \end{matrix} \right.$$ We build our spectral CCM equations (SCCM equations) by multiplying on the left by $e^{-S_N}$ and setting the residue to zero: $$\begin{aligned} 1 =& \left< \Phi_0 \right|D^\dag e^{-S} (H-\omega-i \gamma) R_{D,\omega,\gamma} D e^{S} \left| \Phi_0 \right> \\ 0 =& \left< \Phi_0 \right|D^\dag \left( \prod_{k=1}^{n^r_i} \hat c_{j_{i,k}} \right)^{*} e^{-S}(H-\omega-i \gamma) R_{D,\omega,\gamma} D e^{S} \left| \Phi_0 \right> ~ \forall i \in [1,N^r] \label{reseqs}\end{aligned}$$ Note that the validity of the above equations relies on the fact that $D$, being a product of $c^\dag$ operators, commutes with $S$.
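Stripped of the CCM machinery, the resolvent equation is simply a linear system at each $\omega$. The toy sketch below solves $(H-\omega-i\gamma)|R\rangle = D|\psi\rangle$ exactly for a small random Hermitian matrix and shows that $\Im m\,\langle\psi|D^\dag|R\rangle$ peaks at the eigenvalues of $H$, which is the behavior the SCCM ansatz approximates on the full Hilbert space. The matrix, probe operator, and state are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6))
H = (M + M.T) / 2                   # small Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
psi = evecs[:, 0]                   # plays the role of e^S |Phi_0>
D = rng.normal(size=(6, 6))         # arbitrary probe operator

def rho_D(omega, gamma=0.05):
    """Im <psi| D^dag (H - omega - i*gamma)^(-1) D |psi>, obtained by
    solving the resolvent equation (H - omega - i*gamma)|R> = D|psi>."""
    R = np.linalg.solve(H - (omega + 1j * gamma) * np.eye(6), D @ psi)
    return float(np.imag((D @ psi) @ R))

omegas = np.linspace(evals[0] - 1.0, evals[-1] + 1.0, 2001)
spectrum = np.array([rho_D(w) for w in omegas])
peak = omegas[np.argmax(spectrum)]
print(peak, evals)  # the strongest resonance sits on an eigenvalue of H
```

Writing the resolvent in the eigenbasis gives $\rho_D(\omega,\gamma)=\sum_n |\langle n|D|\psi\rangle|^2\,\gamma/[(E_n-\omega)^2+\gamma^2]$, a sum of positive Lorentzians centered on the eigenvalues, weighted by the probe matrix elements.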
These equations are expanded by the Hausdorff expansion formula, substituting, in equation \[Hausdorff\], $A$ with $S_N$ and $B$ with $ (H-\omega-i \gamma) R_{D,\omega,\gamma} $. The Hausdorff expansion contains, also in this case, a finite number of non-zero terms, because each term of the resolvent operator in equation \[ransatz\] contains a finite number of annihilation operators while, as discussed above, each term of $H$ contains a maximum of four annihilation operators. The expansion gives a set of linear equations for the $r$ parameters. The accuracy of the resolvent equation is improved by systematically increasing $N^r$, selecting, at each iteration, the set of numbers $$\left\{ \left( n^{r}_{_{N^r+1}} ,j_{_{N^r+1,0}},....,j_{_{N^r+1,n^r_{N^r+1}}} \right) \right\}$$ corresponding to the largest residue in the SCCM equations. When we calculate the residue we fix $\omega=\omega_r$ at the center of the spectral region of interest. Over the spectral region of interest the $r$ parameters are given by a linear-algebra operation of the kind $ r=(M_1)^{-1} (M_2+\omega M_3) $, where the $M$'s are matrices obtained from the SCCM expansion. Once we know the $R$ operator we can calculate the spectrum with the following equation: $$\rho_D(\omega,\gamma) = \Im m \frac{ \left< \Phi_0 \right|e^{S^\dag} D^\dag R_{D,\omega,\gamma} D e^{S} \left| \Phi_0 \right>} { \left< \Phi_0 \right|e^{S^\dag} e^{S} \left| \Phi_0 \right> } \label{Rspettro}$$ This expression can be expanded using Wick's theorem and the linked-cluster theorem, as already done by Sourav et al.[@Sourav]. Contracting in all possible ways the operators contained in $ D ^\dag R_{D,\omega,\gamma} D$ with themselves, one obtains sums of products of Green's functions of different orders. 
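The frequency dependence of the $r$ parameters enters only through the affine combination $M_2+\omega M_3$, so once the $M$ matrices are assembled one can sweep a whole spectral window by changing only the right-hand side. A schematic numerical sketch (small random matrices stand in for the actual SCCM output):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M1 = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # assumed nonsingular
M2 = rng.standard_normal(n)
M3 = rng.standard_normal(n)

omegas = np.linspace(-1.0, 1.0, 201)
# r(omega) = M1^{-1} (M2 + omega * M3): the same linear system is solved
# for every frequency; only the right-hand side depends on omega.
rhs = M2[:, None] + omegas[None, :] * M3[:, None]   # shape (n, n_omega)
r = np.linalg.solve(M1, rhs)                        # all frequencies at once
```

Here `np.linalg.solve` treats each column of `rhs` as an independent right-hand side, which is exactly the structure of the frequency sweep.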
To simplify this we use the simplest approximation, which consists of setting to zero all the connected Green's functions except the one-particle Green's function: $$G(j_1,j_2) = \frac{ \left< \Phi_0 \right|e^{S^\dag} \hat c_{j_1} \hat c_{j_2} e^{S} \left| \Phi_0 \right> }{ \left< \Phi_0 \right|e^{S^\dag} e^{S} \left| \Phi_0 \right> } \label{greendef}$$ We expand this equation for $G$ using Wick's and the linked-cluster theorems. We obtain a hierarchical set of equations involving Green's functions of arbitrary order. This expansion needs to be truncated by choosing a closure relation. This closure relation is already provided by the choice that we have made of setting to zero all the connected Green's functions except the two-point one. To obtain the Dyson equation for the Green's function we proceed in the following way. Each time we contract a $\hat c$ operator with one of the terms contained in $S$, on the right, or with $S^\dag$ on the left, a new vertex is obtained from which a number of new lines, equal to the order of the term minus one, come out. We consider all the combinatorial ways of contracting all these lines with themselves, except one branch which propagates the Green's function further. This is analogous to the Hartree-Fock approximation, where two of the four legs of each Coulomb vertex are contracted with each other. The Dyson equation is solved iteratively. The final result for the spectral function of equation \[Rspettro\] depends linearly on the parameters $r$, which are functions of $\omega$, and contains products of Green's functions (defined by equation \[greendef\]). The positions of the spectral resonances depend on the $r$ parameters, which are found by the SCCM equations and whose behavior accounts for many-body correlations. The intensities of the resonances, instead, depend on our Hartree-Fock-like truncation, which still accounts for many-body interactions but at the mean-field level. 
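The iterative solution of the Dyson equation can be illustrated on a scalar caricature, $G = G_0 + G_0\,\Sigma(G)\,G$, where the made-up self-energy functional $\Sigma(G)=u\,G$ stands in for the vertex contractions described above (the numbers are purely illustrative):

```python
def solve_dyson(g0, u, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for the scalar Dyson equation
    G = g0 + g0 * Sigma(G) * G with the toy self-energy Sigma(G) = u * G."""
    g = g0
    for _ in range(max_iter):
        g_new = g0 + g0 * (u * g) * g
        if abs(g_new - g) < tol:
            return g_new
        g = g_new
    raise RuntimeError("Dyson iteration did not converge")

g = solve_dyson(g0=0.5, u=0.3)
# The converged G satisfies the Dyson equation to the requested tolerance.
```

In the actual problem $G$ and $\Sigma$ are matrices in the orbital indices, but the self-consistency loop has the same structure.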
Model {#modello} ===== In previous studies on manganites we applied exact diagonalisation and the Lanczos method to the study of resonant X-ray scattering [@Mirone] at the $L_2$,$L_3$ edges and of $K_\beta$ fluorescence [@Herrero]. The spectroscopy data were modeled with a small planar cluster, described in second quantization. The model consisted of the open-shell orbitals of the central Mn atom, plus some selected orbitals localised on the first neighbouring shell of oxygen atoms and Mn atoms. These studies revealed a pronounced O 2p character of the doped charge carriers, and the non-local nature of the forces governing the charge redistribution phenomena which are very important in these systems. Accounting for a few extra orbitals from neighbouring shells, besides the resonating atom, is crucial in describing these phenomena, but one rapidly encounters the limit of the exponential growth of the Hilbert space dimension when trying to extend the size of the cluster. To calculate the ground state and spectra of larger systems, while still keeping a good description of the many-body correlations, we have developed the methods described in this paper. We will compare the SCC method to exact numerical results that we obtain in a truncated Hilbert space. To keep the system numerically affordable for the exact diagonalisation technique we consider a small $2\times2$ $MnO_2$ lattice with periodic boundary conditions. The Mn sites are placed at integer coordinates $(2 i, 2 j)$, with $i$ and $j$ taking the values $0$ and $1$, while the oxygen atoms are at positions $(2 i+1, 2 j)$ and $(2 i, 2 j+1)$. In order to keep the dimension of the Hilbert space as small as possible we restrict the degrees of freedom to those orbitals which are the most important for the physics of manganites. These are the $e_g$ $3d$ orbitals of $Mn$, namely the $x^2-y^2$ and $3z^2-r^2$ orbitals, and the $p$ oxygen orbitals which point toward the $Mn$ sites. 
For the oxygens we keep only $p_x$ for the $(2 i+1, 2 j)$ sites and $p_y$ for the $(2 i, 2 j+1)$ sites. These are the oxygen orbitals which bridge the Mn sites along the $x$ and $y$ directions. The system Hamiltonian is composed of several terms: $$H = H_{bare}+H_{hop} +H^{Mn}_U +H^{Mn}_J + H^{O}_U$$ namely $H_{bare}$, which contains the one-particle energies of the orbitals; the hopping Hamiltonian $H_{hop}$, which moves electrons between neighboring sites; and the Hubbard correlations $H^{Mn}_U$, $H^{Mn}_J$ and $ H^{O}_U$ for manganese and oxygen. The $ H^{O}_U$ term is used because, applying exact diagonalisation, we truncate the Hilbert space by limiting the $p$ orbital occupation numbers between $1$ and $2$. In the CC method, instead, we cannot truncate, because this would destroy the commutation relations. We have instead the possibility, in CCM, of choosing a high value of $U$ in the Hubbard correlation $ H^{O}_U$, in conjunction with the oxygen part of $H_{bare}$, to effectively limit the oxygen $p$ orbital occupation numbers, thus making the comparison with the exact solution of the truncated model possible. The bare Hamiltonian is $$\begin{aligned} H_{bare}= \sum_{i,j,g_d,\sigma} \epsilon_{d, \sigma}~ d^\dag_{_{g_d,\sigma,2i,2j}}d_{_{g_d,\sigma,2i,2j}} + \nonumber \\ (\epsilon_{p}-U_p) \sum_{i,j,\sigma} \left( p^\dag_{_{x,\sigma,2i+1,2j}}p_{_{x,\sigma,2i+1,2j}} +p^\dag_{_{y,\sigma,2i,2j+1}}p_{_{y,\sigma,2i,2j+1}} \right)\end{aligned}$$ where the $g_d$ index takes the values $ g_d=x^2-y^2,3z^2-r^2$, with $x,y$ being in plane and $z$ out of plane. The $Mn$ one-particle energies $\epsilon_{d,\sigma}$ are spin-dependent to take into account the mean-field exchange with the occupied Mn $t_{2g}$ orbitals ($xy,xz,yz$), whose degrees of freedom are discarded from the model. The oxygen orbital term includes the Hubbard coefficient $-U_p$ to compensate $ H^{O}_U$ and to favor double and single occupations on the oxygens. 
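The site bookkeeping of this $2\times2$ periodic cluster is easy to make explicit. A small sketch enumerating the retained orbitals (the labels are our own illustrative convention):

```python
sites = []
for i in range(2):
    for j in range(2):
        # Mn at (2i, 2j) carries the two e_g orbitals.
        sites.append(((2 * i, 2 * j), "Mn", ("x2-y2", "3z2-r2")))
        # Bridging oxygens: p_x at (2i+1, 2j), p_y at (2i, 2j+1).
        sites.append(((2 * i + 1, 2 * j), "O", ("p_x",)))
        sites.append(((2 * i, 2 * j + 1), "O", ("p_y",)))

n_mn = sum(1 for _, kind, _ in sites if kind == "Mn")
n_o = sum(1 for _, kind, _ in sites if kind == "O")
n_orb = sum(len(orbs) for _, _, orbs in sites)
```

The count gives 4 Mn sites, 8 O sites, and 16 spatial orbitals (32 spinorbitals once spin is included), which sets the scale of the one-particle basis used below.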
The hopping term is $$H_{hop} = t~ \sum_{i,j,g_d,\sigma} \sum_{s=\pm 1} s \left( f_{g_d,x} p^\dag_{_{x,\sigma,2i-s,2j}} d_{_{g_d,\sigma,2i,2j}} + f_{g_d,y} p^\dag_{_{y,\sigma,2i,2j-s}} d_{_{g_d,\sigma,2i,2j}} + h.c. \right)$$ where $$\begin{aligned} f_{3z^2-r^2,x}=f_{3z^2-r^2,y}=1/2 \nonumber \\ -f_{x^2-y^2,y}=f_{x^2-y^2,x}=\sqrt 3/2 \nonumber\end{aligned}$$ The intra-site repulsive Coulomb interaction for $Mn$ consists of a part for an electron pair in the same orbital and another part for electrons in two different orbitals: $$H^{Mn}_U = \sum_{i,j,g_d} U_{d} ~ n_{_{g_d,\sigma=+\frac{1}{2},2i,2j}}n_{_{g_d,\sigma=-\frac{1}{2},2i,2j}} + \sum_{i,j,\sigma_1,\sigma_2} U^\prime_{d} ~ n_{_{3 z^2-r^2,\sigma_1,2i,2j}}n_{_{x^2-y^2,\sigma_2,2i,2j}}$$ The Coulomb exchange for the $e_g$ orbitals is $$H^{Mn}_J = J_{d} ~ \sum_{i,j,\sigma_1, \sigma_2} d^\dag_{_{3 z^2-r^2,\sigma_2,2i,2j}}d^\dag_{_{x^2-y^2,\sigma_1,2i,2j}} d_{_{3 z^2-r^2,\sigma_1,2i,2j}}d_{_{x^2-y^2,\sigma_2,2i,2j}}$$ while the $e_g$-$t_{2g}$ exchange is included as a mean-field term inside $H_{bare}$. Finally, the oxygen Hubbard term is $$H^{O}_U = \sum_{i,j} U_{p} ~ \left( n_{_{p_x,\sigma=+\frac{1}{2},2i+1,2j}}n_{_{p_x,\sigma=-\frac{1}{2},2i+1,2j}} + n_{_{p_y,\sigma=+\frac{1}{2},2i,2j+1}}n_{_{p_y,\sigma=-\frac{1}{2},2i,2j+1}} +2 \right)$$ The contribution of the terms factored by $U_p$ to the total Hamiltonian is identically zero when we restrict the $n_p$ occupations between $1$ and $2$. To fix the free parameters of the model we use knowledge from our previous work on manganites[@Mirone]. Parameters are given in $eV$ units. The effective Slater integrals used in that work correspond, in the present model, to $ U_{d}=6.88 $, $ U^\prime_{d} = 5.049$, $J_{d}=-0.917$. The exchange with the occupied polarized $t_{2g}$ orbitals gives a $\simeq 2 eV$ splitting between $\epsilon_{d,\sigma=-\frac{1}{2}}=2$ and $\epsilon_{d,\sigma=+\frac{1}{2}}=0$ in the case of ferromagnetic alignment. 
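As a check on the sign structure of $H_{hop}$, one can assemble the one-particle hopping matrix between the $e_g$ and bridging-$p$ orbitals and verify its Hermiticity. This is an illustrative construction with $t=1.8$; the orbital ordering and the modulo-4 periodic wrapping are our own conventions, not taken from the implementation:

```python
import numpy as np

t = 1.8
f = {("3z2-r2", "x"): 0.5, ("3z2-r2", "y"): 0.5,
     ("x2-y2", "x"): np.sqrt(3) / 2, ("x2-y2", "y"): -np.sqrt(3) / 2}

# Orbital basis for one spin sector: 8 d orbitals (2 per Mn) + 8 p orbitals.
d_index = {}
p_index = {}
k = 0
for i in range(2):
    for j in range(2):
        for g in ("x2-y2", "3z2-r2"):
            d_index[(2 * i, 2 * j, g)] = k; k += 1
for i in range(2):
    for j in range(2):
        p_index[(2 * i + 1, 2 * j)] = k; k += 1   # p_x sites
        p_index[(2 * i, 2 * j + 1)] = k; k += 1   # p_y sites

H = np.zeros((k, k))
for (x, y, g), a in d_index.items():
    for s in (+1, -1):
        # neighbours along x (p_x sites) and y (p_y sites), periodic in 4
        bx = p_index[((x - s) % 4, y)]
        by = p_index[(x, (y - s) % 4)]
        H[bx, a] += t * s * f[(g, "x")]
        H[by, a] += t * s * f[(g, "y")]
H = H + H.T  # add the Hermitian conjugate part
```

Each $e_g$ orbital couples to its four bridging oxygens with alternating signs $s$, so the assembled matrix is Hermitian by construction.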
We use a hopping $t=1.8$ taken from our previous work[@Mirone]. The parameter $\epsilon_p$ controls the amount of charge back-donation from oxygen to manganese. The predominant $O$ $2p$ character of the doped holes found in manganites [@Herrero] corresponds to a value of $\epsilon_p$ which raises the bare oxygen orbital energies above the bare Mn ones. The value of $\epsilon_p$ influences the average occupation of the $e_g$ orbitals. These occupancies match the ones found in the previous works for a value $\epsilon_p \simeq 2$. Discussion {#discussione} ========== To find the CCM ground state and determine the resolvent equation we have adapted our $Hilbert++$ [@hilbertxx; @Mirone] code. This code was originally created to calculate x-ray spectroscopies of small strongly correlated clusters by exact diagonalisation. It implements a second-quantisation representation of operators and determinants. We have implemented automatic computation of commutators and automatic extension of the excitation set for CCM and for our SCC method. The exact diagonalisation and the Lanczos tridiagonalisation for the spectra calculation are performed with $Hilbert++$. The code generates the Hilbert space by repeatedly applying the Hamiltonian to a vector basis which is beforehand initialized with a seed state. In this seed state the occupied spinorbitals are all the oxygen ones and, for ferromagnetic alignment on the $Mn$ sites, all the spinorbitals $3 z^2 -r^2$ with spin $\sigma=+1/2$. This state is named, in the rest of this paper, the [*nominal reference configuration*]{}. The configurations having one or more oxygen sites unoccupied are discarded in the exact calculation. With this limitation on the configurations, the generated Hilbert space grows up to a dimension slightly below $7$ million. To reproduce with the $CCM$ method the exact calculation done on the truncated space we set $U_p$ as high as $10^2eV$. 
We show in figure \[initialwf\] the convergence of the $CCM$ energy as a function of the number of symmetrized excitations contained in the $S$ operator when we take the [*nominal reference configuration*]{} as the reference state. The CCM energy converges, for the nominal reference, to the first excited eigenenergy given by exact diagonalisation above the ground state. We have analysed the ground and the first excited states that we obtain by exact diagonalisation. The largest component of the first excited state is found to be the nominal reference state. This explains why the CCM method, which takes this state as reference, converges to this eigenstate. The ground state, instead, has a different symmetry. We find that there are four components which have the largest factor, and each of these components is obtained by rotating the $e_g$ electron, on one of the four $Mn$ sites, from the $3 z^2 -r^2$ orbital to the $x^2-y^2$ one. In more detail, the ground state has the same symmetry as the state $\left| \hat \Phi_0 \right>$ given by $$\left| \hat \Phi_0 \right> = \sum_{i,j} (-1)^{i+j} d^\dag_{_{x^2-y^2,\sigma=1/2,2i,2j}} d_{_{3 z^2-r^2,\sigma=1/2,2i,2j}} \left| \Phi_0 \right> \label{groundsymm}$$ This state cannot be obtained starting from the nominal reference with the $CC$ method because it has completely different symmetry properties. Notice for example that a $\pi/2$ rotation around the center of the cluster gives a factor $1$ if applied to the nominal reference, but the same rotation gives a factor $-1$ when applied to $\left| \hat \Phi_0 \right>$. To force the $CC$ method to converge to a state of such symmetry, a possible solution would be to use the multi-reference $CC$ method. We have not implemented this method, which requires an a-priori knowledge of the solution, because our work is focused on our Spectral Cluster Method which, as we will see in this section, is able to detect these states by exploring selected spectral regions. 
To test further the capabilities of CCM to converge to the true ground state, we have allowed a possible convergence to the $\left| \hat \Phi_0 \right>$ symmetry by using a reference state $\left| \bar \Phi_0 \right>$ of lower symmetry: $$\left| \bar \Phi_0 \right> = d^\dag_{_{x^2-y^2,\sigma=1/2,0,0}} d_{_{3 z^2-r^2,\sigma=1/2,0,0}} \left| \Phi_0 \right>$$ We show in figure \[initialwf\_lowU\] the convergence of the CCM energies for $U_p=5eV$ using two different choices of the reference state: the high-symmetry state $\left| \Phi_0 \right>$ and the lower-symmetry $\left| \bar \Phi_0 \right>$. The CCM solution for the reference state $\left| \bar \Phi_0 \right>$ converges to a lower energy than the one obtained with the nominal reference state. We cannot compare this calculation, done with $U_p=5eV$, with the results of exact diagonalisation, because the small value of $U_p$ gives access to a larger Hilbert space which is computationally more expensive. On the other hand, for a high value of $U_p=10^2eV$, when the comparison with exact diagonalisation is possible, we have not been able to obtain the ground state starting from the low-symmetry reference state $\left| \bar \Phi_0 \right>$. We think that this difficulty can be explained in the following way: the lower energy of the $\left| \hat \Phi_0 \right>$ symmetry state is due, in the $CCM$ equations, to a kind of bridge, made of operators which link the components of $\left| \hat \Phi_0 \right>$ to each other. These bridges are created when an excitation operator which composes $S$ is transformed by the Hausdorff commutation expansion into another excitation operator which will subsequently enter $S$, and so on. For the particular symmetry of $\left| \hat \Phi_0 \right>$ to be obtained from $\left| \bar \Phi_0 \right>$, these bridges must be long enough to transform one component into another. 
The problem with using a high value of $U_p$ is that for every pair of excitation operators which both create a hole on the same oxygen site, a new term, coming from their product, will appear in the residue, containing two holes on that site. This will necessitate a new, higher-order excitation to be subsequently included in $S$, whose contributions will cancel the product of the two operators. This is because the very high value of $U_p$ forbids double hole occupancies on the oxygen sites. The need to account for more operators requires more iterations. During these iterations the $\left| \hat \Phi_0 \right>$ symmetry is unfavorable, and our procedure converges to higher eigenvalues. The CCM wavefunction corresponding to the true ground state becomes energetically favorable for a number of excitation operators of about $40$. A multireference ansatz could have been used to force a particular symmetry. This procedure would have been feasible for the small system that we have treated in this work, because we can know the ground-state symmetry from the exact solution. For larger systems, however, even if one could know a priori the correct symmetry, the number of determinants in the multireference state grows exponentially with the size of the system. Moreover, the Newton method used for solving the CCM equations does not guarantee that the lowest-energy solution will be found, because this method allows one to follow just one solution, which might not be the right one. The spectral method that we present in this work allows one instead to explore, by focusing on selected spectral regions, a larger set of solutions, which are observed as resonances. 
We show in figure 3 the spectrum for the first excited eigenvalue at $\epsilon_p=2eV$, $U_p=10^2eV$, considering a probe operator $D_{cf}$ which induces a crystal-field rotation in the $e_g$ space on one $Mn$ site: $$D_{cf}= d^\dag_{_{x^2-y^2,\sigma=1/2,0,0}} d_{_{3 z^2-r^2,\sigma=1/2,0,0}}$$ The SCCM equations reproduce well the exact diagonalisation results. The spectrum shows a peak at negative energy. This is the ground state, which was not accessible starting from the nominal reference state but is visible as a resonance in the SCCM equations. The SCCM residues used to expand the $R$ operator have been calculated fixing $\omega_r$ at zero, because crystal-field excitations are found at low energies. The SCCM spectrum has been calculated with $N^r=10^4$. Figure 4 shows the spectrum for the same initial state, but for a probe operator which transfers charge from an oxygen site to a neighbouring manganese: $$D_{ct}= d^\dag_{_{x^2-y^2,\sigma=+1/2,0,0}} p_{_{p_x,\sigma=+\frac{1}{2},1,0}}$$ The SCCM spectrum has been calculated considering two energy windows: one around the charge-transfer peak, using $\omega_r=3eV$, $N^r=8 \times 10^3 $, and another window around the ground state, using $\omega_r=-0.5eV$, $N^r= 10^4 $. Figure 5 shows the same spectrum calculated for a non-truncated Hilbert space, using $U_p=5eV$. Comparison to the exact calculation is not possible in this case, but we can see that the most important features are preserved, namely the charge-transfer peak, the ground-state peak, and the peak due to the overlap with the initial state at zero absorbed energy. The convergence of the spectrum in this case of low $U_p$ is easier, and the spectrum can be calculated with only one energy window, using $\omega_r=3eV$. 
The graph shows two curves, one calculated with $N^r= 10^3$, where the charge-transfer peak is already in place, and another done at a higher value of $N^r= 7 \times 10^3$, which is necessary to have a proper convergence on the ground-state peak at $\simeq -0.7eV$. The different behaviour of the two peaks can be seen as a consequence of the non-locality of the ground state derived from the $\left| \hat \Phi_0 \right>$ symmetry. The non-locality implies a larger set of terms entering the resolvent sum. Conclusions =========== We have applied the CCM equations to a strongly correlated lattice in a case of strong departure from the reference state. We have developed the spectral coupled cluster equations, by finding an approximation to the resolvent operator, which gives the spectral response for the class of probes that can be written as products of creation operators. We have applied the method to a $MnO_2$ plane model for a choice of parameters which makes the ground state particularly difficult to find with the CCM equations because of its peculiar symmetry, which corresponds to a non-nominal reference state. We have shown that this state can be spectrally observed using the SCCM equations by probing a CCM solution for the nominal reference state. In this case one observes a negative-energy solution which corresponds to the true ground state. We think that the CCM and SCCM equations have a strong potential, for strongly correlated lattices, not only for the study of the ground state but also for all those excitations that can be represented by a resolvent operator $R$ that can be written as a sum of localised terms. Acknowledgments =============== I dedicate this work to the memory of my father Paolo. 
I acknowledge Javier Fernandez Rodrigues, who helped me in setting up the exact diagonalisation of the $MnO_2$ plane model during a post-doctoral stay financed by the [*Gobierno del Principado de Asturias*]{} in the frame of the [*Plan de Ciencia, Tecnologia e Innovacion PCTI de Asturias 2006-2009*]{}. I thank Markus Holzmann for critically reading the paper. [0]{} Hermann G. Kümmel, A Biography of the Coupled Cluster Method, in Recent Progress in Many-Body Theories, Proceedings of the 11th International Conference, Manchester, UK, 9-13 July 2001 Rodney J. Bartlett and Monika Musial, Rev. Mod. Phys. 79, 291-352 (2007) R. F. Bishop and P. H. Y. Li, Phys. Rev. A 83, 042111 (2011) F. Petit and M. Roger, Phys. Rev. B 49, 3453-3456 (1994) Henrik Koch and Poul Jørgensen, J. Chem. Phys. 93, 3333 (1990) Crawford, T. D. and Ruud, K. (2011), Coupled-Cluster Calculations of Vibrational Raman Optical Activity Spectra, ChemPhysChem, 12: 3442-3448 N. Govind et al., Chemical Physics Letters, Volume 470, Issues 4-6, 5 March 2009, Pages 353-357 Sourav Pal, M. Durga Prasad and Debashis Mukherjee, Theoretica Chimica Acta (1985) 68: 125-138 A. Mirone, S. S. Dhesi, and G. van der Laan, Eur. Phys. J. B 53, 23 (2006). J. Herrero-Martin, A. Mirone, J. Fernandez-Rodriguez et al., Phys. Rev. B 82, 075112 (2010) Alessandro Mirone, Hilbert++ Manual, http://arxiv.org/abs/0706.4170
--- abstract: 'In this note, we revisit the *relaxation and rounding* technique employed several times in algorithmic mechanism design. We try to introduce a general framework which covers the most significant algorithms in mechanism design that use the relaxation and rounding technique. We believe that this framework is not only a generalization of the existing algorithms but also can be leveraged for further results in algorithmic mechanism design. Before presenting the framework, we briefly define algorithmic mechanism design, its connections to game theory and computer science, and the challenges in the field.[^1]' author: - Salman Fadaei bibliography: - 'literature.bib' title: A Note on Relaxation and Rounding in Algorithmic Mechanism Design --- Introduction ============ Relaxation and Rounding ======================= [^1]: This work was done while the author was a graduate student in the Department of Informatics, TU München, Munich, Germany.
--- author: - Eric Mamajek title: 'Kinematics of the Interstellar Vagabond 1I/$\!$‘Oumuamua (A/2017 U1)' --- The discovery of an asteroid of likely interstellar origin was recently made by the Pan-STARRS survey – A/2017 U1 = 1I/[$\!$‘Oumuamua]{}[^1]. [*Can [$\!$‘Oumuamua]{}’s velocity before it entered the solar system provide any clues to its origin?*]{} The best available orbit from the JPL Small-Body Database Browser[^2] (solution JPL-13 produced by Davide Farnocchia) lists perihelion distance $q$ = 0.255287 $\pm$ 0.000079 au, eccentricity $e$ = 1.19936 $\pm$ 0.00021 and semi-major axis $a$ = -1.28052 $\pm$ 0.00096 au. This value of $a$ is consistent with an initial velocity before encountering the solar system of $v_{\circ}$ = 26.3209$\pm$0.0099 kms$^{-1}$, assuming no non-gravitational forces. The ephemeris shows that the object entered the solar system from the direction $\alpha_{ICRS}$, $\delta_{ICRS}$ = 279$^{\circ}$.804, +33$^{\circ}$.997 ($\pm$0$^{\circ}$.032, $\pm$0$^{\circ}$.015; 1$\sigma$). This divergent point and $v_{\circ}$ value translates to a heliocentric Galactic velocity [@Perryman98 $U$ towards Galactic center] of $U, V, W$ = -11.457, -22.395, -7.746 kms$^{-1}$ ($\pm$0.009, $\pm$0.009, $\pm$0.011 kms$^{-1}$).\ [*Could [$\!$‘Oumuamua]{} be a member of the Oort Cloud of the $\alpha$ Centauri system?*]{} Such a scenario might not be unexpected as the tidal radius for the 2.17 $M^{N}_{\odot}$ triple system [@Kervella17] is of order $r_t$ $\simeq$ 1.7 pc [@Mamajek13]. As the system lies only 1.34 pc away, the solar system may be on the outskirts of $\alpha$ Cen’s cometary cloud [see @Hills81; @Beech11]. @Kervella17 calculated updated heliocentric Galactic velocities for $\alpha$ Cen AB of $U, V, W$ = -29.291, 1.710, 13.589 ($\pm$0.026, $\pm$0.020, $\pm$0.013) kms$^{-1}$ and for Proxima Centauri ($\alpha$ Cen C) of $U, V, W$ = -29.390, 1.883, 13.777 ($\pm$0.027, $\pm$0.018, $\pm$0.009) kms$^{-1}$. 
The velocity difference of 36.80$\pm$0.04 kms$^{-1}$ between [$\!$‘Oumuamua]{} and the $\alpha$ Cen system, and the fact they were further apart in the past ($\Delta$ $\simeq$ 5 pc 100 kyr ago), [*argues that it has no relation to $\alpha$ Cen*]{}. Members of $\alpha$ Cen’s cometary cloud would appear to have motions diverging from the vicinity of $\alpha$, $\delta$ = 293$^{\circ}$, -42$^{\circ}$ with $v_{\circ}$ $\simeq$ 32 kms$^{-1}$.\ The Galactic velocity of $\!$‘Oumuamua is plotted against those of the nearest stars (parallax $>$ 300 mas) in Fig. \[fig1\]. Besides the velocity of $\alpha$ Cen AB and C from @Kervella17, velocities for the nearest stars are drawn from @Anderson12 and @Hawley97. The velocity for the substellar binary Luhman 16 is calculated using data from @Garcia17 and @Kniazev13: $U, V, W$ = -18.3, -27.5, -6.9 kms$^{-1}$. [$\!$‘Oumuamua]{}’s velocity is more than 20 kms$^{-1}$ from any of the stars, and 9 kms$^{-1}$ off from Luhman 16, so [*[$\!$‘Oumuamua]{} does not appear to be comoving with any of these nearest systems*]{}.\ [*What velocities might be expected of interstellar field objects?*]{} We might first suspect that interstellar planetesimals share the velocity distribution of nearby stars. The XHIP catalog [@Anderson12] contains velocities for 1481 stars within 25 pc with distances of $<$10% accuracy. The XHIP sample has median velocity $U, V, W$ = -10.5, -18.0, -8.4 kms$^{-1}$ ($\pm$33, $\pm$24, $\pm$17 kms$^{-1}$; 1$\sigma$ range), similar to that for volume-limited samples of nearby M dwarfs [$U, V, W$ = -9.7, -22.4, -8.9 kms$^{-1}$ ; $\pm$37.9, $\pm$26.1, $\pm$20.5; 1$\sigma$; @Reid02]. @BlandHawthorn16 provides a recent consensus estimate for the Local Standard of Rest (LSR) of $U, V, W$ = -10.0, -11.0, -7.0 kms$^{-1}$ ; $\pm$1, $\pm$2, $\pm$0.5; 1$\sigma$). 
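The conversion from the inferred radiant and $v_{\circ}$ to heliocentric Galactic velocities, and the comparison with $\alpha$ Cen AB, can be checked with a short script. The rotation matrix below is the standard ICRS-to-Galactic matrix (a conventional set of constants, not taken from this note), and the $\alpha$ Cen AB velocity is the @Kervella17 value quoted above:

```python
import math

# Standard rotation from ICRS equatorial to Galactic coordinates
# (rows give unit vectors toward the Galactic centre, rotation, and pole).
A_G = [(-0.0548755604, -0.8734370902, -0.4838350155),
       (+0.4941094279, -0.4448296300, +0.7469822445),
       (-0.8676661490, -0.1980763734, +0.4559837762)]

def radiant_to_uvw(ra_deg, dec_deg, v0):
    """Velocity (U, V, W) of a body arriving FROM direction (ra, dec)
    with speed v0; U points toward the Galactic centre."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    r = (math.cos(dec) * math.cos(ra),
         math.cos(dec) * math.sin(ra),
         math.sin(dec))
    v = [-v0 * c for c in r]  # the body moves away from its radiant
    return tuple(sum(a * b for a, b in zip(row, v)) for row in A_G)

U, V, W = radiant_to_uvw(279.804, 33.997, 26.3209)
# Recovers (U, V, W) close to (-11.457, -22.395, -7.746) km/s.

# Velocity difference with respect to alpha Cen AB (Kervella et al. 2017):
dv = math.sqrt((U + 29.291) ** 2 + (V - 1.710) ** 2 + (W - 13.589) ** 2)
# dv is ~36.8 km/s, as quoted above.
```

The same function applied to any future interstellar object gives its heliocentric Galactic velocity directly from the discovery astrometry.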
An object with the median velocity of the local XHIP sample would have speed 22.5 kms$^{-1}$ coming from $\alpha$, $\delta$ = 273$^{\circ}$, +33$^{\circ}$, within only $\sim$6$^{\circ}$ of [$\!$‘Oumuamua]{}’s divergent point. The velocity is very close to the median for the XHIP sample ($\Delta v$ $\simeq$ 4.5 kms$^{-1}$; $\chi^2$/$\nu$ = 0.036/3; P = 0.0018), the mean for the local M dwarfs ($\Delta v$ $\simeq$ 2.1 kms$^{-1}$; $\chi^2$/$\nu$ = 0.0053/3; P = 0.0001) and the LSR ($\Delta v$ $\simeq$ 11.5$\pm$2.3 kms$^{-1}$), compared to the typical 3D velocity of nearby stars. Compared to the LSR, [$\!$‘Oumuamua]{} has negligible radial and vertical Galactic motion[^3], and its sub-Keplerian circular velocity trails by 11 kms$^{-1}$.\ Robotic reconnaissance of [$\!$‘Oumuamua]{}, or future interstellar planetesimals passing through the solar system, might constitute logical precursors to interstellar missions, and provide the opportunity to conduct chemical studies and radiometric dating of extrasolar material which formed around other stars.\ ![Galactic velocities for 1I/[$\!$‘Oumuamua]{} (filled triangle), nearby stars (open circles), and LSR (cross)\[fig1\].](f1.pdf) This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. EEM acknowledges support from the NASA NExSS program, and thanks Davide Farnocchia (JPL) for discussions. This work used the JPL Small-Body Database Browser, HORIZONS system, and Vizier.\ Anderson, E., & Francis, C. 2012, Astronomy Letters, 38, 331 Beech, M. 2011, The Observatory, 131, 212 Bland-Hawthorn, J., & Gerhard, O. 2016, , 54, 529 Gaidos, E., Williams, J. P., & Kraus, A. 2017, arXiv:1711.01300 Garcia, E. V., Ammons, S. M., Salama, M., et al. 2017, , 846, 97 Hawley, S. L., Gizis, J. E., & Reid, N. I. 1997, , 113, 1458 Hills, J. G. 1981, , 86, 1730 Kervella, P., Th[é]{}venin, F., & Lovis, C. 2017, , 598, L7 Kniazev, A. 
Y., Vaisanen, P., Mu[ž]{}i[ć]{}, K., et al. 2013, , 770, 124 Mamajek, E. E., Bartlett, J. L., Seifahrt, A., et al. 2013, , 146, 154 Perryman, M. A. C., Brown, A. G. A., Lebreton, Y., et al. 1998, , 331, 81 Reid, I. N., Gizis, J. E., & Hawley, S. L. 2002, , 124, 2721 [^1]: See: http://www.minorplanetcenter.net/mpec/K17/K17UI1.html, https://www.minorplanetcenter.net/mpec/K17/K17V17.html. [^2]: https://ssd.jpl.nasa.gov/sbdb.cgi?sstr=A%2F2017%20U1 [^3]: @Gaidos17 have proposed that the object’s velocity is due to birth in a nearby $\sim$40 Myr stellar association.
--- abstract: 'It was argued in the past that bulges of galaxies cannot be formed through collisionless secular evolution because that would violate constraints on the phase-space density: the phase-space density in bulges is several times larger than in the inner parts of discs. We show that these arguments against secular evolution are not correct. Observations give estimates of the coarsely grained phase-space densities of galaxies, $F=\rho_s/\sigma_R\sigma_{\phi}\sigma_z$, where $\rho_s$ is the stellar density and $\sigma_R, \sigma_{\phi}, \sigma_z$ are the radial, tangential, and vertical rms velocities of stars. Using high-resolution N-body simulations, we study the evolution of $F$ in stellar discs of Galaxy-size models. During the secular evolution, the discs, which are embedded in live Cold Dark Matter haloes, form a bar and then a thick, dynamically hot, central mass concentration. In the course of evolution $F$ declines at all radii. However, the decline is different in different parts of the disc. In the inner disc, $F(R)$ develops a valley with a minimum around the end of the central mass concentration. The final result is that the values of $F$ in the central regions are significantly larger than those in the inner disc. The minimum, which gets deeper with time, seems to be due to a large phase mixing produced by the outer bar. We find that the shape and the amplitude of $F(R)$ for different simulations agree qualitatively with the observed $F(R)$ in our Galaxy. Curiously enough, the fact that the coarsely grained phase-space density of the bulge is significantly larger than the one of the inner disc turns out to be an argument in favor of secular formation of bulges, not against it.' author: - | V. Avila-Reese$^{1}$, A. Carrillo$^{1}$, O. Valenzuela$^{2}$ and A. Klypin$^{3}$\
70-264, 04510 México, D.F.\ $^{2}$Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195\ $^{3}$Astronomy Department, New Mexico State University, Las Cruces, NM 88001 title: 'Secular evolution of galactic discs: constraints on phase-space density' --- \[firstpage\] Galaxy: evolution – Galaxy: structure – galaxies: kinematics and dynamics – galaxies: evolution. Introduction ============ Formation of galactic spheroids remains a major unsolved problem in astronomy. This is an important problem, especially if one takes into account that at least half of the stars in the local Universe are in spheroids: either bulges or ellipticals (e.g., Fukugita, Hogan & Peebles 1998; Bell et al. 2003). The key question is how and where these stars formed. One possibility is that stars in present-day spheroids were formed in a self-regulated quiescent fashion characteristic of galactic discs, and then the disc stars were dynamically heated by mergers and/or secular disc processes. In this case the spheroid formation is predominantly collisionless. Another possibility is that spheroid star formation (SF) was highly dissipative and proceeded in a violent, possibly bursting and dust-enshrouded mode during a dissipative disc merging event or during a phase of fast gas (monolithic) collapse. Although both possibilities certainly happen in the real Universe, it is important to evaluate the feasibility of each one as well as the physical/evolutionary context in which one or the other possibility dominates. In the former case the SF rate is traced by UV/optical emission, while in the latter by FIR/submillimetre emission. Thus, understanding the mechanisms of spheroid formation and the regimes of formation of their stars is of crucial relevance for interpreting and modeling the contribution of present-day stars in spheroids to the cosmic SF rate history. 
Secular bulge formation mechanism --------------------------------- In this paper we will study some aspects of the disc secular evolution. According to the secular scenario, the formation of a central mass concentration (a bar or a pseudobulge) happens in a predominantly dissipationless fashion in the course of the development of gravitational instabilities in the central region of a galactic stellar disc. The evolution of the bar can give rise to a central component that is denser and thicker than the initial thin stellar disc (Kormendy 1979, 1982). In earlier simulations the bar in most cases was dissolving, leaving behind a pseudobulge (e.g., Combes & Sanders 1981; Pfenniger & Norman 1990; Combes et al. 1990; Raha et al. 1991; Norman, Sellwood & Hasan 1996). However, more recent simulations, which have many more particles and a more realistic setup, do not typically produce decaying bars (Debattista & Sellwood 2000; Athanassoula & Misiriotis 2002; O’Neill & Dubinski 2003; Valenzuela & Klypin 2003, hereafter VK03; Shen & Sellwood 2003; Debattista et al. 2004). In those simulations bars typically grow slightly over billions of years. In the VK03 simulations of discs inside live Cold Dark Matter (CDM) haloes, the redistribution of the angular momentum of the stellar disc is driven by the evolving bar and by interactions with the dark matter halo. This evolution produces a dense central mass concentration with a nearly exponential profile, which resembles the surface brightness profiles of late-type galaxy bulges (see also Athanassoula & Misiriotis 2002; O’Neill & Dubinski 2003). Shen & Sellwood (2003) and Debattista et al. (2004) argued that neither a small central mass concentration (e.g. a black hole) nor the buckling instability is efficient enough to destroy a bar. Whether the bar is destroyed or not, the heating of the central parts of the stellar disc and the accumulation of mass at the centre are common features in all models of secular evolution. 
Further exploration, including a wide range of realistic initial conditions and the inclusion of processes such as gas infall (e.g., Bournaud & Combes 2002), minor mergers and satellites (Aguerri et al. 2001), hydrodynamics, SF and feedback, is certainly necessary. All these processes are likely to play some role in the evolution of galaxies. Here we intentionally do not include these complex processes in order to isolate the effects of the secular evolution: we include only stellar and dark matter components and do not consider any external effects. Models of secular disc evolution gradually find their place in the theory of galaxy formation. Encouraging results were obtained when a prescription for secular bulge formation was incorporated in CDM semi-analytical models of disc galaxy formation and evolution (Avila-Reese & Firmani 1999, 2000; see also van den Bosch 1998). These models successfully reproduce the observed correlations of the bulge-to-disc ratio with other global properties for late-type galaxies. Secular disc evolution should be considered as a complementary path of spheroid formation rather than a competing alternative to the dissipative merging mechanism. From the observational side, increasing evidence shows that the structural, kinematic, and chemical properties of the bulges of late-type galaxies are tightly related to the properties of the inner discs (for reviews and references see Wyse, Gilmore & Franx 1997; MacArthur, Courteau & Holtzman 2003; Carollo 2004; Kormendy & Kennicutt 2004). Besides, bars – a signature of secular evolution – are observed in a large fraction of spiral galaxies. These pieces of evidence strongly favor the secular evolution scenario. Do phase-space constraints pose a difficulty for the secular mechanism? ----------------------------------------------------------------------- According to Liouville’s theorem, in a collisionless system the phase-space density $f({\bf x,v})$ is preserved along trajectories of individual stars. 
Thus, one expects that a collisionless system “remembers” its initial distribution of $f({\bf x,v})$, and this can be used to test the secular evolution scenario. In fact, what is “observed” is not $f({\bf x,v})$, but a rough estimate of the coarsely grained phase-space density $$\fp = \frac{\rho_s}{\sigma_R\sigma_{\phi}\sigma_z},$$ where $\rho_s$ is the stellar density and $\sigma_R,\sigma_{\phi},\sigma_z$ are the radial, tangential, and vertical rms velocities of stars. The coarse-grained phase-space density is not preserved. Still, there are significant constraints on the evolution of $\fp$, which are imposed by the mixing theorem (Tremaine, Hénon & Lynden-Bell 1986). The process that changes $\fp$ is mixing. Bringing and mixing together two patches of stars with different fine-grained phase-space densities results in a $\fp$ which is lower than the maximum phase-space density of the two patches. In other words, mixing reduces $\fp$. The only way to increase $\fp$ is to bring in stars with initially large $\fp$. Indeed, when a bar forms, there is a substantial radial infall of mass to the central region. Yet, this does not help much because the bar is formed from the central region of the disc, where the initial $\fp$ is low. Additional mixing produced by the bar seems to make things even worse by further lowering the already low $\fp$. Simple estimates for elliptical and spiral galaxies indicate that the coarse-grained phase-space densities in spirals are lower than in ellipticals (Carlberg 1986), making it difficult to produce elliptical galaxies by merging stellar discs (Hernquist, Spergel & Heyl 1993). Wyse (1998) discussed similar arguments but for the Galactic disc in the context of the secular bulge formation scenario. 
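To make the mixing argument concrete, here is a minimal numerical illustration (ours, not the authors'): fully mixing two phase-space patches with different fine-grained densities yields a mass-weighted coarse-grained density that always lies below the larger of the two.

```python
# Illustration of the mixing argument: two patches with fine-grained
# phase-space densities f_a, f_b occupying volumes vol_a, vol_b.
def coarse_grained_mix(f_a, vol_a, f_b, vol_b):
    """Coarse-grained density after the two patches are fully mixed
    over their combined phase-space volume (mass-weighted average)."""
    return (f_a * vol_a + f_b * vol_b) / (vol_a + vol_b)

f_mixed = coarse_grained_mix(f_a=4.0, vol_a=1.0, f_b=1.0, vol_b=3.0)
assert min(1.0, 4.0) < f_mixed < max(1.0, 4.0)  # mixing can only lower the maximum
print(f_mixed)  # 1.75
```

The mixed value necessarily sits between the two input densities, so repeated mixing can only erode the high-$\fp$ end of the distribution, which is the content of the theorem invoked above.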
For an exponential disc with constant height $h_z$ and an isotropic velocity-dispersion tensor one finds that $\fp(R)\propto\rho_s/\sigma_z^3\propto (\Sigma_s/2h_z)/(\Sigma_s h_z)^{3/2}\propto \exp(R/2h_d)$, where $\Sigma_s$ and $h_d$ are the disc surface density and scale length, respectively. Therefore, $\fp(R)$ is lower toward the centre. Wyse (1998) then states: “one should not find a higher phase-space density in stellar progeny, formed by a collisionless process, than in its stellar progenitor”. Observational inferences for our Galaxy actually show that $\fp$ is higher in the bulge than in the inner disc. This discrepancy led Wyse (1998) to conclude that the secular scenario has a serious difficulty, unless dissipative physics is included. The problem is partially mitigated if one considers that the Toomre parameter $Q$ is constant along the initial disc. For $Q={\rm const}$, the rms velocity is $\sigma_R \propto \exp(-R/h_d)/\kappa(R)$, where the epicycle frequency $\kappa$ increases as $R$ decreases. In this case $\fp(R)$ has a $U$-shaped profile with a maximum at the centre (Lake 1989) and a valley in the inner regions. How steep or shallow this valley is depends on the inner behavior of $\kappa(R)$. The situation is actually more complicated than the simple picture outlined above because the spatial and dynamical properties of the system evolve. Formation of bars is a complex process that affects a large fraction of the disc – not just the central region. Thus, to study the overall evolution of $\fp(R)$ one needs to turn to numerical simulations. The main question we address in this paper is how the macroscopic (observational) phase-space density profile, $\fp(R)$, of stellar discs inside CDM haloes evolves during the formation (and potential dissolution) of a bar, and whether the shape of this profile agrees with estimates from observed disc/bar/bulge galaxy systems. 
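The two initial set-ups just discussed can be compared with a short numerical sketch. This is our own illustration; the flat rotation curve, the particular scale lengths, and full isotropy are simplifying assumptions, not the paper's actual initial conditions.

```python
import numpy as np

# Sketch of the initial f'(R) for an exponential disc with constant
# scale height h_z and a flat rotation curve (kappa ~ 1/R), comparing
# the Q = const set-up with the sigma_R^2 ~ exp(-R/h_d) set-up.
h_d, h_z = 3.0, 0.2                 # kpc, illustrative values
R = np.linspace(0.2, 12.0, 200)
Sigma = np.exp(-R / h_d)            # surface density (arbitrary units)
kappa = 1.0 / R                     # epicycle frequency for a flat curve
rho = Sigma / (2.0 * h_z)           # mid-plane density

sigma_Q = Sigma / kappa             # Q = const  ->  sigma_R ~ Sigma / kappa
f_Q = rho / sigma_Q**3              # U-shaped: high at the centre and far out

sigma_E = np.exp(-R / (2.0 * h_d))  # sigma_R^2 ~ exp(-R/h_d)
f_E = rho / sigma_E**3              # ~ exp(R/2h_d): lower toward the centre

i_min = int(np.argmin(f_Q))
print(0 < i_min < len(R) - 1)       # True: interior minimum (the "valley")
print(bool(f_E[0] < f_E[-1]))       # True: f' rises monotonically outward
```

With these toy inputs the $Q={\rm const}$ profile indeed shows a central maximum followed by an interior minimum, while the exponential-dispersion profile decreases toward the centre, mirroring the two cases described in the text.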
We analyze state-of-the-art high-resolution N-body simulations of Galaxy-like discs embedded in live CDM haloes. The secular evolution of the disc in these simulations yields bars that redistribute particles and produce a dynamically hot central mass concentration. In §2 we present a brief description of the simulations and the procedure to estimate the phase-space density. The results are given in §3, and in §4 we discuss some aspects of the simulations. In §5 a comparison with observational estimates is presented. Our summarizing conclusions are given in §6. MODELS AND SIMULATIONS ====================== We study four N-body simulations of the evolution of bars in stellar discs embedded in live CDM haloes. Two simulations, $A_1$ and $C$, are taken from VK03, and the other two, $D_{\rm hs}$ and $D_{\rm cs}$, are from Klypin et al. (2005). The models were chosen to cover some range of initial conditions and parameters, with the aim of testing the sensitivity of the results to them. For example, the initial Toomre parameter $Q$ is constant along the disc for models $C$, $D_{\rm hs}$, and $D_{\rm cs}$. The $Q$ parameter is variable for model $A_1$ (it increases in the central regions). Instead, this model is initially set to have a radial velocity dispersion $\sigma_R^2(R)\propto \exp(-R/h_d)$. As a result, all the models have different initial profiles of the azimuthally averaged coarsely grained phase-space density. The parameters of the models and details of the simulations are presented in Table 1. 
  Parameter                           $C$    $A_1$     $D_{\rm hs}$   $D_{\rm cs}$
  ----------------------------------- ------ --------- -------------- --------------
  Disc mass ($10^{10}\msun$)          4.8    4.3       4.35           5.0
  Total mass ($10^{12}\msun$)         1.0    2.0       1.22           1.4
  Initial disc scale-length (kpc)     2.9    3.5       2.25           2.57
  Initial Toomre parameter [*Q*]{}    1.2    $<1.2>$   1.8            1.3
  Initial disc scale-height (kpc)     0.14   0.25      0.17           0.20
  Halo concentration $c_{NFW}$        19.0   15        18             17
  Number of disc particles ($10^5$)   12.9   2.0       4.6            2.3
  Number of halo particles ($10^6$)   8.48   3.3       3.3            2.2
  Particle mass ($10^{5}\msun$)       0.37   2.14      0.93           2.14
  Formal force resolution (pc)        100    22        19             22

  : Parameters of models

The particle mass refers to the mass of the “stellar” disc particles. This is also the mass of the least massive halo particles. The initial conditions are generated using the method introduced by Hernquist (1993). The galaxy models initially have exponential discs in equilibrium inside a dark matter halo with a density profile consistent with CDM cosmological simulations (Navarro, Frenk & White 1997). The halo concentrations are set somewhat larger than the concentration $c_{NFW}\approx 12$ expected for a halo without baryons that should host our Galaxy (Klypin et al. 2002). This is done to mimic the adiabatic compression of the dark matter produced by baryons sinking to the centre of the halo in the process of formation of the galaxy. The haloes are sampled with particles of different masses: the particle mass increases with distance. The lightest dark matter particles have the same mass as the disc particles. At any time there are very few large particles in the central 20-30 kpc region. The time steps of the simulations were forced to be short. For example, for model $C$ the minimum time step is $1.2\times 10^{5}$ yr, while for model $D_{\rm hs}$ it is $1.5\times 10^{4}$ yr. 
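As a quick cross-check of Table 1 (ours, not the authors'), the quoted particle masses agree with the disc mass divided by the number of disc particles to within about 2 per cent:

```python
# Consistency check of Table 1: particle mass = disc mass / N_disc.
models = {       # disc mass (1e10 Msun), N_disc (1e5), particle mass (1e5 Msun)
    "C":    (4.8, 12.9, 0.37),
    "A1":   (4.3,  2.0, 2.14),
    "D_hs": (4.35, 4.6, 0.93),
    "D_cs": (5.0,  2.3, 2.14),
}
for name, (m_disc, n_disc, m_p) in models.items():
    derived = m_disc * 1e10 / (n_disc * 1e5) / 1e5   # in units of 1e5 Msun
    assert abs(derived - m_p) / m_p < 0.02, name     # agreement to ~2 per cent
print("Table 1 particle masses are self-consistent")
```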
For reference, in model $C$ the typical time a star needs to travel the vertical extent of the disc at a radius of 8 kpc corresponds to 372 steps, and the orbital period in the disc plane at the same distance takes 1900 steps. Simulations $C$, $A_1$, $D_{\rm cs}$, and $D_{\rm hs}$ were followed for $\sim 4.4$, 4, 4, and 7 Gyr, respectively, using the Adaptive Refinement Tree (ART) code (Kravtsov et al. 1997). In models $C$ and $A_1$ the bar is strong even at the end of the simulations, without any indication that it is going to die. In model $D_{\rm hs}$ the bar is gradually getting weaker, but it is still clearly visible. The bars typically buckle at some stage, producing a thick and dynamically hot central mass concentration with a peanut shape. Model $D_{\rm cs}$ is the only model where the bar dissolved completely and produced a (pseudo)bulge. The galaxy models used in the simulations are scaled to roughly mimic the Galaxy. For example, they have realistic disc scale lengths ($\approx 3$ kpc) and scale heights ($\approx 200-300$ pc), and they have nearly flat rotation curves with $V_c\approx 220$ km/s. Yet, we do not make an effort to reproduce the detailed structure of our Galaxy. For example, the radius of the bar in model $C$ is $5-5.5$ kpc – too large compared with the real bar, which has a radius of $3-3.5$ kpc. Model $D_{\rm hs}$ has a shorter bar ($3$ kpc) and is a better model in that respect[^1], and we describe it in more detail to show the reliability of our approach. With the exception of the disc mass (which is somewhat small), model $D_{\rm hs}$ makes a reasonable match for our Galaxy. Its stellar surface density at the “solar” distance of 8 kpc is $\Sigma_s=54\msun {\rm pc}^{-2}$. For comparison, for our Galaxy Kuijken & Gilmore (1989) find $\Sigma_s=48\pm 8\msun {\rm pc}^{-2}$, while Siebert et al. (2003) find $\Sigma_s=67\msun {\rm pc}^{-2}$. Stellar rms velocities in the radial and vertical directions in the model are 47 and 17 km/s. 
Dehnen & Binney (1998) give 40 and 20 km/s, respectively, for the old thin-disc stellar population of our Galaxy. Within the solar radius the model has a ratio of dark matter to total mass of $M_{\rm DM}/M_{\rm tot}=0.6$. This ratio is significantly lower inside the bar radius of 3 kpc: $M_{\rm DM}/M_{\rm tot}=0.35$. The bar pattern speed is $\Omega_p=54\,{\rm Gyr}^{-1}$. Bissantz et al. (2003) give $\Omega_p=60\pm 5\,{\rm Gyr}^{-1}$ for our Galaxy, although their estimate is also based on a model. In order to make a more detailed comparison of the mass distribution in model $D_{\rm hs}$ with that of our Galaxy, we mimic the position-velocity (P-V) diagram for neutral hydrogen and CO in the plane of our Galaxy. Observations of Doppler-shifted 21-cm and CO emission along lines of sight at different galactic longitudes $l$ provide the P-V diagram. Because the gas is cold, it provides a good probe of the mass profile. There are two especially interesting features in the P-V diagram. The envelopes of the diagram in the first quadrant ($0<l<90^\circ$, $V>0$) and in the third quadrant ($0>l>-90^\circ$, $V<0$) are the terminal velocities (Knapp et al. 1985; Kerr et al. 1986). Data in the second and fourth quadrants come from regions outside the solar radius (Blitz & Spergel 1991). They carry information about the motions in the outer part of our Galaxy. For distances larger than the radius of the bar, 3-4 kpc ($|l|\gtrsim 30^\circ$), the terminal velocities can be converted into the circular velocity curve (assuming a distance to the Galactic centre). At smaller distances, perturbations produced by the Galactic bar are large and cannot be taken into account in a model-independent way. This is why we use the P-V diagram, not the rotation curve. We mimic the P-V diagram for the $D_{\rm hs}$ model by selecting “stellar” particles that are close to the plane ($|z|< 300$ pc) and have small velocities relative to their local environment. 
A particle velocity must deviate by no more than 20 km/s from the velocity of its background, defined by the nearest 50-70 particles. This gives an rms line-of-sight velocity of 8 km/s, which is compatible with the random velocities of the cold gas. We place an “observer” in the plane of the disc at a distance of 8 kpc. The position was chosen so that the bar major axis is 20 degrees away from the line joining the “observer” and the galactic centre. The observer has the same velocity as the local flow of “stars” at that distance. We then measure the line-of-sight velocity of each cold particle and plot the particles in longitude-velocity coordinates. With the procedure just described we select a population of “stellar” particles which has a small asymmetric drift and yet has the bulk flows induced by the bar. The procedure is insensitive to the particular set of parameters as long as the rms velocities stay significantly smaller than the rotational velocity of 220 km/s and as long as the radius for the background particles is significantly smaller than the distance to the centre. Figure \[fig:PV\] shows the P-V diagram for the cold “stellar” particles in model $D_{\rm hs}$. The envelope of the diagram very closely follows that of our Galaxy, indicating that the mass distribution in the inner part of the model is compatible with the data on our Galaxy. Small deviations in the outer part of the galaxy are due to the lopsidedness of our Galaxy (Blitz & Spergel 1991), which our model cannot reproduce. We assumed that cold “stellar” particles resemble neutral and molecular gas in the P-V diagram. To test this assumption we use a simulation presented elsewhere (Valenzuela et al. 2005, in preparation), which includes not only collisionless particles (“stellar” and dark matter), but also gas. This simulation has been run with the Gasoline N-body+SPH code (Wadsley, Stadel & Quinn 2004). The galaxy model is similar to the one discussed above. 
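The longitude-velocity projection described above can be sketched as follows. This is our own illustration with assumed geometric conventions (observer on the $x$ axis, longitude sign convention arbitrary); the cold-particle selection against the 50-70 nearest neighbours is omitted, and this is not the authors' code.

```python
import numpy as np

def pv_diagram(pos, vel, R0=8.0, v_circ=220.0):
    """Project in-plane particles into longitude / line-of-sight velocity.

    pos, vel: (N, 2) galactocentric positions (kpc) and velocities (km/s);
    the observer sits at (R0, 0) and moves with the local circular flow.
    """
    obs_pos = np.array([R0, 0.0])
    obs_vel = np.array([0.0, v_circ])
    d = pos - obs_pos                          # observer -> particle vectors
    dist = np.linalg.norm(d, axis=1)
    los = d / dist[:, None]                    # unit line-of-sight vectors
    v_los = np.sum((vel - obs_vel) * los, axis=1)
    # longitude measured from the observer-centre direction, wrapped to [-180, 180)
    l = np.degrees(np.arctan2(d[:, 1], d[:, 0])) - 180.0
    l = (l + 180.0) % 360.0 - 180.0
    return l, v_los

# sanity check: a co-rotating particle between observer and centre sits at
# l = 0 with zero line-of-sight velocity
l, v = pv_diagram(np.array([[4.0, 0.0]]), np.array([[0.0, 220.0]]))
print(round(float(l[0]), 6), round(float(v[0]), 6))  # 0.0 0.0
```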
The simulation had $5\times 10^5$ dark matter particles, $2\times 10^5$ disc (“stellar”) particles, and $5\times 10^4$ gas particles. The force resolution was 200 pc for gas and “stars”, and 600 pc for dark matter. The simulation was run for 1.6 Gyr. The simulated “galaxy” develops a strong bar with a radius of $\approx 3~$kpc. We selected an “observer” at a distance of 8 kpc from the centre, at an angle of 20$^\circ$ relative to the major axis of the bar. Figure \[fig:PVtest\] shows P-V diagrams for the different components. The P-V diagram for the cold gas ($T<10^4\,$K) shows remarkable complexity: there are lumps and filaments. Those are due to spiral arms and shock waves. The cold “stellar” particles do not show those details, yet they follow remarkably well the same envelopes in the P-V diagram as the cold gas. This is exactly what we want to demonstrate. The whole stellar population (the top panel) shows large velocities in the central region – well in excess of the cold gas motions. In this case, significant (20-30 percent) corrections are indeed required to account for the asymmetric drift. Measuring the phase-space density --------------------------------- The phase-space density is defined as the number of stars in a region of phase space around a point $({\bf x},{\bf v})$ divided by the volume in phase space of that region [*as*]{} this volume tends to zero (e.g., Binney & Tremaine 1987; Wyse 1998). The measurable quantity is the coarsely grained phase-space density, which is defined in finite phase-space volumes. However, this quantity is still difficult to infer observationally. The measure of the phase-space density commonly derived from observations is the stellar spatial density, $\rho_s$, divided by the cube of the stellar velocity dispersion (eq. 1). 
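The ring-based estimator of eq. (1), whose binning is described in the next paragraph, can be sketched in a few lines of numpy. The array conventions below are our assumptions, not the paper's code:

```python
import numpy as np

def fprime_profile(pos, vel, masses, r_edges, dz):
    """Coarse-grained f' = rho_s / (sigma_R sigma_phi sigma_z) in
    cylindrical rings of half-thickness dz about the disc plane.
    pos, vel: (N, 3) arrays; masses: (N,)."""
    R = np.hypot(pos[:, 0], pos[:, 1])
    phi = np.arctan2(pos[:, 1], pos[:, 0])
    # cylindrical velocity components
    v_R = vel[:, 0] * np.cos(phi) + vel[:, 1] * np.sin(phi)
    v_phi = -vel[:, 0] * np.sin(phi) + vel[:, 1] * np.cos(phi)
    v_z = vel[:, 2]
    fp = []
    for r0, r1 in zip(r_edges[:-1], r_edges[1:]):
        sel = (R >= r0) & (R < r1) & (np.abs(pos[:, 2]) < dz)
        if sel.sum() < 2:
            fp.append(np.nan)
            continue
        volume = np.pi * (r1**2 - r0**2) * 2.0 * dz   # ring volume
        rho = masses[sel].sum() / volume
        sigmas = [np.std(comp[sel]) for comp in (v_R, v_phi, v_z)]
        fp.append(rho / (sigmas[0] * sigmas[1] * sigmas[2]))
    return np.array(fp)
```

For a ring population with isotropic Gaussian velocities of dispersion $\sigma$, the routine returns $\rho/\sigma^3$, as expected from eq. (1).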
For the latter, one typically uses either the projected line-of-sight velocity dispersion, $\sigma_p$, or the inferred radial velocity dispersion, or the product of the three components of the velocity dispersion, $\sigma_R, \sigma_{\phi},$ and $\sigma_z$, if they are known. We will refer to $\fp$ as the azimuthally averaged “observational” phase-space density defined above. Because our main aim is to analyze the coarsely grained phase-space density evolution in the N-body simulations in a way that mimics observations, we do the following. We calculate the average density of “stellar” particles, $\rho_s(R)$, and their velocity dispersions within cylindrical (equatorial) rings of width $\Delta R$ and thickness $\Delta Z$. Thus, $\fp$ is estimated in a representative region above and below the disc plane. Nonetheless, we have checked that qualitatively similar results are obtained when using other binnings or even other geometries, in particular a spherical one for the centre. The thickness $\Delta Z$ and width $\Delta R$ were set to $2h_z(R)$ and 200 pc, respectively. The results do not change significantly for a large range of assumed values of $\Delta Z$ and $\Delta R$. EVOLUTION OF THE COARSELY GRAINED PHASE-SPACE DENSITY ====================================================== Figure 3 shows, as an example, the evolution of the azimuthally averaged surface and volume densities for the simulation of model $C$. Dotted lines correspond to the beginning of the simulation, and the short-dashed and solid lines are for moments of time separated by approximately 2 Gyr. The effect of bar evolution on the disc surface density is significant: matter is accumulated at the centre while the slope of the outer disc becomes shallower than the initial slope, with the disc scale-length increasing by $\sim 30\%$ (VK03). The volume density shows similar behavior, but also includes the vertical expansion of the disc with time. The disc gets hotter and thicker as clearly demonstrated in Fig. 
4, which shows the evolution of the three components of the velocity dispersion, as well as $h_z(R)$, for the same model $C$. The disc heating is very large in the central $2-4$ kpc region where the bar forms. Heating also happens in the outer disc, but to a much lesser degree. Here the heating is due to spiral waves, which form in the initially unstable disc. The waves gradually decay and heat the disc, but most of the heating occurs in the plane of the disc: the radial and tangential rms velocities increase substantially more than the vertical rms velocity, which changes by 25-30 percent over 4 Gyr (see also VK03). Finally, the evolution of the “observational” radial phase-space density profile, $\fp(R)$, of model $C$ is shown in Fig. 5(a). Because the initial disc of this model has a constant $Q$, the initial $\fp(R)$ profile is $U$-shaped: the maximum at the centre is followed by a minimum in the inner disc. The steepness of the central maximum depends on the behavior of $\kappa(R)$ at small radii. For model $C$, the inner $\fp(R)$ profile is almost flat. Panels b, c, and d of Fig. 5 show the evolution of $\fp(R)$ for models $D_{\rm cs}$, $D_{\rm hs}$, and $A_1$, respectively. The initial inner $\fp$ profiles of the first two models are significantly steeper than for model $C$, while for model $A_1$, $\fp$ decreases toward the centre. In the latter case, $\sigma_R^2(R)\propto \exp(-R/h_d)$ was assumed initially instead of $Q={\rm const}$ (see §1.1). One clearly sees that the macroscopic (observational) azimuthally averaged phase-space density decreases with time along the whole disc in all the models. In the outer parts of the disc the bar and the spiral arms heat and thicken the disc. Here the surface density at each radius remains almost constant. The disc heating explains why $\fp(R)$ decreases at all radii. In the inner disc, the $\fp$ profile of all the models develops a valley whose depth increases with time. 
For example, for model $C$ the minimum of this valley is at $\sim 1.5-2$ kpc, which is close to the radius where the central mass concentration ends. Around this radius one observes the maximum radial mass exchange as well as the maximum vertical heating and disc thickening due to the bar. Here the outer region of the bar produces a large phase-space mixing – larger than in the centre. As a result, $\fp$ at some radius inside the central mass concentration is higher than $\fp$ measured in the inner disc. The largest changes in the $\fp(R)$ profile, as well as in other quantities (such as $h_z$ and the rms velocities), typically occur during the first $\sim 1$ Gyr of evolution. The evolution continues, but much more slowly at later times. Bulge-like structure formation and robustness of the results ============================================================ The bar-driven evolution produces a dense central concentration (or even a pseudobulge, as in the case of model $D_{\rm cs}$) with a slope steeper than the original one. Figure 3 clearly illustrates this point for model $C$. For almost all of our simulations the surface density in the inner $\sim 2$ kpc region is well approximated by a Sérsic profile (Sérsic 1968) with slope index $n<4$. For example, for model $C$, $n\approx 1$. This is similar to what is observed for bars and bulges of late-type galaxies (e.g., de Jong 1996; Graham 2001; MacArthur et al. 2003; Hunt et al. 2004). Comparison of the different models indicates remarkable similarities in the evolution and shape of the phase-space density profile $\fp(R)$. In all the models $\fp(R)$ has a maximum at the centre followed by a deep minimum at $\sim 1-2$ kpc, where the corresponding outer bars live (see Fig. 5). It seems that the results do not depend on the numerical resolution and are not particularly sensitive to initial conditions. For example, for model $A_1$ the resolution is lower than for model $C$, and $Q$ was not assumed constant initially. 
Still, the evolution and shape of the $\fp$ profile are similar to those of model $C$ (panels a and d in Fig. 5). Although resolution is an important factor in simulations aimed at exploring gravitational instabilities of [*thin*]{} discs embedded in large hot haloes (O’Neill & Dubinski 2003; VK03), we find that the shape and evolution of $\fp(R)$ are qualitatively the same in simulations with different resolutions. When we look closely at the results, we clearly see differences between the models. For example, the minimum of $\fp(R)$ is at different radii, and the depth of the minimum varies from model to model. The smallest radius and the deepest minimum are attained by model $D_{\rm cs}$, where the bar is dissolved and a pseudobulge forms. The differences in the evolution of $\fp(R)$ are expected because the lengths and strengths of the bars are different in different models. Yet, it seems that the overall generic shape of $\fp$ after evolution is a robust prediction of the secular collisionless scenario. Comparison with observations ============================ Wyse (1998) presented estimates of $\fb$ and $f'_d$ for the Galaxy from the available observational information. We use more recent data to give updated estimates. We measure $\fb$ and $f'_d$ at the typical radii $r_e/2$ ($r_e$ is the effective bulge radius, $r_e\approx 0.7$ kpc, see Tremaine et al. 2002) and $\Rdi=2.5$ kpc, respectively. The bulge and inner-disc stellar volume densities are estimated from the galaxy model used in Bissantz & Gerhard (2002), where the parameters were fixed by fits to the dust-corrected COBE-DIRBE $L$-band maps (Spergel et al. 1995). Taking into account the bulge ellipticity and averaging the disc density vertically within $h_z=330$ pc (the inner-disc scale height, Chen et al. 2001), we obtain $\rho_b(r_e/2)=6.7\ \msunpc$ and $\rho(\Rdi)= 0.31\ \msunpc$. Regarding velocity dispersions, for the bulge we use an interpolation of measured $\sigma_{p}$’s at different projected galactocentric distances as compiled by Tremaine et al. 
(2002) ($\sigma_{p}(r_e/2) = 116.7$ km/s, corrected for ellipticity), and we assume velocity isotropy and that $\sigma_r\approx \sigma_{p}$. This approximation is valid for Sérsic profiles within $0.1\lesssim r/r_e \lesssim 10$ (Ciotti 1991). For the disc, the three velocity dispersions at $\Rdi$ are calculated by using the radial profiles given in Lewis & Freeman (1989). Thus, at $\Rdi$, $\sigma_R=78.9$ km/s, $\sigma_{\phi}= 73.1$ km/s, and $\sigma_z=41.8$ km/s ($\sigma_z=0.53\sigma_R$ was assumed). For completeness, we also calculate $\fp$ in the outer bulge (at 1.5 kpc) and in the solar neighborhood (8.5 kpc). For the former, the same Bissantz & Gerhard (2002) bulge model and the Tremaine et al. (2002) compilation for $\sigma_{p}$ were used. For the latter, we use the local estimate of the stellar density, $\rho_s=0.09\ \msunpc$ (Holmberg & Flynn 2000), and the velocity dispersion profiles from Lewis & Freeman (1989). The values of $\fb(r_e/2)$, $\fb(1.5\,{\rm kpc})$, $f'_d$, and $f'_{\odot}$ ($4.3\times 10^{-6}$, $7.8\times 10^{-7}$, $9.6\times 10^{-7}$, and $2.8\times 10^{-6}\ \msun\,{\rm pc}^{-3}\,({\rm km/s})^{-3}$, respectively) are indicated with empty squares in Fig. 5. The qualitative agreement in the shape of $\fp(R)$ between the observations and the numerical predictions is remarkable, in spite of the fact that the observations have large uncertainties and the models do not include gas, SF processes, gas infall or minor mergers. Note also that $\fp(R)$ tends to stabilize after $1-2$ Gyr, although one still sees changes after this period. We consider that the collisionless secular scenario (not including dissipative physics) is a good approximation for Milky Way-like galaxies for their last 4-7 Gyr. Before this time, the discs were much more gaseous and dissipative phenomena should be taken into account. The models predict that the phase-space density decreases with time even at large radii. This decline of $\fp$ in the outer regions is produced by spiral waves, which develop in the unstable disc. 
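The arithmetic behind the quoted bulge value can be checked directly (our verification; the small difference from the quoted $4.3\times 10^{-6}$ presumably reflects rounding of the inputs):

```python
# f'_b(r_e/2) = rho_b / sigma_p^3, assuming isotropy as in the text
rho_b = 6.7        # Msun pc^-3, Bissantz & Gerhard (2002) model value
sigma_p = 116.7    # km/s, from the Tremaine et al. (2002) compilation
f_b = rho_b / sigma_p**3
print(f"{f_b:.1e}")  # 4.2e-06 Msun pc^-3 (km/s)^-3
```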
In real galaxies the same effect should be produced by real spiral waves and possibly by molecular clouds. It is interesting to compare the models with what is observed for our Galaxy in the solar neighborhood. At $R\sim 8$ kpc, $\fp$ in the models roughly decreases by a factor of 5-10 during the 4-7 Gyr of evolution. Studies of the solar neighborhood show that the components of the stellar velocity dispersion increase with the age of the stars. This is the so-called age-velocity relation (for recent estimates see Rocha-Pinto et al. 2004 and references therein). Observations indicate that each of the three velocity dispersion components increased by $20-60\%$ during the last $\sim 6-7$ Gyr of galactic evolution. If this is an indication of how the whole stellar population evolves, we can make rough estimates of the evolution of $\fp$. For an equilibrium disc with stellar surface density $\Sigma$ and total rms velocity $\sigma$, the phase-space density scales as $\fp\propto \Sigma^2/\sigma^5$. If, for the sake of argument, we assume that the rms velocity of the whole stellar population increased by a factor of 2 during the last 6 Gyr without a substantial change in the stellar surface density, then we expect $\fp$ to decline by a factor of 32. A likely increase in the stellar surface density $\Sigma$ (e.g., Hernández, Avila-Reese & Firmani 2001) should reduce this very large factor. Yet, naively one does expect a large drop in $\fp$ for our Galaxy, just as our models predict. Finding $\fp$ for external galaxies is not easy, especially when it concerns the velocity dispersion. Here we estimate the ratios of $\fb$, calculated at $r_e/2$, to $f'_d$, calculated at $0.8h_d$, for four spiral galaxies of different morphological types (Sa-Sbc) studied by Shapiro et al. (2003). We use the $K$- and $I$-band surface brightness profiles and the $\sigma_{p}$ profiles reported in Shapiro et al. (2004). 
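The $\fp\propto \Sigma^2/\sigma^5$ scaling and the factor of 32 can be spelled out explicitly (our sketch; it only combines vertical equilibrium, $\sigma_z^2\propto\Sigma h_z$, with $\rho\propto\Sigma/h_z$, which together give $\rho\propto\Sigma^2/\sigma^2$ and hence $\fp=\rho/\sigma^3\propto\Sigma^2/\sigma^5$):

```python
def fprime_ratio(sigma_ratio, surf_ratio=1.0):
    """Change in f' ~ Sigma^2 / sigma^5 for given final-to-initial
    ratios of the surface density and the total rms velocity."""
    return surf_ratio**2 / sigma_ratio**5

print(1.0 / fprime_ratio(2.0))  # 32.0: doubling sigma at fixed Sigma lowers f' 32-fold
```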
The disc volume density is assumed to be proportional to the surface brightness divided by $2h_z$ ($h_z$ is set equal to $0.125h_d$, Kregel 2003). We also use the disc ($\sigma_R,\sigma_{\phi},\sigma_z$) profiles, which Shapiro et al. fit to their spectroscopic observations. The bulge luminosity density profile is estimated as follows: (i) the surface brightness profiles reported in Shapiro et al. are decomposed into a Sérsic bulge and an exponential disc; (ii) spherical symmetry is assumed for the bulge; (iii) its luminosity density profile is calculated with the derived Sérsic parameters by using the approximations given in Lima-Neto, Gerbal & Marquez (1999). We use $\sigma_p^3$ for the bulge, assuming an isotropic velocity-dispersion tensor and that $\sigma_r = \sigma_p$; therefore, our $\fb$ is a lower limit. To convert luminosity into mass, we assume that the M/L ratios in the $I$ and $K$ bands are 2 and 1.5 times larger for the bulge than for the disc, respectively. Table 2 shows the estimates of $\fb(r_e/2)/f'_d(0.8h_d)$ for the spirals from Shapiro et al. (2004). This ratio is indeed $>1$ for three of the galaxies and close to unity for the fourth. NGC 4030 has the latest type (Sbc) among the four and probably was not affected significantly by the secular evolution. Overall, the results are consistent with what we find for the Galaxy.

  Name       Type   $n$   $\fb(r_e/2)/f'_d(0.8h_d)$
  ---------- ------ ----- ---------------------------
  NGC 1068   Sb     2.1   $>6.7$
  NGC 2460   Sa     1.7   $>3.2$
  NGC 2775   Sab    1.8   $>1.3$
  NGC 4030   Sbc    2.0   $>0.9$

  : Bulge Sérsic index $n$ and phase-space density bulge-to-disc ratio for four galaxies.

Conclusions =========== We studied the evolution of the observational measure of the coarsely grained phase-space density, $\fp$, in high-resolution N-body simulations of Galaxy-like models embedded in live CDM haloes. In our models the initially thin stellar disc is unstable. As the system evolves, a bar with an almost exponential density profile is produced. 
The bar redistributes matter in such a way that the disc ends up with a high accumulation of mass in the centre and an extended outer disc with a density profile shallower than the exponential law. During the secular evolution, the disc is also dynamically heated and thickened, mainly in the inner parts, where a bulge-like structure (peanut-shaped bar or pseudobulge) arises. The secular evolution produces dramatic changes in the radial distribution of the coarsely grained phase-space density $f(R)$. As the disc is heated and expanded vertically, $f(R)$ decreases at every radius $R$. The outer region of the bar produces a large phase mixing in the inner disc, larger than at the centre. As a result, the $f(R)$ profile develops a valley that deepens with time, with a pronounced minimum in the inner disc, where the central, bulge-like mass concentration ends. In this region the vertical heating and the radial mass exchange in the disc are maximal. Our results on the evolution and shape of the $f$ profile are qualitatively robust against initial conditions and assumptions, numerical resolution, and the way of measuring the volume density and velocity dispersions. We conclude that the secular evolution of a [*collisionless*]{} galactic disc is able to form a thick, dynamically hot, central mass concentration (eventually a pseudobulge), where the phase-space density is much higher than in the inner disc. Using observational data, we have estimated $f$ at several radii for the Galaxy. In particular, we estimated the bulge-to-inner-disc ratio of $f$. The qualitative agreement with our numerical results is remarkable. Therefore, the secular evolution of a collisionless disc yields a radial coarsely grained phase-space density profile in agreement with that observationally inferred for the Galaxy. The phase-space density constraints thus favor the secular bulge formation scenario.
The inclusion of other important physical ingredients, such as gas dissipative effects and satellite accretion, will likely enhance the secular evolution of disc models.

Acknowledgments {#acknowledgments .unnumbered}
===============

We are grateful to J. Gerssen for sending us their observational data in electronic form, and to S. Courteau for useful discussions. We acknowledge J. Wadsley and T. Quinn for allowing us to mention results from a simulation carried out with the Gasoline code before its publication. We thank the referee for useful comments. This work was partially funded by CONACyT grant 40096-F, and by a DEGEP UNAM grant to A. C. He also thanks CONACyT for a graduate fellowship. In addition, O. V. acknowledges support by NSF ITR grant NSF-0205413, and A. K. acknowledges support by NASA and NSF grants.

[99]{}

Aguerri J. A. L., Balcells M. & Peletier R. F., 2001, A&A, 367, 428
Athanassoula E. & Misiriotis A., 2002, MNRAS, 330, 35
Avila-Reese V. & Firmani C., 2000, RevMexAA, 36, 23
\_\_\_\_\_\_\_. 1999, in “Star Formation in Early-Type Galaxies”, Eds. J. Cepa & P. Carral, ASP Conf. Ser. 163, 243
Balcells M., Graham A., Dominguez-Palmero L. & Peletier R., 2003, ApJ, 582, L79
Bell E. F., McIntosh D. H., Katz N. & Weinberg M. D., 2003, ApJSS, 149, 289
Binney J. & Tremaine S., 1987, “Galactic Dynamics”, Princeton Univ. Press, Princeton
Bissantz N. & Gerhard O., 2002, MNRAS, 330, 591
Bissantz N., Englmaier P. & Gerhard O., 2003, MNRAS, 340, 949
Blitz L. & Spergel D. N., 1991, ApJ, 370, 205
Bournaud F. & Combes F., 2002, A&A, 392, 83
Carlberg R., 1986, ApJ, 310, 593
Carollo M., 2004, Carnegie Observatories Astrophys. Ser., Vol 1, Cambridge Univ. Press, in press
Chen B. et al., 2001, ApJ, 553, 184
Ciotti L., 1991, A&A, 249, 99
Combes F. & Sanders R.H., 1981, A&A, 96, 164
Combes F., Debbasch F., Friedly D. & Pfenniger D., 1990, A&A, 233, 82
Debattista V. P., Carollo C. M., Mayer L. & Moore B., 2004, ApJ, 604, L93
Dehnen W. & Binney J.
1998, MNRAS, 298, 387
de Jong R., 1996, A&A, 313, 45
Falcon-Barroso J., Balcells M., Peletier R. & Vazdekis A., 2003, A&A, 405, 455
Fukugita M., Hogan C. J. & Peebles P. J. E., 1998, ApJ, 503, 518
Graham A., 2001, ApJ, 121, 820
Hernández X., Avila-Reese V. & Firmani C., 2001, MNRAS, 327, 329
Hernquist L., 1993, ApJSS, 86, 389
Hernquist L., Spergel D. N. & Heyl J. S., 1993, ApJ, 416, 415
Holmberg J. & Flynn C., 2000, MNRAS, 313, 209
Hunt L.K., Pierini D. & Giovanardi C., 2004, A&A, 414, 905
Kerr F. J., Bowers P. F., Jackson P. D. & Kerr M., 1986, A&A Suppl., 66, 373
Klypin A., Zhao H. & Somerville R., 2002, ApJ, 573, 597
Klypin A. et al., 2005, in preparation
Knapp G. R., Stark A. A. & Wilson R. W., 1985, AJ, 90, 254
Kormendy J., 1979, ApJ, 227, 714
Kormendy J., 1982, ApJ, 257, 75
Kormendy J. & Kennicutt R., 2004, ARA&A, in press
Kravtsov A.V., Klypin A.A. & Khokhlov A.M., 1997, ApJS, 111, 73
Kregel M., 2003, PhD Thesis, University of Groningen
Kuijken K. & Gilmore G., 1989, MNRAS, 239, 605
Lake G., 1989, AJ, 97, 1312
Lewis J. R. & Freeman K. C., 1989, AJ, 97, 139
Lima-Neto G.B., Gerbal D. & Marquez I., 1999, MNRAS, 309, 481
MacArthur L., Courteau S. & Holtzmann J., 2003, ApJ, 582, 689
Navarro J. F., Frenk C. S. & White S. D. M., 1997, ApJ, 490, 493
Norman C., Sellwood J. & Hasan H., 1996, ApJ, 462, 114
O’Neill J.K. & Dubinski J., 2003, MNRAS, 346, 2510
Pfenniger D. & Norman C., 1990, ApJ, 363, 391
Raha N., Sellwood J. A., James R. A. & Kahn F. D., 1991, Nature, 352, 411
Rocha-Pinto H. J., Flynn C., Scalo J., Hänninen J., Maciel W.J. & Hensler G., 2004, A&A, 423, 517
Sellwood J.A., 1980, A&A, 89, 296
Sérsic J. L., 1968, Atlas de Galaxias Australes (Córdoba, Argentina: Observatorio Astronómico)
Shapiro K., Gerssen J. & van der Marel R. P., 2003, AJ, 126, 2707
Shen J. & Sellwood J.A., 2004, ApJ, 604, 614
Siebert A., Bienaymé O. & Soubiran C., 2003, A&A, 399, 531
Spergel D. N., Malhotra S. & Blitz L., 1995, in Spiral Galaxies in the Near-IR, eds. Minniti D.
& Rix H., Springer, Berlin, p. 128
Tremaine S., Hénon M. & Lynden-Bell D., 1986, MNRAS, 219, 285
Tremaine S. et al., 2003, ApJ, 574, 740
Valenzuela O. & Klypin A., 2003, MNRAS, 345, 406 (VK03)
van den Bosch F.C., 1998, ApJ, 507, 601
Wadsley J. W., Stadel J. & Quinn T., 2004, New Astr., 9, 137
Wyse R., 1998, MNRAS, 293, 429-433
Wyse R., Gilmore G. & Franx M., 1997, ARA&A, 35, 637

\[lastpage\]

[^1]: In order to better match the model with the Milky Way, we rescaled the model: all coordinates and masses were scaled down by a factor of 1.15. As any purely gravitational system, it can be arbitrarily rescaled using two free independent scaling factors. In this case we chose mass and distance; time, velocity, surface density, and so on are scaled accordingly.
--- abstract: 'Let $\mathbf{k}$ be a differential field and let $[A]\,:\,Y''=A\,Y$ be a linear differential system where $A\in\mathrm{Mat}(n\,,\,\mathbf{k})$. We say that $A$ is in a reduced form if $A\in\mathfrak{g}(\bar{\mathbf{k}})$ where $\mathfrak{g}$ is the Lie algebra of $[A]$ and $\bar{\mathbf{k}}$ denotes the algebraic closure of $\mathbf{k}$. We owe the existence of such reduced forms to a result due to Kolchin and Kovacic [@Ko71a]. This paper is devoted to the study of reduced forms of (higher order) variational equations along a particular solution of a complex analytic Hamiltonian system $X$. Using a previous result [@ApWea], we will assume that the first order variational equation has an abelian Lie algebra so that, at first order, there are no Galoisian obstructions to Liouville integrability. We give a strategy to (partially) reduce the variational equations at order $m+1$ if the variational equations at order $m$ are already in a reduced form and their Lie algebra is abelian. Our procedure stops when we meet obstructions to the meromorphic integrability of $X$. We make strong use both of the lower block triangular structure of the variational equations and of the notion of associated Lie algebra of a linear differential system (based on the works of Wei and Norman in [@WeNo63a]). Obstructions to integrability appear when at some step we obtain a non-trivial commutator between a diagonal element and a nilpotent (subdiagonal) element of the associated Lie algebra. We use our method coupled with a reasoning on polylogarithms to give a new and systematic proof of the non-integrability of the Hénon-Heiles system. We conjecture that our method is not only a partial reduction procedure but a complete reduction algorithm. In the context of complex Hamiltonian systems, this would mean that our method would be an effective version of the Morales-Ramis-Simó theorem.'
address: - 'XLIM, Université de Limoges, France' - 'XLIM, Université de Limoges, France' author: - '[A. [Aparicio Monforte]{}]{}' - '[ J.-A. [Weil]{}]{}' date: 'June 2010 and, in revised form, Oct 12, 2010.' title: A Reduction Method for Higher Order Variational Equations of Hamiltonian Systems

---

[^1]

Introduction
============

Let $(\mathbf{k}\,,\, '\,)$ be a differential field and let $[A]: \; Y'=AY$ be a linear differential system with $A\in \mathcal{M}_{n}(\mathbf{k})$. We say that the system is in [*reduced form*]{} if its matrix can be decomposed as $A=\sum^{d}_{i=1} \alpha_i A_i$ where $\alpha_i \in \mathbf{k}$ and $A_i\in Lie(Y'=AY)$, the Lie algebra of the differential Galois group of $[A]$. This notion of reduced form was introduced in [@Ko71a] and subsequently used (for instance in [@MiSi96a] and [@MiSi96b]) to study the inverse problem. It has been revived, with a constructive emphasis, in [@ApWea]. It is a powerful tool in various aspects of the study of linear differential systems. The main contribution of this work lies in the context of Hamiltonian mechanics and Ziglin-Morales-Ramis theory [@MoRaSi07a]: reduced forms provide a new and powerful effective method to obtain (non-)abelianity and integrability obstructions from higher variational differential equations. This article is structured in the following way. First we lay down the background on Hamiltonian systems, differential Galois theory, integrability and the Morales-Ramis-Simó theorem. In section \[section: reduced forms\], we define precisely the notions of reduced form and Wei-Norman decomposition and the link between them. Section \[section: reduced VEm\] contains the theoretical core of this work: we focus on the application of reduced forms to the study of the meromorphic integrability of Hamiltonian systems.
We introduce a reduction method for block lower triangular linear differential systems and apply it to higher variational equations, in particular when the Lie algebra of the diagonal blocks is abelian and of dimension 1. In section \[section: new proof\], we demonstrate the use of this method, coupled with our reduction algorithm for matrices in $\mathfrak{sp}(2,\mathbf{k})$ [@ApWea], by giving a new, effective and self-contained Galoisian non-integrability proof of the degenerate Hénon-Heiles system ([@Mo99a], [@MoRaSi07a], [@MaSi09a]), which has long served as a key example in this field.

Background
==========

Hamiltonian Systems
-------------------

Let $(M\,,\,\omega)$ be a complex analytic symplectic manifold of complex dimension $2n$ with $n\in\mathbb{N}$. Since $M$ is locally isomorphic to an open domain $U\subset\mathbb{C}^{2n}$, Darboux’s theorem allows us to choose a set of local coordinates $(q\,,\,p)=(q_1 \,\ldots q_n\,,\, p_1\ldots p_n)$ in which the symplectic form $\omega$ is expressed as $J:=\tiny\left[\begin{array}{cc}0 & I_n \\-I_n & 0\end{array}\right]$. In these coordinates, given a function $H\in C^{2}(U)\,:\,U\,\longrightarrow\,\mathbb{C}$ (the Hamiltonian) we define a Hamiltonian system over $U\subset\mathbb{C}^{2n}$ as the differential equation given by the vector field $X_H:= J\nabla H$: $$\label{(1)} \begin{array}{cccc} \dot{q}_i = \frac{\partial H }{\partial p_i}(q\,,\,p) &,& \dot{p}_i = -\frac{\partial H }{\partial q_i}(q\,,\,p)& \text{for} \,\, i=1\ldots n \end{array}$$ The Hamiltonian $H$ is constant over the integral curves of (\[(1)\]) because $X_H\cdot H:=\langle \nabla H\,,\, X_H\rangle = \langle \nabla H \,,\, J\nabla H\rangle =0$. Therefore, integral curves lie on the energy levels of $H$. A function $F\,:\, U\, \longrightarrow \,\mathbb{C}$ meromorphic over $U$ is called a *meromorphic first integral of* (\[(1)\]) if it is constant over the integral curves of (\[(1)\]) (equivalently $X_H \cdot F =0$).
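The defining property $X_H \cdot H = \langle \nabla H, J\nabla H\rangle = 0$ can be checked symbolically; a small sketch assuming SymPy, using the degenerate Hénon-Heiles Hamiltonian that appears later in the paper:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
z = [q1, q2, p1, p2]
# The degenerate Henon-Heiles Hamiltonian studied later in the paper.
H = (p1**2 + p2**2)/2 + (q1**2 + q2**2)/2 + q1**3/3 + q1*q2**2/2
# The symplectic structure J in Darboux coordinates (n = 2).
J = sp.zeros(4, 4)
J[0, 2] = J[1, 3] = 1
J[2, 0] = J[3, 1] = -1
gradH = sp.Matrix([sp.diff(H, v) for v in z])
X_H = J*gradH                      # the Hamiltonian vector field X_H = J grad H
# X_H . H = <grad H, J grad H> = 0: H is constant along integral curves.
assert sp.expand(gradH.dot(X_H)) == 0
```

The cancellation is structural (antisymmetry of $J$), so the same check passes for any smooth $H$.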
Observe that the Hamiltonian is a first integral of (\[(1)\]). The Poisson bracket $\lbrace \,,\,\rbrace$ of two meromorphic functions $f, g$ defined over a symplectic manifold is defined by $\lbrace f \,,\, g \rbrace:=\langle \nabla f \,,\, J\nabla g\rangle$; in Darboux coordinates its expression is $\lbrace f \,,\, g \rbrace = \sum^{n}_{i=1} \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}-\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}$. The Poisson bracket endows the set of first integrals with a structure of Lie algebra. A function $F$ is a first integral of (\[(1)\]) if and only if $\lbrace F\,,\, H\rbrace=0$ (i.e. $H$ and $F$ are *in involution*). A Hamiltonian system with $n$ degrees of freedom is called *meromorphically Liouville integrable* if it possesses $n$ first integrals (including the Hamiltonian), meromorphic over $U$, which are functionally independent and in pairwise involution.

Variational equations {#subsection:variational equations}
---------------------

Among the various approaches to the study of meromorphic integrability of complex Hamiltonian systems, we choose a Ziglin-Morales-Ramis type of approach. Concretely, our starting points are the Morales-Ramis theorem [@Mo99a] and its generalization, the Morales-Ramis-Simó theorem [@MoRaSi07a]. These two results give necessary conditions for the meromorphic integrability of Hamiltonian systems. We need to introduce here the notion of variational equation of order $m\in\mathbb{N}$ along a non-punctual integral curve of (\[(1)\]). Let $\phi(z,t)$ be the flow defined by the equation (\[(1)\]). For $z_0 \in \Gamma$, we let $\phi_{0}(t):=\phi(z_0 \,,\, t)$ denote a temporal parametrization of a non-punctual integral curve $\Gamma$ of (\[(1)\]) such that $z_0 = \phi(w_0 , t_0)$.
We define $\mathrm{(VE^{m}_{\phi_0})}$ the *$m^{th}$ variational equation* of (\[(1)\]) along $\Gamma$ as the differential equation satisfied by the $\xi_{j}:=\frac{\partial^{j}\phi(z\,,\,t)}{\partial z^j}$ for $j\leq m$. For instance, $\mathrm{(VE^{3}_{\phi_0})}$ is given by (see [@Mo99a] and [@MoRaSi07a]): $$\begin{aligned} \nonumber \dot{\xi}_1 &=& d_{\phi_0} X_H \xi_1\\ \nonumber \dot{\xi}_2 &=& d^{2}_{\phi_0}X_H(\xi_1\,,\, \xi_1) + d_{\phi_0}X_H \xi_2\\ \nonumber \dot{\xi}_3 &=& d^{3}_{\phi_0}X_H(\xi_1\,,\, \xi_1\,,\, \xi_1) + 3\, d^2_{\phi_0}X_H (\xi_1\,,\,\xi_2) + d_{\phi_0} X_H \xi_3.\end{aligned}$$ For $m=1$, the equation $\mathrm{(VE^{1}_{\phi_0})}$ is a linear differential equation $$\dot{\xi}_1 = A_{1} \xi_1\text{ where }A_{1}:= d_{\phi_0} X_H= J\cdot Hess_{\phi_0}(H)\in\mathfrak{sp}(n\,,\, \mathbf{k})\text{ and } \mathbf{k}:=\mathbb{C}\langle \phi_0(t) \rangle.$$ Higher order variational equations are not linear in general for $m\geq 2$. However, taking symmetric products, one can give for every $\mathrm{(VE^{m}_{\phi_0})}$ an equivalent linear differential system $\mathrm{(LVE^{m}_{\phi_0})}$ called the [*linearized*]{} $m^{th}$ [*variational equation*]{} (see [@MoRaSi07a]). Since the $\mathrm{(LVE^{m}_{\phi_0})}$ are linear differential systems, we can consider them in the light of differential Galois theory ([@PuSi03a; @Mo99a]). We take as base field the differential field $\mathbf{k} := \mathbb{C}\langle \phi_{0} \rangle$ generated by the coefficients of $\phi_{0}$ and their derivatives. Let $K_m$ be a Picard Vessiot extension of $\mathrm{(LVE^{m}_{\phi_0})}$ for $m\geq 1$. The differential Galois group $G_m := \mathrm{Gal}(K_m /\mathbf{k})$ of $\mathrm{(LVE^{m}_{\phi_0})}$ is the group of all differential automorphisms of $K_m$ that leave the elements of $\mathbf{k}$ fixed.
As $G_m$ is isomorphic to a linear algebraic group over $\mathbb{C}$, it is in particular an algebraic manifold and we can define its Lie algebra $\mathfrak{g}_m:=T_{I_{d_m}} G^{\circ}_m$, the tangent space of $G_m$ at $I_{d_m}$ (with $ d_m= \tiny\sum^{m}_{i=1} \binom{n+i-1}{n-1}$ the size of $\mathrm{(LVE^{m}_{\phi_0})}$). The Lie algebra $\mathfrak{g}_m$ is a complex vector space of square matrices of size $d_m$ whose Lie bracket is given by the commutator of matrices $[M\,,\,N] = M\cdot N - N \cdot M$. We say that $\mathfrak{g}_m$ is abelian if $[\mathfrak{g}_m\,,\, \mathfrak{g}_m] = 0$. Following the notations above, we can finally give the Morales-Ramis-Simó theorem: \[MRS\]([@MoRaSi07a]): If the Hamiltonian system (\[(1)\]) is meromorphically Liouville integrable then the $\mathfrak{g}_m$ are abelian for all $m\in \mathbb{N}^{\star}$. Partial effective versions of this theorem have been proposed. In [@MoRaSi07a] (and already [@Mo99a]), a local criterion is given for the case when the first variational equation has Weierstrass functions as coefficients; in [@MaSi09a], a powerful approach using certified numerical computations is proposed. In the case of Hamiltonian systems with a homogeneous potential, yet another approach is given in [@CaDuMaPr10a].\ Our aim is to propose an alternative (algorithmic) method using a (constructive) notion of reduced form for the variational equation. This strategy should supply new criteria of non-integrability as well as some kind of “normal form along a solution”. We will now explain this notion of reduced form (which we started investigating in [@ApWea]) and show how to apply it. We will then apply our reduction method in detail to the well-known degenerate Hénon-Heiles system proposed in [@MoRaSi07a].
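As a concrete illustration of the statement $A_1 = J\cdot Hess_{\phi_0}(H)\in\mathfrak{sp}(n,\mathbf{k})$ above, one can build $A_1$ symbolically and verify the symplectic-algebra condition $A_1^{T}J + JA_1 = 0$; a sketch assuming SymPy, again with the Hénon-Heiles Hamiltonian used later in the paper:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
z = [q1, q2, p1, p2]
H = (p1**2 + p2**2)/2 + (q1**2 + q2**2)/2 + q1**3/3 + q1*q2**2/2
J = sp.zeros(4, 4)
J[0, 2] = J[1, 3] = 1
J[2, 0] = J[3, 1] = -1
# Matrix of (VE^1): A1 = J * Hess(H), to be evaluated along a solution phi_0.
A1 = J*sp.hessian(H, z)
# Membership in sp(2, k): A^T J + J A = 0 holds for any A = J S with S symmetric.
assert (A1.T*J + J*A1).expand() == sp.zeros(4, 4)
```

Evaluating the entries of `A1` along a particular solution $\phi_0(t)$ then yields the concrete linear system with coefficients in $\mathbf{k}=\mathbb{C}\langle\phi_0\rangle$.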
Reduced Forms {#section: reduced forms}
=============

Let $(\mathbf{k}\,,\,'\,)$ be a differential field with field of constants $C$ and let $Y'=AY$ be a linear differential system with $A=(a_{i j})\in \mathcal{M}_{n}(\mathbf{k})$. Let $G$ be the differential Galois group of this system and $\mathfrak{g}$ the Lie algebra of $G$. We sometimes use the slight notational abuse $\mathfrak{g}=Lie(Y'=AY)$.\ Let $a_{1},\ldots,a_{r}$ denote a basis of the $C$-vector space spanned by the entries $a_{i,j}\in k$ of $A$. Then we have $$A:=\sum^{r}_{i=1} a_{i}(x) M_i ,\quad M_i \in\mathcal{M}_{n}(C).$$ This decomposition appears (slightly differently) in [@WeNo63a]; we call it a *Wei-Norman decomposition* of $A$. Although this decomposition is not unique (it depends on the choice of the basis $(a_{i})$), the $C$-vector space generated by the $M_i$ is unique. With these notations, the Lie algebra generated by $M_1 ,\ldots , M_r$ and their iterated Lie brackets is called *the Lie algebra associated to $A$*, and will be denoted as $Lie(A)$. Consider the matrix $$A_1:=\left[\begin{array}{cccc} 0 & 0 & 2/x & 0 \\ 0 & 0 & 0 & 2/x\\ \frac{2(x^4 - 10 x^2 + 1 )}{x(x^2 + 1)^2} & 0 & 0 & 0\\ 0 & -\frac{12 x }{(x^2 + 1)^2 } & 0 & 0\end{array}\right].$$ Expanding the fraction $\frac{2(x^4 - 10 x^2 + 1 )}{x(x^2 + 1)^2}$ gives a Wei-Norman decomposition as $$A_{1}=\frac2{x} M_{1} -\frac{12 x }{(x^2 + 1)^2 } M_{2},$$ where $$M_{1}= \left[\begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right],\, M_{2}=\left[\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 2 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\end{array}\right]$$ and $Lie(A_{1})$ has dimension $6$. A celebrated theorem of Kovacic (and/or Kolchin) states that $\mathfrak{g}\subset Lie(A)$. This motivates the following definition: We say that $A$ is in *reduced form* if $Lie(A)=\mathfrak{g}$. A [*gauge transformation*]{} is a change of variable $Y=PZ$ with $P\in \mathrm{GL}(n\,,\, \mathbf{k}) $.
Then $Z'=RZ$ where $R:=P^{-1} (AP-P')$. In what follows, we adopt the notation $P[A]:=P^{-1} (AP-P')$ for the system obtained after the gauge transformation $Y=PZ$.\ The following theorem, due to Kovacic (and/or Kolchin), ensures the existence of a gauge transformation $P\in \mathrm{GL}(n\,,\, \bar{\mathbf{k}}) $ such that $P[A]\in\mathfrak{g}(\bar{\mathbf{k}})$ when $ {\mathbf{k}} $ is a $C_1$-field[^2]. \[Kovacic\] Let $k$ be a differential $C_1$-field. Let $A\in\mathcal{M}_n (k)$ and assume that the differential Galois group $G$ of the system $Y'=AY$ is connected. Let $\mathfrak{g}$ be the Lie algebra of $G$. Let $H$ be a connected algebraic group such that its Lie algebra $\mathfrak{h}$ satisfies $A\in\mathfrak{h}(k)$. Then $G\subset H$ and there exists $P\in H(k)$ such that the equivalent differential equation $F'=\tilde{A}F$, with $Y=PF$ and $\tilde{A}=P[A]=P^{-1}AP-P^{-1}P'$, satisfies $\tilde{A}\in \mathfrak{g}(k)$. We say that a matrix $P\in\mathrm{GL}_n (\mathbf{k})$ is a *reduction matrix* if $P[A]\in\mathfrak{g}(\mathbf{k})$, i.e. if $P[A]$ is in reduced form. We say that a matrix $Q\in\mathrm{GL}_n (\mathbf{k})$ is a *partial reduction matrix* when $Q[A]\in\mathfrak{h}(\mathbf{k})$ with $\mathfrak{g}\subsetneq\mathfrak{h}\subsetneq Lie(A)$. The general method used to put $A$ in a reduced form consists in performing successive partial reductions until a reduced form is reached.\ In our paper [@ApWea], we provide a reduction algorithm that computes a reduction matrix $P_1 \in\mathrm{Sp}(2,\mathbf{k})$ for $4\times 4$ linear differential systems $Y'=A_1Y$ with $A_1\in\mathfrak{sp}(2,\mathbf{k})$ (and also for $2\times 2$ systems). The first variational equation of a Hamiltonian system with $n=2$ degrees of freedom belongs to this class of systems.
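The Wei-Norman example above can be checked mechanically: reconstruct $A_1$ from $M_1, M_2$ and close $\{M_1, M_2\}$ under commutators to recover $\dim Lie(A_1) = 6$. A sketch assuming SymPy (the `lie_closure` helper is our own illustration, not the authors' implementation):

```python
import sympy as sp

x = sp.symbols('x')
M1 = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 0, 0]])
M2 = sp.Matrix([[0, 0, 0, 0], [0, 0, 0, 0], [2, 0, 0, 0], [0, 1, 0, 0]])
# Wei-Norman decomposition: A1 = (2/x) M1 - (12x/(x^2+1)^2) M2
A1 = (2/x)*M1 - (12*x/(x**2 + 1)**2)*M2
A1_paper = sp.Matrix([
    [0, 0, 2/x, 0],
    [0, 0, 0, 2/x],
    [2*(x**4 - 10*x**2 + 1)/(x*(x**2 + 1)**2), 0, 0, 0],
    [0, -12*x/(x**2 + 1)**2, 0, 0]])
assert sp.simplify(A1 - A1_paper) == sp.zeros(4, 4)

def lie_closure(gens):
    """Close a family of constant matrices under commutators."""
    basis = []
    def try_add(m):
        cand = basis + [m]
        # keep m only if it enlarges the linear span of the current basis
        if sp.Matrix([list(b) for b in cand]).rank() > len(basis):
            basis.append(m)
            return True
        return False
    for g in gens:
        try_add(g)
    grew = True
    while grew:
        grew = False
        for p in list(basis):
            for q in list(basis):
                if try_add(p*q - q*p):
                    grew = True
    return basis

closure = lie_closure([M1, M2])
assert len(closure) == 6     # dim Lie(A1) = 6, as stated in the text
```

The loop terminates because the span can only grow inside the 16-dimensional space of $4\times 4$ matrices.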
If $P_1$ is a reduction matrix for $A_1$ then $Sym^{m} P_1$ is a reduction matrix for $sym^{m} A_1$ because $Sym^m $ is a group morphism (see [@PuSi03a], chapter 2 or [@FuHa91a] appendix B2).\ In what follows, we will assume that we have reduced the first variational equation, that its Lie algebra is abelian (so that the Morales-Ramis theorem gives no obstruction to integrability), and use this to start reducing higher variational systems.\ We will follow the philosophy of Kovacic’s theorem \[Kovacic\] and look for reduction matrices inside $\exp(Lie(A))$. We remark that, in the context of Lie-Vessiot systems, an analog of the above Kolchin-Kovacic reduction theorem is given by Blazquez and Morales ([@BlaMor10], section 5, in particular theorems 5.3 and 5.8) in relation to Lie reduction.\ The notion of a reduced form is useful in many contexts, such as: inverse problems (where the notion was first studied), the computation of the transcendence degree of Picard Vessiot extensions, fast resolution of linear differential systems with an abelian Lie algebra and to implement the Wei-Norman method for solving linear differential systems with a solvable Lie algebra (using the Campbell-Hausdorff formula) [@WeNo63a]. Reduced forms are also a new and powerful tool that provides (non-)abelianity and integrability obstructions for (variational) (see Theorem \[MRS\]) linear differential equations arising from Hamiltonian mechanics, as we will now see. Reduced Forms for Higher Variational Equations {#section: reduced VEm} ============================================== Preliminary results {#preliminary} ------------------- Let $(\mathbf{k}\,,\, ' )$ be a differential field and let $d\in\mathbb{N}$. 
Consider a linear differential system $Y' = AY$ whose matrix $A\in {{\mathcal M}}_{d}(\mathbf{k})$ is block lower triangular as follows: $$A:=\left[\begin{array}{cc}A_{1} & 0 \\ A_{3} & A_{2}\end{array}\right]= A_{diag} + A_{sub} \text{ where } A_{diag}=\left[\begin{array}{cc} A_1 & 0 \\ 0 & A_2\end{array}\right] \text{ and } A_{sub}=\left[\begin{array}{cc} 0 & 0 \\ A_3 & 0\end{array}\right].$$ The submatrices satisfy $ A_{1}\in {{\mathcal M}}_{d_1}(\mathbf{k})$, $ A_{2}\in {{\mathcal M}}_{d_2}(\mathbf{k})$, $A_3\in {{\mathcal M}}_{d_2 \times d_1} (\mathbf{k})$ and their dimensions add-up $d=d_1 + d_2$. Let $${{{\mathcal M}}}_{diag}:=\left\{\left[\begin{array}{cc}A_{1} & 0 \\ 0 & A_{2}\end{array}\right], A_{i}\in{{\mathcal M}}_{d_{i}}(\mathbf{k})\right\}$$ and $${{{\mathcal M}}}_{sub}:=\left\{\left[\begin{array}{cc}0 & 0 \\ B_{1} & 0\end{array}\right], B_{1}\in{{\mathcal M}}_{d_{2}\times d_{1}}(\mathbf{k})\right\}$$ \[diagsub\] Let $M_{1},M_{2}\in {{{\mathcal M}}}_{diag}$ and $N_{1},N_{2}\in {{{\mathcal M}}}_{sub}$. Then $M_{1}.M_{2}\in {{{\mathcal M}}}_{diag}$, $N_{1}.N_{2}=0$ (so that $N_{1}^{2}=0$ and $\exp(N_{1})=Id + N_{1}$), and $[M_{1},N_{1}]\in {{{\mathcal M}}}_{sub}$. The proof is a simple linear algebra exercise.\ Let $\mathfrak{g}:=Lie(Y'=AY)$ be the Lie algebra of the Galois group of $Y'=AY$ and let $\mathfrak{h}:=Lie(A)$ denote the Lie algebra associated to $A$. We write $\mathfrak{h}_{diag}:=\mathfrak{h} \cap {{{\mathcal M}}}_{diag}$ and $\mathfrak{h}_{sub}:=\mathfrak{h} \cap {{{\mathcal M}}}_{sub}$. The lemma shows that they are both Lie subalgebras (with $\mathfrak{h}_{sub}$ abelian) and $\mathfrak{h}=\mathfrak{h}_{diag}\oplus \mathfrak{h}_{sub}$. Furthermore, $[\mathfrak{h}_{diag},\mathfrak{h}_{sub}]\subset \mathfrak{h}_{sub}$ (i.e $\mathfrak{h}_{sub}$ is an ideal in $\mathfrak{h}$). When $\mathfrak{h}_{diag}$ is abelian, obstructions to the abelianity of $\mathfrak{h}$ only lie in the brackets $[\mathfrak{h}_{diag},\mathfrak{h}_{sub}]$. 
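The statements of the lemma are quick to verify on a concrete pair of block matrices; a sketch assuming SymPy (the matrices `M` and `N` are our own toy choices with $d_1 = d_2 = 2$):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
# M in M_diag (block diagonal), N in M_sub (lower-left block only)
M = sp.diag(sp.Matrix([[1, a], [0, 2]]), sp.Matrix([[3, 0], [b, 4]]))
N = sp.zeros(4, 4)
N[2, 0], N[3, 1] = c, a
assert N*N == sp.zeros(4, 4)        # N^2 = 0, hence exp(N) = Id + N
C = M*N - N*M                       # the commutator [M, N]
assert C[:2, :] == sp.zeros(2, 4)   # upper blocks vanish:
assert C[2:, 2:] == sp.zeros(2, 2)  # [M, N] stays in M_sub
```

Products of two `M_diag` elements are again block diagonal for the same reason, which is all the linear algebra the lemma needs.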
A first partial reduction for higher variational equations {#subsection: first partial reduction}
----------------------------------------------------------

Using the algorithm of [@ApWea], we may assume that the first variational equation has been put into a reduced form. We further assume that the first variational equation has an abelian Lie algebra (so that there is no obstruction to integrability at that level).\ As stated in section \[subsection:variational equations\], each $\mathrm{(VE^{m}_{\phi_0})}$ is equivalent to a linear differential system $\mathrm{(LVE^{m}_{\phi_0})}$ whose matrix we denote by $A_m$. The structure of the $A_m$ is block lower triangular, to wit $$A_m :=\left[\begin{array}{cc} sym^{m}(A_1) & 0 \\ B_m & A_{m-1}\end{array}\right]\in M_{d_m} (\mathbf{k})$$ where $A_1$ is the matrix of $\mathrm{(LVE^{1}_{\phi_0})}$. Assume that $A_{m-1}$ has been put in reduced form by a reduction matrix $P_{m-1}$. Then the matrix $Q_m \in \mathrm{GL}(d_m\,,\,\mathbf{k})$ defined by $$Q_m:=\left[\begin{array}{cc} Sym^{m}(P_1) & 0 \\ 0 & P_{m-1}\end{array}\right]$$ puts the diagonal blocks of the matrix $A_m$ into a reduced form (i.e. the system would be in reduced form if there were no $B_{m}$) and preserves the block lower triangular structure. Indeed, $$Q_m [A_m] = \left[ \begin{array}{cc} Sym^m(P_1)[sym^m A_1] & 0 \\ \tilde{B}_{m} & P_{m-1}[A_{m-1}]\end{array}\right]$$ where $$\tilde{B}_{m}:=P^{-1}_{m-1} B_m Sym^{m}(P_1).$$ Applying the notations of the previous section to $\tilde{A}:=Q_m [A_m]$, we see that $Lie(\tilde{A})_{diag}$ and $Lie(\tilde{A})_{sub}$ are abelian. Obstructions to integrability stem from brackets between the diagonal and subdiagonal blocks. To aim at a reduced form, we need transformations which “remove” as many subdiagonal terms as possible while preserving the (already reduced) diagonal part. Recalling Kovacic’s theorem \[Kovacic\], our partial reduction matrices will arise as exponentials of subdiagonal elements.
Reduction tools for higher variational equations
------------------------------------------------

\[partial reduction\] Let $A:=Q_m [A_m]$ as above be the matrix of the $m$-th variational equation $Y'=AY$ after reduction of the diagonal part. Write $A=A_{diag} + \sum^{d_{sub}}_{i=1} \beta_i B_i$ with $\beta_i \in\mathbf{k}$, where the $B_{i}$ form a basis of $Lie(A)_{sub}$ (in the notations of section \[preliminary\]).\ Let $[A_{diag}\,,\, B_1]=\sum_{i=1}^{{d_{sub}}} \gamma_{i} B_{i}$, $\gamma_{i} \in\mathbf{k}$. Assume that the equation $y'=\gamma_{1}y+\beta_{1}$ has a solution $g_{1}\in k$. Set $P:=\exp(g_{1}B_{1})=(Id+g_{1}B_{1})$. Then $$P[A] = A_{diag} + \sum_{i=\bf{2}}^{d_{sub}} \left[\beta_{i}+g_{1}\gamma_{i}\right]B_{i},$$ i.e. $P[A]$ no longer has any terms in $B_{1}$. Recall that $P[A] = P^{-1}(AP-P')$ and let $P=Id + g_1 B_1$. We have $P'=g'_1 B_1$ whence $$AP=(A_{diag} + \sum^{d_{sub}}_{i=1} \beta_i B_i)(I+g_{1} B_1) = A_{diag} + \sum_{i\geq 1} \beta_i B_i + g_{1} A_{diag} B_1$$ since $B_i B_j = 0$. Therefore we have $AP-P' = A_{diag}+ g_{1} A_{diag} B_1 + (\beta_{1}-g_{1}')B_{1} + \sum^{d_{sub}}_{i=2}\beta_i B_i$ which implies $$\begin{aligned} \nonumber P^{-1}(AP-P') &=& (Id-g_{1} B_1) \left[A_{diag}+ g_{1} A_{diag} B_1 + (\beta_{1}-g_{1}')B_{1} + \sum^{d_{sub}}_{i=2}\beta_i B_i \right]\\ \nonumber&=&A_{diag} + g_{1}[A_{diag}\,,\, B_1] + (\beta_{1}-g_{1}')B_{1} + \sum^{d_{sub}}_{i=2}\beta_i B_i\end{aligned}$$ because $B_1 A_{diag} B_1 = B_1 [A_{diag} \,,\, B_1] + A_{diag} B_{1} B_{1} = B_{1}\left[\sum \gamma_i B_i\right]=0$. So, as $g_{1}'=\gamma_{1}g_{1}+\beta_{1}$, we obtain $$P[A] = A_{diag} + \sum_{i=\bf{2}}^{d_{sub}} \left[\beta_{i}+g_{1}\gamma_{i}\right]B_{i}.$$ If $\gamma_{1}=0$ then we simply have $g_{1}=\int\beta_{1}$. In that case, suppose that $\mathbf{k}=\mathbb{C}(x)$ and that $\beta_1 = R'_1 + L_1$ where $R_1 \in \mathbb{C}(x)$ and $L_1 \in \mathbb{C}(x)$ is nonzero with only simple poles; then $\int \beta_1 \notin \mathbb{C}(x)$.
However, if we apply proposition \[partial reduction\] with the change of variable $Y= (I + R_1 B_1) Z$, a term in $B_1$ will be left that will only contain simple poles. This proposition gives a nice formula for reduction. However, it is hard to iterate unless $Lie(A)$ has additional properties (solvable, nilpotent, etc.) because the next iteration may “re-introduce” $B_{1}$ in the matrix (because of the expression of the brackets). This proposition provides a reduction strategy when the map $[A_{diag},.]$ admits a triangular representation.\ To achieve this, we specialize to the case when the Lie algebra $\mathfrak{g}_{diag}$ has dimension (at most) $1$. Then we have $A_{diag} = \beta_{0} A_{0}$ where $\beta_{0}\in k$ and $A_{0}$ is a constant matrix. The above proposition specializes nicely: If $A_{diag}=\beta_0 A_0$ with $\beta_0\in\mathbf{k}$, $A_{0}\in\mathcal{M}_{n}(\mathbb{C})$ and $[A_0\,,\, B_1] = \lambda B_1$ for some constant eigenvalue $\lambda \neq 0$, then the change of variable $Y=PZ$ with $P:=(Id + g B_1)$, where $g'= \lambda g\beta_0 + \beta_1$, satisfies $P[A] = \beta_0 A_0 + \sum^{d_{sub}}_{i\geq 2} \beta_i B_i$. To implement this (and obtain a general reduction method), we let $\Psi_{0} : \mathfrak{h}_{sub} \rightarrow \mathfrak{h}_{sub}$, $B\mapsto [A_{0},B]$. This is now an endomorphism of a finite dimensional vector space; up to conjugation, we may assume the basis $(B_{i})$ to be the basis in which the matrix of $\Psi_{0}$ is in Jordan form. We are then in a position to apply the proposition iteratively (see the example below for details on the process). Note that $A_0$ need not be diagonal. The calculations of lemma \[partial reduction\] and the subsequent proofs remain valid when $A_0$ is block lower triangular. We have currently implemented this in Maple for the case when $A_{diag}$ is monogenic, i.e. its associated Lie algebra has dimension $1$.
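The specialized proposition can be illustrated on a toy $2\times 2$ instance (our own choice of $A_0$, $B_1$, $\beta_0$, $\beta_1$, not taken from the paper): with $A_0 = \mathrm{diag}(1,-1)$ and $B_1 = E_{21}$ we have $[A_0, B_1] = -2B_1$, and solving $g' = -2\beta_0 g + \beta_1$ gives a rational $g$ that removes the subdiagonal term entirely:

```python
import sympy as sp

x = sp.symbols('x')
A0 = sp.diag(1, -1)                      # A_diag = beta0 * A0, dim Lie = 1
B1 = sp.Matrix([[0, 0], [1, 0]])         # subdiagonal generator, B1^2 = 0
beta0, beta1 = 1/x, 1/x**2
A = beta0*A0 + beta1*B1
# [A0, B1] = -2 B1, i.e. the eigenvalue lambda = -2
assert A0*B1 - B1*A0 == -2*B1
# g' = lambda*beta0*g + beta1 has the rational solution g = 1/x
g = 1/x
assert sp.simplify(sp.diff(g, x) - (-2*beta0*g + beta1)) == 0
P = sp.eye(2) + g*B1                     # exp(g*B1), since B1^2 = 0
R = P.inv() * (A*P - P.diff(x))          # the gauge transform P[A]
assert sp.simplify(R - beta0*A0) == sp.zeros(2, 2)   # B1-term removed
```

The resulting system is diagonal, hence in reduced form with the one-dimensional abelian Lie algebra spanned by $A_0$.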
We will show the power of this method and of the implementation by giving a new proof of non-integrability of the degenerate Hénon-Heiles system whose first two variational equations are abelian but which is not integrable.

A new proof of the non-integrability of a degenerate Hénon-Heiles system {#section: new proof}
========================================================================

In this section we consider the following Hénon-Heiles Hamiltonian [@Mo99a], [@MoRaSi07a], $$\label{HH} H:=\frac{1}{2}(p^2_1 + p^2_2) + \frac{1}{2}(q^2_1 + q^2_2) + \frac{1}{3}q^3_1 + \frac{1}{2}q_1 q^2_2$$ as given in [@Mo99a]. This Hamiltonian’s meromorphic non-integrability was proved in [@MoRaSi07a]. The Hamiltonian field is $$\dot{q}_1 = p_1 \,,\, \dot{q}_2 = p_2 \,,\, \dot{p}_1 = -q_1 (1+q_1) - \frac{1}{2}q^2_2 \,,\,\dot{p}_2 = -q_2 (1+ q_1 ).$$ This degenerate Hénon-Heiles system was an important test case which motivated [@MoRaSi07a]. Its non-integrability was reproved in [@MaSi09a] to showcase the method used by the authors. We follow in this tradition by giving yet another proof using our systematic method. Our reduction provides a kind of “normal form along $\phi$” in addition to a non-integrability proof. The readers wishing to reproduce the details of the calculations will find a Maple file at the URL http://www.unilim.fr/pages_perso/jacques-arthur.weil/charris/ It contains the commands needed to carry out the reduction of the $\mathrm{(LVE^{m}_{\phi})}$ for $m=1\ldots 3$. The reduction of $\mathrm{(LVE^{3}_{\phi})}$ may take several minutes to complete.

Reduction of $\mathrm{(VE^{1}_{\phi})}$
---------------------------------------

On the invariant manifold $\lbrace q_2 = 0 \,,\, p_2 =0 \rbrace$ we consider the non-punctual particular solution $$\phi(t) = \left(\,\frac{3}{2}\frac{1}{\cosh(t/2)^2} - 1 \,,\, 0\,,\, -\frac{3}{2}\frac{\sinh(t/2)}{\cosh(t/2)^3}\,,\,0\right)$$ and the base field is $\mathbf{k}=\mathbb{C}\langle \phi \rangle = \mathbb{C}(e^{t/2})$.
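Before changing variables, one can confirm symbolically that $\phi(t)$ really solves the restricted system $\dot q_1 = p_1$, $\dot p_1 = -q_1(1+q_1) - \frac{1}{2}q_2^2$ on the invariant manifold; a quick check assuming SymPy:

```python
import sympy as sp

t = sp.symbols('t')
# components of the particular solution phi(t) on {q2 = p2 = 0}
q1 = sp.Rational(3, 2)/sp.cosh(t/2)**2 - 1
p1 = -sp.Rational(3, 2)*sp.sinh(t/2)/sp.cosh(t/2)**3
q2 = p2 = sp.Integer(0)
# residuals of the equations of motion along phi(t)
eq1 = sp.diff(q1, t) - p1
eq2 = sp.diff(p1, t) + q1*(1 + q1) + q2**2/2
assert sp.simplify(eq1.rewrite(sp.exp)) == 0
assert sp.simplify(eq2.rewrite(sp.exp)) == 0
```

Rewriting the hyperbolic functions in terms of $e^{t/2}$ also makes the claim $\mathbb{C}\langle\phi\rangle = \mathbb{C}(e^{t/2})$ transparent.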
Performing the change of independent variable $x=\mathrm{e}^{t/2}$, we obtain an equivalent system with coefficients in $\mathbb{C}(x)$ given by $$A_1:=\left[\begin{array}{cccc} 0 & 0 & 2/x & 0 \\ 0 & 0 & 0 & 2/x\\ \frac{2(x^4 - 10 x^2 + 1 )}{x(x^2 + 1)^2} & 0 & 0 & 0\\ 0 & -\frac{12 x }{(x^2 + 1)^2 } & 0 & 0\end{array}\right].$$\ Applying the reduction algorithm from [@ApWea] we obtain the reduction matrix $$P_1:=\left[\begin{array}{cccc} -\frac{6(x-1)(x+1)x^2}{(x^2 + 1)^3} & 0 & -\frac{x^{10} + 15 x^8 - 16 x^6 - 144x^4+15x^2+1}{12x^2 (x^2 +1)^3} & 0\\ 0 &\frac{x^4-4x^2 + 1}{(x^2 + 1)^2} & 0 & -\frac{5 x^4 + 16 x^2 - 13}{3(x^2 + 1 )^2}\\ \frac{6 x^2 (x^4 - 4x^2 + 1)}{(x^2 + 1) ^4} & 0 & -\frac{x^{12} + 4 x^{10} + 121 x^8 + 256 x^6 - 249 x^4 - 4 x^2 - 1}{12 x^2 (x^2 + 1) ^4} & 0 \\ 0 & \frac{6(x^2 - 1)x^2}{(x^2 + 1)^3} & 0 & \frac{x^6 - x^4 - 17 x^2 + 1}{(x^2 + 1)^3} \end{array}\right]$$ that yields the reduced form $$A_{1,R}=\frac{5}{3 x}\left[\begin{array}{cccc}0 &0 & 1 & 0\\ 0 & 0 & 0& 6/5 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right].$$ We see that $\mathrm{dim}_{\mathbb{C}}\left(Lie(A_{1,R})\right) =1$ and since $\frac{5}{3 x}$ has one single pole, we cannot further reduce without extending the base field $\mathbf{k}$. We find, $$\mathfrak{g}_1 = \mathrm{span}_{\mathbb{C}}\left\lbrace \tilde{D}_1 := \tiny\left[\begin{array}{cccc} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 6/5 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right]\right\rbrace$$ which is trivially abelian and therefore doesn’t give any obstruction to integrability. Reduction of $\mathrm{(LVE^{2}_{\phi})}$ ---------------------------------------- We want now to put the matrix $A_2$ of $\mathrm{(LVE^{2}_{\phi})}$ into a reduced form. 
First we reduce the diagonal blocks as indicated in section \[subsection: first partial reduction\] using the partial reduction matrix $Q_2:=\tiny\left[\begin{array}{cccc}Sym^2 P_1 & 0 \\ 0 & P_1\end{array}\right]$ so that we obtain a partially reduced matrix (its diagonal blocks are reduced whereas its subdiagonal block is not): $$Q_2[A_2]:=\left[\begin{array}{cccc} sym^{2} A_{1,R} & 0 \\ \tilde{B}_2 & A_{1,R}\end{array}\right] \text{ with } \left\lbrace\begin{array}{ccc} Q_2[A_2]_{diag} & = & \tiny\left[\begin{array}{cc} sym^2 A_{1,R} & 0 \\ 0 & A_{1,R}\end{array}\right]\\ Q_2[A_2]_{sub} & = & \tiny\left[\begin{array}{cc} 0 & 0 \\ \tilde{B}_2 & 0\end{array}\right]\end{array}\right\rbrace$$ We compute a Wei-Norman decomposition and we obtain an associated Lie algebra $Lie(Q_2[A_2])$ of dimension $11$ such that: - On one hand we obtain $Lie(Q_2[A_2])_{diag} = \mathrm{span}_{\mathbb{C}}\left\lbrace D_{2,0}:=\left[\begin{array}{cc} sym^2\tilde{D}_1 & 0 \\ 0 & \tilde{D}_1 \end{array}\right]\right\rbrace$ with coefficient $\beta_0:= \frac{5}{3 x}$. 
- On the other hand, $Lie(Q_2[A_2])_{sub} = \mathrm{span}_{\mathbb{C}}(\mathcal{B}_2)$ where $$\mathcal{B}_2 := \lbrace B_{2,i} := {\tiny\left[\begin{array}{cc} 0 & 0 \\ \tilde{B}_{2,i} & 0\end{array}\right]} , i=1\ldots 10\rbrace\text{ and }Q_2[A_2]_{sub} = \sum^{10}_{i=1} \beta_{2,i} B_{2,i}\text{ with }\beta_{2,i} \in \mathbf{k}.$$ The matrix of the map $$\Psi_{2,0}\,:\, Lie(Q_2[A_2])_{sub}\,\longrightarrow\, Lie(Q_2[A_2])_{sub} \,,\, B_{2,j} \,\mapsto\, [D_{2,0}\,,\, B_{2,j}]$$ expressed in the basis $\mathcal{B}_2$ takes the following form: $$\Psi_{2,0}:=\tiny \left[ \begin {array}{cccccccccc} 0&0&0&0&0&0&0&0&1&0 \\\noalign{\medskip}-2&0&0&0&0&0&0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0 &0&0&0&1\\\noalign{\medskip}0&0&-6/5&0&0&0&-1&0&0&0 \\\noalign{\medskip}0&-3&0&0&0&0&0&0&0&0\\\noalign{\medskip}0&0&0&-{ \frac {12}{5}}&0&0&0&-1&0&0\\\noalign{\medskip}0&0&0&0&0&0&0&0&0&6/5 \\\noalign{\medskip}0&0&0&0&0&0&-{\frac {12}{5}}&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0 &0&0&0&0\end {array} \right].$$ We denote by $J_{\Psi_{2,0}}$ the matrix of $\Psi_{2,0}$ expressed in its Jordan basis, given by the matrices $C_{2,i} =\tiny\left[\begin{array}{cc} 0 & 0 \\ \tilde{C}_{2,i} & 0 \end{array}\right]$ and their coefficients $\gamma_{2,i}$ with $i=1\ldots 10$. The Jordan form is $$J_{\Psi_{2,0}}=\tiny \left[ \begin {array}{cccccccccc} 0&1&0&0&0&0&0&0&0&0 \\\noalign{\medskip}0&0&1&0&0&0&0&0&0&0\\\noalign{\medskip}0&0&0&1&0&0 &0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0&0&0&0&0\\\noalign{\medskip}0&0 &0&0&0&1&0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0&1&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&1&0&0\\\noalign{\medskip}0&0&0&0&0&0 &0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0&0&0&0&1\\\noalign{\medskip}0&0 &0&0&0&0&0&0&0&0\end {array} \right].$$ To perform the reduction we will use the Jordan basis $\mathcal{C}_2 := \lbrace C_{2,i} \,,\, i=1\ldots 10\rbrace$. 
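The Jordan structure of $\Psi_{2,0}$ can be verified mechanically from its matrix above: the matrix has rank $7$, so there are $10-7=3$ Jordan blocks, and the ranks of its successive powers force the block dimensions $4$, $4$ and $2$. A short sympy sketch, transcribing the nonzero entries column by column:

```python
import sympy as sp

r = sp.Rational
Psi = sp.zeros(10, 10)
# Column j of the matrix of Psi_{2,0} is the image of B_{2,j} (0-indexed here).
Psi[1, 0] = -2                            # Psi(B1) = -2 B2
Psi[4, 1] = -3                            # Psi(B2) = -3 B5
Psi[3, 2] = r(-6, 5)                      # Psi(B3) = -6/5 B4
Psi[5, 3] = r(-12, 5)                     # Psi(B4) = -12/5 B6
Psi[3, 6] = -1; Psi[7, 6] = r(-12, 5)     # Psi(B7) = -B4 - 12/5 B8
Psi[5, 7] = -1                            # Psi(B8) = -B6
Psi[0, 8] = 1                             # Psi(B9) = B1
Psi[2, 9] = 1; Psi[6, 9] = r(6, 5)        # Psi(B10) = B3 + 6/5 B7

# rank(Psi^k) = sum over blocks of max(size - k, 0); here [7, 4, 2, 0]
ranks = [(Psi**k).rank() for k in range(1, 5)]
```

The rank sequence $7, 4, 2, 0$ is exactly the one produced by two blocks of dimension $4$ and one of dimension $2$, confirming the decomposition used below.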
The decomposition given by the Jordan basis $\mathcal{C}_2$ is $Q_2[A_2]= \beta_0 D_{2,0} + \sum^{10}_{i=1} \gamma_{2,i} C_{2,i}$ with $\gamma_{2,i} \in\mathbf{k}$, $i=1\ldots10$. We notice that $J_{\Psi_{2,0}}$ is made of three Jordan blocks - two blocks of dimension $4$: $$\lbrace C_{2,4}\,,\,C_{2,3}\,,\,C_{2,2}\,,\,C_{2,1} \rbrace$$ and $$\lbrace C_{2,8}\,,\,C_{2,7}\,,\,C_{2,6}\,,\,C_{2,5} \rbrace$$ - and one block of dimension $2$: $\lbrace C_{2,10}\,,\,C_{2,9} \rbrace$ The hypotheses of the first part of Proposition \[partial reduction\] are satisfied. Therefore the partial reduction of $Q_2[A_2]$ is done in the following way: - Choose a Jordan block of dimension $d$: $\lbrace C_{2,i}\,\ldots\, C_{2,i+d-1}\rbrace$. It satisfies $\Psi_{2,0}(C_{2,i+s}) = C_{2,i+s-1}$ for $s=1\ldots d-1$. Set $\tilde{A}_{2}:=Q_2[A_2]$ and set $s:=d-1$. - For $s$ from $d-1$ down to $1$, compute the decomposition $\gamma_{2,i+s}= R'_{2,i+s} + L_{2,i+s}$ where $R_{2,i+s}\,,\, L_{2,i+s}\in\mathbf{k}$ and $L_{2,i+s}$ has only simple poles.\ Take the change of variable $P_{2,i+s}=Id + R_{2,i+s} C_{2,i+s}$ and perform the gauge transformation $P_{2,i+s}[\tilde{A}_{2}]$.\ If $L_{2,i+s}=0$ then the Wei-Norman decomposition of $P_{2,i+s}[\tilde{A}_{2}]$ does not contain $C_{2,i+s}$, so $C_{2,i+s}\notin\mathfrak{g}_2$.\ Set $\tilde{A}_{2} := P_{2,i+s}[\tilde{A}_{2}]$ and set $s:=s-1$. Repeat this procedure recursively until $s=1$. - Choose a Jordan block that has not been treated. Repeat until there are no more Jordan blocks left untreated. In this way, the only $C_{2,i}$ left in the subdiagonal block are those whose coefficients $L_{2,i}$ (after the procedure) contain only simple poles. 
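The decomposition $\gamma = R' + L$ used at each step isolates the part of a coefficient that can be integrated inside $\mathbf{k}=\mathbb{C}(x)$ (the higher-order poles) from the simple-pole part, which obstructs elimination. A minimal sympy sketch on a hypothetical coefficient of our own choosing (not one occurring in the paper):

```python
import sympy as sp

x = sp.symbols('x')
# Hypothetical coefficient: a double pole at 0 plus simple poles at 0 and 1.
gamma = 1/x**2 + 3/x + 2/(x - 1)

# L keeps only the simple-pole part; the remainder has a rational
# antiderivative R in k = C(x), i.e. integrating it produces no logarithms.
L = 3/x + 2/(x - 1)
R = sp.integrate(gamma - L, x)   # the antiderivative of 1/x^2
```

A change of variable built from $R$ then removes the corresponding generator, and only the simple-pole part $L$ survives as a potential obstruction.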
In our case, we obtain a reduced matrix for $\mathrm{(LVE^{2}_{\phi})}$: $A_{2,R}:=\frac{1}{x} \tilde{C}_0$ and $$\tilde{C}_0:=\tiny\left[ \begin {array}{cccccccccccccc} 0&0&\frac53\, &0&0&0&0&0&0&0 &0&0&0&0\\\noalign{\medskip}0&0&0&2\, &0&\frac53\, &0&0&0&0&0 &0&0&0\\\noalign{\medskip}0&0&0&0&0&0&0&\frac{10}{3}\, &0&0&0&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&0&\frac53\, &0&0&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&2\, &0&0&0&0&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&0&2\, &0&0&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&0&0&4\, &0&0&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&0&0&0&0&0&0&0\\\noalign{\medskip}0&0 &0&0&0&0&0&0&0&0&0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0&0&0&0&0&0&0&0&0 \\\noalign{\medskip}0&0&-\frac{10}{3}\, &0&0&0&2\, &\frac{95}{18} &0&-\frac{20}{3}\, &0&0&\frac53\, &0 \\\noalign{\medskip}0&0&0&0&0&2\, &0&0&-\frac{20}{3}&0&0&0&0&2\, \\\noalign{\medskip}0&0&0&0&0&0&0&\frac{10}{3}\, &0&0&0&0&0&0\\\noalign{\medskip}0&0&0&0&0&0&0&0&-2\, &0&0&0&0&0 \end {array} \right]$$ As in the case of $A_{1,R}$, this matrix $A_{2,R}$ is in a reduced form because $Lie(A_{2,R})$ is monogenous and $\frac{1}{x}$ only has simple poles. Therefore $Lie(A_{2,R}) =\mathfrak{g}_2$ and $\mathfrak{g}_2$ is once more abelian bringing in no obstruction to integrability. We have then to look at $\mathrm{(LVE^{3}_{\phi})}$. Reduction of $\mathrm{(LVE^{3}_{\phi})}$ ---------------------------------------- We denote $P_2$ the reduction matrix of $A_2$. Once more we build a partial reduction matrix $Q_3 := \tiny\left[\begin{array}{cc} Sym^3 P_1 & 0 \\ 0 & P_2\end{array}\right]$ that puts the diagonal blocks of matrix $A_3$ into a reduced form and we obtain the partially reduced matrix $Q_3[A_3] := \tiny\left[\begin{array}{cc} sym^3 A_{1,R} & 0 \\ \tilde{B}_3 & A_{2,R}\end{array}\right]$. In this case we have a Wei-Norman decomposition of $Q_3[A_3]$ of dimension $18$, and $\mathrm{dim}_{\mathbb{C}}(Lie(Q_3[A_3]))=38$. 
We thus have - $\mathrm{dim}_{\mathbb{C}}(Lie(Q_3[A_3])_{diag})=1$ where $$Lie(Q_3[A_3])_{diag}=\mathrm{span}_{\mathbb{C}} \lbrace D_{3,0}:=\tiny\left[\begin{array}{cc} Sym^3 \tilde{D}_1 & 0 \\ 0 & \tilde{C}_{0}\end{array}\right]\rbrace$$ - and $\mathrm{dim}_{\mathbb{C}}(Lie(Q_3[A_3])_{sub})=37$ and $Lie(Q_3[A_3])_{sub} =\mathrm{span}_{\mathbb{C}}(\mathcal{B}_3)$ with $$\mathcal{B}_3=\tiny\lbrace B_{3,i}=\left[\begin{array}{cc} 0 & 0 \\ \tilde{B}_{3,i} & 0\end{array}\right] \,,\,{\tiny i=1\ldots 37} \rbrace$$ a basis of generators of $Lie(Q_3[A_3])_{sub}$. We define $\Psi_{3,0} \, : \, \mathfrak{h}_{3,sub}\,\longrightarrow \, \mathfrak{h}_{3,sub}\,,\, B \, \mapsto \, [D_{3,0} \,,\, B]$. It is nilpotent and its Jordan basis will satisfy the conditions of the first part of Proposition \[partial reduction\]. In the Jordan basis $\mathcal{C}_{3}:=\lbrace C_{3,i}\,,\, i=1\ldots 37\rbrace$, the Jordan form $J_{\Psi_{3,0}}$ is formed by the following Jordan blocks: 1. one Jordan block of dimension $5$ and two of dimension $6$: $\lbrace C_{3,5},\ldots , C_{3,1}\rbrace,$ $\lbrace C_{3,11},\ldots , C_{3,6}\rbrace,$ $\lbrace C_{3,17},\ldots , C_{3,12}\rbrace$ 2. one Jordan block of dimension $4$ and two of dimension $5$: $\lbrace C_{3,18},\ldots , C_{3,21}\rbrace$ , $\lbrace C_{3,22},\ldots , C_{3,26}\rbrace$ and $\lbrace C_{3,31},\ldots , C_{3,27}\rbrace,$ 3. and two Jordan blocks of dimension $3$: $$\lbrace C_{3,34},\ldots , C_{3,32}\rbrace \text{ and }\lbrace C_{3,37},\ldots , C_{3,35}\rbrace.$$ In the basis $\mathcal{C}_3$, a Wei-Norman decomposition is $$Q_3[A_3]= \beta_{0} D_{3,0} + \sum^{37}_{i=1} \gamma_{3,i} C_{3,i}.$$ We proceed blockwise as in the case of the second variational equation. This time, possible obstructions to integrability appear when handling the Jordan block $\lbrace C_{3,31}\,,\ldots \,,\, C_{3,27}\rbrace$. 
Decomposing $\gamma_{3,i}=R'_{3,i} + L_{3,i}$ (for $i=27\ldots 31$), we see that in particular $L_{3,30}$ and $L_{3,29}$ are nonzero (and have “new poles”, i.e. poles other than the pole at zero of the coefficient of the reduced form of $\mathrm{(LVE^{2}_{\phi})}$); we therefore suspect that $C_{3,29}$, $C_{3,30}$ (or some linear combination) lie in $\mathfrak{g}_{3}$. Since neither $C_{3,30}$ nor $C_{3,29}$ commutes with $D_{3,0}$, this suggests that $\mathfrak{g}_{3}$ is not abelian and therefore, intuitively, that the Hamiltonian (\[HH\]) is not integrable. We prove this rigorously in the following subsection. Proof of non-integrability -------------------------- After performing the partial reduction recursively for all blocks, we obtain the matrix $\tilde{A}_{3,R}$. It has a Wei-Norman decomposition $\tilde{A}_{3,R} =a_1 M_{3,1} + a_2 M_{3,2}$ where $M_{3,1}, M_{3,2}\in\mathcal{M}_{34}(\mathbb{C})$, $a_1 :=\frac{1}{x}$, $a_2:=\frac{x}{x^2 +1}$. The matrix $M_{3,1}$ is lower block triangular and $M_{3,2}\in Lie(\tilde{A}_{3,R})_{sub}$. We let $M_{3,3}:=[M_{3,1}\,,\, M_{3,2}]$, $M_{3,4}:=[M_{3,1}\,,\, M_{3,3}]$, $M_{3,5}:=[M_{3,1}\,,\, M_{3,4}]$ and check that $[M_{3,i}\,,\, M_{3,j}]=0$ otherwise. So $Lie(\tilde{A}_{3,R})$ has dimension $5$ and is generated by the $M_{3,i}$. Note that $M_{3,i}\in \mathcal{M}_{34,sub}(\mathbb{C})$ for $i\geq 2$. Again we let $$\Psi\,:\, Lie(\tilde{A}_{3,R})\,\longrightarrow\, Lie(\tilde{A}_{3,R})\quad,\quad M\mapsto [M_{3,1}\,,\, M].$$ By construction, the matrix of $\Psi$ is $\tiny\left[\begin{array}{ccccc} 0& 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 &0\end{array}\right]$. $\tilde{A}_{3,R}$ is a reduced form for $\mathrm{(LVE^3_{\phi})}$ and $\mathfrak{g}_{3}$ is not abelian, so the degenerate Hénon-Heiles Hamiltonian (\[HH\]) is not meromorphically integrable. We know that $Lie(\tilde{A}_{3,R})$ is non-abelian, so we just need to prove that $\tilde{A}_{3,R}$ is a reduced form. 
To achieve this we will construct a Picard-Vessiot extension $K_{3}$, still using our “reduction” philosophy, and prove that it has transcendence degree $5$: as $\mathfrak{g}_3\subset Lie(\tilde{A}_{3,R})$ and $\mathrm{dim}_{\mathbb{C}}(Lie(\tilde{A}_{3,R})) = 5$, this will show that $\mathfrak{g}_{3}=Lie(\tilde{A}_{3,R})$ because $\mathrm{dim}_{\mathbb{C}}(\mathfrak{g}_{3})=\mathrm{dtr}(K_3/ \mathbf{k})$ (see [@PuSi03a] Chap. 1). We apply proposition \[partial reduction\] to $\tilde{A}_{3,R}$. Apply the partial reduction $P_{1} = Id + \left(\int a_2\right) M_{3,2} = Id+\frac{1}{2}\ln(1+x^2)\, M_{3,2}$: $P_1[\tilde{A}_{3,R}]$ contains no terms in $M_{3,2}$ and $ P_{1}[\tilde{A}_{3,R}] = a_1 M_{3,1} + \left(a_1 \int a_2\right) M_{3,3} $; we call $I_2 =\int ( a_1 \int a_2) = Li_{2}(x^2)$ where $Li_2$ denotes the classical dilogarithm (see e.g. [@Ca02a]). Similarly we obtain $I_3$ and $I_4$ as coefficients of successive changes of variable. We are left with a system $Y'=a_1 M Y$, and the Picard-Vessiot extension is $$K_3= \mathbb{C}(x)(\ln(x)\,,\, \ln(1+x^2)\,,\, Li_{2}(x^2)\,,\, Li_{3}(x^2)\,,\, Li_{4}(x^2))$$ It is known to specialists that $\mathrm{dtr}(K_3/\mathbf{k})=5$ (and reproved for convenience below). A self-contained proof of $\mathrm{dtr}(K_3/\mathbf{k}) =5$ {#section: appendix} ----------------------------------------------------------- To remain self-contained we propose a differential Galois theory proof of the following classical fact (see [@Ca02a] for instance). The proof is simple and beautifully consistent with our approach. To simplify the notations, we write the proof in the case of the classical iterated dilogarithms $Li_{j}(-x)$ but, of course, it applies mutatis mutandis to our case of $Li_{j}(x^2)$. 
\[appendix\] Let $K_3= \mathbb{C}(x)(\ln(x)\,,\, -\ln(1-x)\,,\, Li_{2}(-x)\,,\, Li_{3}(-x)\,,\, Li_{4}(-x))$; then $\mathrm{dtr}(K_3/\mathbf{k}) =5$. Let us prove that the functions $$x\,,\,\ln(x)\,,\, -\ln(1-x)\,,\, Li_{2}(-x)\,,\, Li_{3}(-x)\,,\, Li_{4}(-x)$$ are algebraically independent using a differential Galois theory argument. That $\ln(x)$ and $-\ln(1-x)$ are transcendental and algebraically independent over $\mathbb{C}(x)$ is a classical and easy fact. We focus on proving the transcendence and algebraic independence of $Li_{2}(-x)\,,\, Li_{3}(-x)$ and $Li_{4}(-x)$. Set the following relations, $$Li_0(-x) := \frac{x}{1-x},\quad Li_{1}(-x) := -\ln(1-x),\quad Li_{2}(-x):=\int\frac{Li_{1}(-x)}{x} dx ,$$ $$Li_{3}(-x):=\int\frac{Li_{2}(-x)}{x} dx ,\quad Li_{4}(-x):=\int\frac{Li_{3}(-x)}{x} dx$$ and therefore $K_3 = \mathbb{C}(x)(\ln(x)\,,\,Li_{0}(-x),\ldots ,Li_{4}(-x))$ is a differential field (with $Li'_{i}(-x) = \frac{Li_{i-1}(-x)}{x}$). Of course, $\mathrm{dtr}(K_3/\mathbf{k})\leq 5$. Let us define $$V := \mathrm{span}_{\mathbb{C}}\left\lbrace 1\,,\,\ln(x)\,,\, \frac{\ln(x)^2}{2}\,,\, \frac{\ln(x)^3}{6}\,,\, Li_1(-x)\,,\, Li_2 (-x)\,,\, Li_3 (-x)\,,\, Li_4 (-x)\right\rbrace$$ and consider an element $\sigma \in Gal(K_3/\mathbf{k})$. As $\sigma(\ln'(x)) = \sigma(\frac{1}{x}) = \frac{1}{x}=\ln'(x) $ there exists a constant $c_0\in\mathbb{C}$ such that $\sigma(\ln(x)) = \ln(x) + c_0$. Similarly, we obtain that $\sigma(\ln(x)^2 /2) = \ln(x)^2 / 2 + c_0\ln(x)+c^2_0/2$ and $\sigma(\ln(x)^3 /6) = \ln(x)^3 / 6 + c_0\ln(x)^2 /2 + c^2_0\ln(x)/2 + c^3_0/6$. Since $Li'_{1}(-x) =\frac{1}{1-x}\in\mathbf{k}$ we have that $\sigma(Li'_{1}(-x)) = Li'_{1}(-x)$ and therefore there exists $c_1\in\mathbb{C}$ such that $\sigma(Li_{1}(-x)) =Li_{1}(-x) + c_1$. As $Li'_{2}(-x) =\frac{Li_{1}(-x)}{x}$ we have that $\sigma(Li'_{2}(-x) ) =\sigma(\frac{Li_{1}(-x)}{x}) =\frac{Li_{1}(-x) }{x} + \frac{c_1}{x}$ and there exists $c_2\in\mathbb{C}$ such that $\sigma(Li_{2}(-x))=Li_{2}(-x) + c_1 \ln(x) +c_2$. 
We prove similarly the existence of $ c_3 , c_4 \in\mathbb{C}$ such that $$\begin{aligned} \nonumber\sigma(Li_{3}(-x))&=&Li_{3}(-x) +c_1 \frac{\ln(x)^2}{2} + c_2 \ln(x) + c_3\\ \nonumber\sigma(Li_{4}(-x)) &=&Li_{4}(-x) + c_1\frac{\ln(x)^3}{6} + c_2 \frac{\ln(x)^2}{2} + c_3 \ln(x) + c_4.\end{aligned}$$ We see that $V$ is stable under the action of $Gal(K_3/\mathbf{k})$ and hence is the solution space of a differential operator $L\in\mathbf{k}[\frac{d}{dx}]$ of order $8$. Therefore, in this basis, the matrix of the action of $\sigma$ on $V$ is: $$M_{\sigma}:=\tiny\left[\begin{array}{cccccccc} 1 & c_0 & c^2_0 /2 & c^3_0/6 & c_1 & c_2 & c_3 & c_4 \\ 0 & 1 & c_0 & c^2_0/2 & 0 & c_1 & c_2 & c_3\\ 0 & 0 & 1 & c_0 & 0 & 0 & c_1 & c_2\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & c_1\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\end{array}\right]$$ As $\ln(x)$ and $\ln(1-x)$ are transcendental (and algebraically independent), the constants $c_0$ and $c_1$ take arbitrary independent values in $\mathbb{C}$. It follows that $\mathfrak{g}_3$ contains at least $$m_0 :={\tiny \left[ \begin{array}{cccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]} \quad \text{and}\quad m_1 :={\tiny \left[ \begin{array}{cccccccc} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 &1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]}.$$ Since $m_0$ and $m_1$ do not commute, we know that the Lie algebra generated by the iterated Lie brackets has dimension at least $3$. Iterating the brackets of $m_0$ and $m_1$ we obtain a subalgebra of $\mathfrak{g}_3$ of dimension $5$. 
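The bracket computations behind this last claim can be verified mechanically; the following sympy sketch rebuilds $m_0$ and $m_1$, forms the iterated brackets and checks that they close on a five-dimensional, non-abelian Lie algebra:

```python
import sympy as sp

def E(i, j, n=8):
    """Elementary 8x8 matrix with a single 1 in row i, column j (1-indexed)."""
    M = sp.zeros(n, n)
    M[i - 1, j - 1] = 1
    return M

m0 = E(1, 2) + E(2, 3) + E(3, 4)                  # nilpotent "shift" block
m1 = E(1, 5) + E(2, 6) + E(3, 7) + E(4, 8)
bracket = lambda a, b: a * b - b * a

m2 = bracket(m0, m1)
m3 = bracket(m0, m2)
m4 = bracket(m0, m3)
gens = [m0, m1, m2, m3, m4]

# Dimension of the span: rank of the matrix whose rows are flattened generators.
dim = sp.Matrix([list(g) for g in gens]).rank()

# All remaining brackets vanish, so the algebra closes on these five generators.
closed = all(bracket(a, b) == sp.zeros(8, 8) for a in gens[1:] for b in gens[1:]) \
         and bracket(m0, m4) == sp.zeros(8, 8)
```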
Therefore we have $\mathrm{dtr}(K_{3}/ \mathbf{k}) \geq 5$ and since we know that $\mathrm{dtr}(K_{3}/ \mathbf{k})\leq 5$ we obtain the equality and the result follows. Horozov and Stoyanova [@HS07] make use of the properties of the dilogarithm in order to prove the non-integrability of some subfamilies of Painlevé VI equations: namely, they prove the non-abelianity of $\mathfrak{g}_2$, the Lie algebra of its second variational equation. Conclusion ========== The reduction method proposed here is systematic (and we have implemented it in Maple). Although it is currently limited to the case when $Lie(\mathrm{(VE^1_{\phi})})$ is one-dimensional, extensions to higher dimensional cases along the same guidelines are in progress and will appear in subsequent work. In work in progress with S. Simon, we will show another use of reduced forms, namely that the expressions of Taylor expansions of first integrals along $\phi$ are then greatly simplified.\ We conjecture that our method is not only a partial reduction procedure but a complete reduction algorithm: assuming that $\mathrm{(LVE^m_{\phi_0})}$ is reduced (with an abelian Lie algebra), we believe that the output $\tilde{A}_{m+1,R}$ of our reduction procedure of sections 4 and 5 will always be a reduced form. In the context of complex Hamiltonian systems, this would mean that our method would lead to an effective version of the Morales-Ramis-Simó theorem. [A]{} Ainhoa Aparicio and Jacques-Arthur Weil, *A reduced form for linear differential systems and its application to integrability of Hamiltonian systems*, (arXiv:0912.3538). D. Blazquez, J. J. Morales-Ruiz, *Differential Galois theory of algebraic Lie-Vessiot systems*. Differential algebra, complex analysis and orthogonal polynomials, 1–58, Contemp. Math., 509, Amer. Math. Soc., Providence, RI, 2010. Pierre Cartier, *Fonctions polylogarithmes, nombres polyzêtas et groupes pro-unipotents*, Astérisque (2002), no. 282, Exp. No. 
885, viii, 137–173, Séminaire Bourbaki, Vol. 2000/2001. Guy Casale, Guillaume Duval, Andrzej J. Maciejewski, and Maria Przybylska, *Integrability of Hamiltonian systems with homogeneous potentials of degree zero*, Phys. Lett. A **374** (2010), no. 3, 448–452. William Fulton and Joe Harris, *Representation theory, A first course*, Graduate Texts in Mathematics, vol. 129, Springer-Verlag, New York, 1991. E. Horozov, T. Stoyanova, *Non-integrability of some Painlevé VI-equations and dilogarithms*, Regular and Chaotic Dynamics, 12 (2007), 622–629. J. Kovacic, *On the inverse problem in the Galois theory of differential fields. II.*, Ann. of Math. (2) **93** (1971), 269–284. Juan J. Morales Ruiz, *Differential Galois theory and non-integrability of Hamiltonian systems*, Progress in Mathematics, vol. 179, Birkhäuser Verlag, Basel, 1999. Juan J. Morales-Ruiz, Jean-Pierre Ramis, and Carles Simó, *Integrability of Hamiltonian systems and differential Galois groups of higher variational equations*, Ann. Sci. École Norm. Sup. (4) **40** (2007), no. 6, 845–884. C. Mitschi and M. F. Singer, *Connected linear groups as differential Galois groups*, J. Algebra **184** (1996), no. 1, 333–361. Claude Mitschi and Michael F. Singer, *The inverse problem in differential Galois theory*, The Stokes phenomenon and Hilbert’s 16th problem (Groningen, 1995), World Sci. Publ., River Edge, NJ, 1996, pp. 185–196. R. Martínez and C. Simó, *Non-integrability of Hamiltonian systems through high order variational equations: summary of results and examples*, Regul. Chaotic Dyn. **14** (2009), no. 3, 323–348. Marius van der Put and Michael F. Singer, *Galois theory of linear differential equations*, Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], vol. 328, Springer-Verlag, Berlin, 2003. James Wei and Edward Norman, *Lie algebraic solution of linear differential equations*, J. 
Mathematical Phys. **4** (1963), 575–581. [^1]: The first author was supported by a grant from the Région Limousin (France). [^2]: A field $k$ is called a $C_1$-field (or cohomologically trivial) if any homogeneous polynomial $P\in k[X_1,\ldots,X_n]_{=d}$ of degree $d$ has a non-trivial zero in $k^n$ when $n>d$, i.e. when the number of variables is bigger than the degree. All differential fields of coefficients considered in this article belong to the $C_1$ class.
--- abstract: 'Shape tracking of medical devices using strain sensing properties in optical fibers has seen increased attention in recent years. In this paper, we propose a novel guidance system for intra-arterial procedures using a distributed strain sensing device based on optical frequency domain reflectometry (OFDR) to track the shape of a catheter. Tracking enhancement is provided by exposing a fiber triplet to a focused ultraviolet beam, producing high scattering properties. Contrary to typical quasi-distributed strain sensors, we propose a truly distributed strain sensing approach, which allows the shape of a fiber triplet to be reconstructed in real-time. A 3D roadmap of the hepatic anatomy integrated with a 4D MR imaging sequence allows the catheter to be navigated within the pre-interventional anatomy, and maps the blood flow velocities in the arterial tree. We employed Riemannian anisotropic heat kernels to map the sensed data to the pre-interventional model. Experiments in synthetic phantoms and an $in$ $vivo$ model are presented. Results show that the tracking accuracy is suitable for interventional tracking applications, with a mean 3D shape reconstruction error of $1.6 \pm 0.3$ mm. This study demonstrates the promising potential of MR-compatible UV-exposed OFDR optical fibers for non-ionizing device guidance in intra-arterial procedures.' author: - Francois Parent - Maxime Gerard - Raman Kashyap - Samuel Kadoury bibliography: - 'sample.bib' title: 'UV Exposed Optical Fibers with Frequency Domain Reflectometry for Device Tracking in Intra-Arterial Procedures' --- Introduction ============ Intra-arterial therapies, such as trans-arterial chemoembolization (TACE), are now the preferred therapeutic approach for advanced hepatocellular carcinomas (HCCs). 
However, real-time localisation of the catheter inside the patient’s vascular network, an important step during embolizations, remains challenging, especially in tortuous vessels and narrow bifurcations. Traditional tracking approaches present a number of limitations for TACE, including line-of-sight requirements when tracking flexible tools with infrared cameras, while workflow hindrances or metallic interferences are linked with electromagnetic (EM) tracking. Therefore alternative technologies have attempted to address these issues. A recent example uses bioimpedance models with integrated electrodes [@fuerst2016bioelectric], which infer the internal geometry of the vessel, mapped to a pre-interventional model, but are limited to the catheter tip. Optical shape sensing (OSS) is another technology measuring light deflections guided into optical fibers in order to measure strain changes in real-time, thereby inferring the 3D shape of the fiber by means of an integrative approach. Fiber Bragg grating (FBG) sensors can be integrated into submillimeter size tools, with no electromagnetic interference. Medical devices have incorporated FBGs in biopsy needles [@Park10], catheters and other minimally invasive tools for shape detection and force sensing capabilities [@Elayaperumal14; @Roesthuis14]. However, FBGs only provide discrete measurements, are costly to fabricate and reduce the flexibility of highly bendable tools. Optical frequency domain reflectometry (OFDR) is an alternative interferometric method with truly distributed sensing capabilities, frequently used to measure the attenuation along fibers. Duncan et al. compared the FBG and OFDR strain sensing approaches for optical fibers, showing an accuracy improvement with OFDR [@duncan2007high]. An array of 110 equally spaced FBGs was used, yielding an accuracy of 1.9mm, in comparison to a 3D shape reconstruction accuracy of 0.3mm using OFDR. Loranger et al. 
also showed that Rayleigh scattering, which is the basis of strain measurements using OFDR, can be considerably enhanced by exposing fibers to a UV beam, leading to an increase in backscattered signal by a factor of 6300 [@loranger2014rayleigh]. In this paper, we present a new paradigm in catheter tracking using high scattering of a UV exposed fiber triplet inserted within a double-lumen catheter to perform real-time navigation in the hepatic arteries (Fig. \[fig:Workflow\]). A custom-made benchwork was first used to assemble three fibers in an equidistant geometry. In the proposed system, OFDR is based on Rayleigh scattering, which is caused by a random distribution of the refractive index on a microscopic scale in the fiber core of UV-doped optical fibers. The 3D shape of the fiber triplet was reconstructed according to the strain values measured by OFDR, and its accuracy was evaluated both $in$ $vitro$ and $in$ $vivo$ to determine the catheter’s tracking capabilities. In order to navigate the catheter within a patient’s arterial tree, a 3D roadmap is automatically extracted from a 4D-flow MR imaging sequence, providing both anatomical and physiological information used for guidance in super-selective TACE procedures. Mapping between the sensed catheter shape and the anatomy is achieved using anisotropic heat kernels for intrinsic matching of curvature features. Rayleigh scattering processing has been proposed to obtain temperature measurements [@song2014long] and estimate strain properties [@loranger2014rayleigh] but, to our knowledge, has not been applied to interventional navigation. The relative ordering of curvature features (e.g. bifurcations) of the pre-operative models with the sensed strain values is not affected when using dense intrinsic correspondences. 
![In OFDR navigation, strain measurements from UV exposed fibers with Rayleigh scattering are processed to 3D coordinates, which are mapped in real-time to an arterial tree model from a pre-interventional MR angiography.[]{data-label="fig:Workflow"}](Workflow){height="0.75in"} Materials and Methods {#sec:methods} ===================== Fabrication of UV enhanced optical fibers ----------------------------------------- The proposed catheter is composed of three hydrogen-loaded SMF-28 optical fibers (each with a 125$\mu$m diameter), exposed to a focused UV beam (UVE-SMF-28). In our system, three fibers are glued together in a triangular geometry set apart by 120$^{\circ}$ (Fig. \[fig:Geometry\]), using UV curing glue. Once the fibers are glued together, the outer diameter is approximately 260$\mu$m. The reusable and sterilizable fiber triplet was incorporated into a 0.67-mm-inner-diameter catheter (5-French Polyamide catheter, Cook, Bloomington, IN). 3D shape tracking using OFDR ---------------------------- The shape of the catheter is tracked using an OFDR method, which uses a frequency-swept laser to interrogate the three fibers under test (FUT) successively. The backscatter signal of each FUT is then detected and analyzed in the frequency domain. By using interferometric measurements, the strain along the fibers can be retrieved. A Fast Fourier Transform (FFT) is performed to evaluate the intensity of the backscatter signal as a function of the position along the fiber under test. Small-scale sections (corresponding to the spatial resolution $\Delta x$ of the strain sensor) of this signal are selected by an inverse FFT to evaluate the frequency response of each specific section. By comparing the frequency response of the fiber under strain with that of the unstrained fiber, the local strain can be determined. To do so, a cross-correlation of the strained and unstrained spectra is performed. 
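As a toy illustration of this cross-correlation step (not the actual OBR processing chain), the spectral drift of a synthetic Rayleigh-like spectrum can be recovered from the peak of the circular cross-correlation computed by FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
reference = rng.standard_normal(n)      # backscatter spectrum of an unstrained section
true_drift = 7                          # strain-induced spectral shift, in bins
strained = np.roll(reference, true_drift)

# Circular cross-correlation via FFT; its peak location gives the spectral drift.
xcorr = np.fft.ifft(np.fft.fft(strained) * np.conj(np.fft.fft(reference))).real
drift = int(np.argmax(xcorr))
```

Because the autocorrelation of a Rayleigh-like spectrum is sharply peaked, the lag of the correlation maximum recovers the shift in frequency bins, which is then converted to a local strain (or temperature) value.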
The resulting cross-correlation spectrum allows the spectral drift between the reference and the measured section to be evaluated precisely over the selected fiber length. The spectral drift is proportional to the strain (or temperature), so that the local strain or temperature can be calculated easily. In order to obtain a truly distributed strain sensor, this process is repeated for each section of the FUT, successively. After selecting the desired length and location of the FUT, the desired fiber length (spatial resolution) ($\Delta x$) and the sensor spacing ($\delta x$), an optical backscattering reflectometer (OBR) provides distributed strain values along the desired region of the FUT as shown in Fig. \[fig:Geometry\]. ![Diagram of the optical systems used during measurements of the fiber triplet catheter. Illustration of a catheter divided into segments $i$. Each segment, defined within its own ($x_i^{'}$,$y_i^{'}$,$z_i^{'}$) frame, can then be expressed in the tracking space $(x,y,z)$. The cross-section of the triplet of radius $a$ shows the angle between $x_i^{'}$ and the rotational axis, the distance between the center of the fiber triplet $r_i$, the angle offset $\alpha_i$, as well as the angle $\varphi$ between each fiber.[]{data-label="fig:Geometry"}](FiberGeometry){height="1.8in"} Once OFDR is performed to evaluate the strain distributed along each fiber, a geometrical model proposed by Froggatt et al. [@froggatt1998high] is used to evaluate the position of the fiber triplet in tracking space. The core idea is to divide the triplet into segments $i$ and evaluate the position of each segment in its own frame ($x_i^{'}$,$y_i^{'}$,$z_i^{'}$). We use geometrical assumptions to find the angle ($\alpha_i$) between the $x_i^{'}$ axis and the rotational axis of this segment, as shown in Fig. \[fig:Geometry\]. 
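Under the standard bending model for such a triplet, the strain seen by the core at angle $\theta_j \in \lbrace 0, 2\pi/3, 4\pi/3\rbrace$ is $\varepsilon_j = \kappa a \cos(\theta_j - \alpha)$, where $\kappa$ is the local curvature and $\alpha$ the bend direction; a phasor sum then recovers both quantities at once. The numpy sketch below uses illustrative values of our own (an assumption-laden toy, not the reconstruction of the cited references):

```python
import numpy as np

a = 75e-6                                     # assumed core-to-center distance [m]
thetas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])   # 120-degree fiber layout

def strains(kappa, alpha):
    """Bending strain seen by each core for curvature kappa and bend angle alpha."""
    return kappa * a * np.cos(thetas - alpha)

def recover(eps):
    """Phasor identity: (2 / (3 a)) * sum_j eps_j e^{i theta_j} = kappa e^{i alpha}."""
    z = (2.0 / (3.0 * a)) * np.sum(eps * np.exp(1j * thetas))
    return abs(z), np.angle(z)

kappa_est, alpha_est = recover(strains(5.0, 0.8))   # a 5 m^-1 bend at 0.8 rad
```

The identity holds because the three direction phasors $e^{i\theta_j}$ sum to zero; per-segment curvature and bend direction obtained this way can then be chained into the 3D shape.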
Assuming $a_{ij}$ is the distance between the triplet center and the core of fiber $j$, $\varphi_{ijk}$ is the angle between each pair of fiber cores $j$ and $k$ ($j$ and $k = \{1,2,3\}$, $k \neq j$) and $r_i$ is the distance between the triplet center and the rotational axis of this segment, the angle offset $\alpha_i$ and radius $r_i$ of the triplet can be obtained. The curvature and position of the segment tip in its own frame ($x_i^{'}$,$y_i^{'}$,$z_i^{'}$) can then be evaluated. By applying a succession of projections and using rotation matrices, one can express these results in the laboratory frame ($x_i$,$y_i$,$z_i$) to reconstruct the entire 3D shape at a time $t$. For more details see [@froggatt2010fiber]. Roadmapping of hepatic arteries -------------------------------- Prior to navigation, the hepatic arterial tree used to map the sensed catheter shape and location onto the patient’s anatomy is obtained through a segmentation algorithm that allows for the extraction of a complete 3D mesh model from a contrast-enhanced MR angiography (MRA) [@BADOUAL]. The algorithm automatically detects the aorta and celiac trunk using an elliptical Hough transform following vesselness filtering on the MRA. An initial cylindrical triangular mesh is created around the detected aorta and deformed to fit the walls of the arteries by minimizing the energy equation $E_{total} = E_{ext} + \beta E_{int}$. The first term represents the external energy driving the deformation of the mesh towards the edges of the vessel, using the magnitude of the intensity gradient vectors on the image. This term drives the triangles’ barycenters towards their most promising positions. The second term is the internal energy $E_{int}$. It limits the deformation by introducing topological constraints to ensure surface coherence, measuring the neighbourhood consistency between the initial and optimized meshes. 
Finally, $\beta$ is a constant which allows for control of the trade-off between flexibility for deformation and surface coherence. Each step of the iterative propagation consists in (a) duplicating a portion of the mesh extremity and translating it to that extremity, (b) orienting it by maximizing the gradient intensity values at its triangles barycenters, and (c) deforming it using the energy term. A multi-hypothesis vessel tracking algorithm is used to detect bifurcations points and vessel paths to guide the adaptation process, generating the paths in the arterial tree and yielding a complete arterial model denoted as $C_{MRA}$. In addition to the earlier arterial phase contrast imaging, a 4D Flow imaging sequence was performed using a flow-encoded gradient-echo sequence with retrospective cardiac triggering and respiratory navigator gating. Anisotropic curvature model matching ------------------------------------ Given a sensed catheter shape at time $t$ and the pre-operative roadmap $C_{MRA}$, the tracked catheter is then mapped to the patient-specific arterial model. We take advantage of the highly accurate curvature properties of the vascular tree to achieve shape correspondance, by using anisotropic heat kernels which are used as weighted mapping functions, enabling to obtain a local description of the intrinsic curvature properties within a manifold sub-space [@boscaini2016anisotropic]. We use an intrinsic formulation where the points are expressed only in terms of the Riemannian metric, which are invariant to isometric (metric-preserving) deformations. Prior to navigation, the 3D hepatic artery mesh model is divided in triangulated regions, which are defined by their unit normal vectors and principal curvature directions. Discretized anisotropic Laplacian sparse matrices are defined for each of these triangles, which include mass and stiffness matrices describing the anisotropic scaling and the rotation of the basis vector around the normal. 
Once the arteries are expressed in spectral curvature signatures, it can be directly matched in real-time with the sensed OFDR data, compensating for respiratory motion. Experiments and Results {#sec:results} ======================= Experimental setup ------------------ The data processing is performed by an Optical Backscattering Reflectometer (OBR4600, LUNA Inc.). The sampling rate was determined based on the system’s maximal capacity (1Hz), and an optical switch (JDSU SB series; Fiberoptic switch) with a channel transition period of 300ms was used to scan each fiber of the triplet, which were each exposed with a focused UV beam (Fig. \[fig.materials\]a) during fabrication. Further data processing for catheter shape reconstruction considering the triplet characteristics was done by our own navigation software. Synthetic vascular models ------------------------- A set of 5 synthetic phantoms, created from stereolithography of patient-specific MRA’s as shown in Fig. \[fig.materials\]b were used to perform $in$ $vitro$ experiments inside an MR-scanner. The catheter was guided to a pre-defined target within the second segmental branch of the hepatic arterial tree on the MRA. Both tip position accuracy (Euclidean distance between virtual and physical tip) and root-mean-square differences (RMS) in the 3D catheter shape (15cm in length) were measured between a confirmation scan and the registered sensed data. Results were compared to EM tracked data, as shown in Table 1. The 3D shape RMS error was obtained by calculating the average point-to-point distances from a series of equidistant points taken along the virtual 3D shape to the closest point on the actual catheter. Compared to previous reports on FBG tracking [@mandal2016vessel], these results show that the navigation accuracy is reliable, while remaining insensitive to MR magnetic fields. 
We also tested the tracking accuracy by measuring the amplitude of the backscatter signal from three types of fibers, which are standard single mode fiber (SMF-28), Germanium-boron doped fiber (Redfern) and hydrogen-loaded SMF-28 exposed to a focused UV beam (UVE-SMF-28). The UVE-SMF-28, which has a backscatter signal 6300 times higher than SMF-28, sees an average enhancement of 39%, reaching 47% for a highly curved regions in the phantom. The best accuracy was reached with the UVE-SMF-28, with an average tip accuracy of $1.1 \pm 0.4$mm and 3D shape error of $1.6 \pm 0.3$mm. ![(a) Fabrication setup with benchtest used only once (outside the clinic) for exposing focused UV beam for Rayleigh scattering on fiber triplet. (b) Example of a synthetic arterial phantom used for in vitro navigation for tracking accuracy assessment.[]{data-label="fig.materials"}](Setup3){width="4.7in"} Animal experiment ----------------- The final experiment consisted in an IRB-approved $in$ $vivo$ navigation with an anesthetized pig model. The pre-operative imaging was performed on a clinical 3T system (Achieva TX, Philips Healthcare, Best, The Netherlands), using a 16-channel thoracic surface coil for signal reception and the integrated 2-channel body coil for signal transmission. The field of view was of 240 x 300 x 80 mm, the acquired resolution 2.85 x 2.85 x 2.80 mm, the reconstructed resolution 1.35 x 1.35 x 1.4 mm, TR = 4.7 ms, TE = 2.7 ms, 8$^{\circ}$ flip angle, readout bandwidth of 498.4 Hz/pixel, SENSE acceleration factor of 2.5, a total of 25 reconstructed cardiac phases and velocity encoding (VENC) of 110 cm/s. Cardiac synchronization was performed using a peripheral pulse unit. Pre-injection acquisitions with respective flip angles of 4 and 20 degrees and the same acquisition parameters were also performed to enable the calculation of native T1 maps. For the clinical setup, only the OBR unit and laptop were required in the interventional suite. 
The experiment consisted in guiding the optical fiber triplet embedded in the catheter with 3 attempts from the femoral artery and into the arterial tree, each following distinct paths. Fig. \[fig.Invivo\]a shows the representation of the arterial tree from the 4D-flow sequence. Fig. \[fig.Invivo\]b presents the corresponding velocities obtained from the 4D model along each of the 3 paths of the the sensed catheter location during guidance. The results illustrate how the velocities drops once the catheter crosses bifurcation B$\#$1 into the common or splenic artery, as well as past B$\#$2.1 into the left or right branch or with B$\#$2.2. This demonstrates the ability to locate the catheter in the arterial tree as it approaches vessel bifurcations. ![(a) Arterial tree model with 4D flow streamlines of an anesthetized pig model with color-coded blood flow velocities. Symbols B$\#$ indicate bifurcations. (b) Mapping of blood flow velocities along various 3 vascular paths, based on tracked catheter location within the pig ’s arterial tree model.[]{data-label="fig.Invivo"}](4DFLOW_.png){height="1.6in" width="2.3in"} (a) ![(a) Arterial tree model with 4D flow streamlines of an anesthetized pig model with color-coded blood flow velocities. Symbols B$\#$ indicate bifurcations. (b) Mapping of blood flow velocities along various 3 vascular paths, based on tracked catheter location within the pig ’s arterial tree model.[]{data-label="fig.Invivo"}](Velocities){height="1.7in" width="2.6in"} (b) Conclusion ========== We proposed a novel MR-compatible guidance system using an optical shape sensing catheter based on optical frequency domain reflectometry. Our system is the first to offer a fully distributed sensing device using Rayleigh scattering on UV exposed SMF fibers for navigation. In comparison to other single mode fibers, the UVE-SMF-28 allows to increase diffusion properties, leading to an improvement in tracking accuracy. 
Results show that this method offers tracking accuracies similar to theoretical estimations and EM tracking. Because the mapping is obtained with no user interaction using robust heat kernels to match curvature features, the proposed approach could be transposed to clinical practice for TACE of liver HCCs. Future work will improve the refresh rate with a high performance OBR (5Hz) and further experimentation with porcine models.\ **Acknowledgments:** We thank Drs. Guillaume Gilbert and An Tang for their contribution in the 4D-Flow sequence.
{ "pile_set_name": "ArXiv" }
--- abstract: 'A reliable single photon source is a prerequisite for linear optical quantum computation and for secure quantum key distribution. A criterion yielding a conclusive test of the single photon character of a given source, attainable with realistic detectors, is therefore highly desirable. In the context of heralded single photon sources, such a criterion should be sensitive to the effects of higher photon number contributions, and to vacuum introduced through optical losses, which tend to degrade source performance. In this paper we present, theoretically and experimentally, a criterion meeting the above requirements.' author: - 'Alfred B. U’Ren$^{1,2}$' - Christine Silberhorn$^1$ - 'Jonathan L. Ball$^1$' - Konrad Banaszek$^1$ - 'Ian A. Walmsley$^1$' title: 'Characterization of the non-classical nature of conditionally prepared single photons' --- High fidelity single photon sources are an essential ingredient for quantum-enhanced technologies including linear optical quantum computation (LOQC) and secure quantum key distribution. Thus, the endeavor to generate single photons in controlled, well-defined spatio-temporal modes is an active area of research. Current single photon source candidates can be classified into two categories: deterministic sources producing single photons on demand at predefined trigger times and heralded single photon sources relying on the spontaneous emission of distinguishable photon pairs in conjunction with conditional preparation. While the emission times for conditional single photon sources cannot be controlled beyond the restriction of emission time slots through a pulsed pump, it has been shown that waveguided PDC can yield heralded single photons in well defined modes together with high collection efficiencies[@uren04]. Conditional state preparation has been utilized in various physical systems including atomic cascades [@grangier86], ensembles of cold atoms [@chou04] and in parametric downconversion (PDC). 
In the case of PDC, conditional preparation was first reported by Mandel *et al.*[@hong86] and since then has been optimized to generate approximately true $n=1$ Fock states [@uren04; @rarity87; @kwiat91; @lvovsky01; @alibart04; @pittman04]. In order to assess the performance of heralded single photon sources a criterion that takes into account the detrimental contributions of higher photon numbers and optical losses is needed. In addition, such a criterion should take into full consideration limitations of existing photodetectors such as the binary behavior of avalanche photodiodes operated in the Geiger mode where a single click signifies the detection of one or more photons. In this paper we derive such a criterion and show that our previously reported waveguided PDC source[@uren04] represents a high fidelity source of heralded single photons. A standard approach used to determine whether a light source exhibits classical or quantum photon statistics is the measurement of a $g^{(2)}(\tau)$ second-order intensity autocorrelation function in a Hanbury-Brown Twiss geometry. The semi-classical theory of photodetection predicts, firstly, that $g^{(2)}(0) \geq g^{(2)}(\tau)$ for all time delays $\tau$, and, secondly, that $g^{(2)}(0) \geq 1$. The observation of photon anti-bunching, *i.e.* $g^{(2)}(0) \leq g^{(2)}(\tau)$, has been utilized, for example, to verify the non-classical character of deterministic single photon sources implemented by strongly coupled atom cavity systems [@mcKeever04]. For PDC sources, the probability of generating simultaneously two photon pairs at a given instant of time is of the same order as the probability of generating two independent pairs separated by the interval $\tau$. This obliterates the effect of antibunching, unless we employ selective heralding that identifies specifically a single-pair component. 
For PDC sources the non-classical character of the generated radiation is usually tested by violating the lower bound on the second-order intensity autocorrelation function $g^{(2)}(0) \geq 1$. The value of $g^{(2)}(0)$ constitutes a figure of merit which determines the degree to which higher photon number contributions degrade the single photon character [@alibart04]. Based on a classical wave description and intensity measurements, Grangier *et al.* derived from the Cauchy Schwarz inequality a similar “anti-correlation” criterion for characterizing conditionally prepared single photons by coincidence detection rates [@grangier86]. For the experimental configuration shown in Fig. \[Fi:BBineqSchematic\] an anti-correlation parameter: $$\alpha=\frac{R_1 R_{123} }{R_{12} R_{13}}$$ can be defined, which indicates non-classical photon statistics for $\alpha <1$, where $R_i$ represents the singles count rates at detector $i$, and $R_{ij}$, $R_{ijk}$ the double and triple coincidences for the respective detectors $i,j,k$. A variant of the $g^{(2)}(0)$ measurement specifically designed to study conditional single photon sources independently from losses, which has been pioneered by Clauser [@Clauser74], has recently been implemented for single photons generated from an ensemble of cold atoms [@chou04]. In the above works, the theoretical modeling of experimental data was carried out in terms of intensity correlation functions. In a typical experiment, however, the count rates are directly related to light intensities only under certain auxiliary assumptions. The reason for this is that standard photodetectors sensitive to single photons, such as avalanche photodiodes operated in the Geiger regime, do not resolve multiphoton absorption events and yield only a binary response telling us whether at least one photon was present in the detected mode or none at all. 
With such detectors, the light intensity can be read out from the count rates only in the limit of weak fields, where the probability of detecting a single photon is proportional to the intensity. In a general case, the probability of obtaining a click is a nonlinear function of the incident intensity. This aspect is particularly important in schemes utilizing ultrashort pulses, where the incoming light energy is concentrated in sub-picosecond time intervals that cannot be resolved even by the fastest photodetectors. It is therefore interesting to go beyond the basic intensity correlation theory and examine whether count statistics collected with binary non-photon-resolving detectors can serve as a test of source non-classicality. We will demonstrate in the following that this is indeed the case. Furthermore, the non-classicality criterion based on measuring $g^{(2)}(0)$ relies on a coincidence basis measurement so that losses can be neglected. However, for applications such as cascaded logic gates in LOQC[@knill01] and loophole free tests of Bell inequalities[@kwiat95] post-selection is not desirable, as it leads to vacuum contamination. The latter diminishes the usability of the single photon states: heralding no longer necessarily corresponds to the successful generation of a single-photon or LOQC gate operation. In this paper we derive a criterion designed to test the non-classical nature of conditionally prepared single photon states. Our criterion takes into account both, the non-linearity of the detectors and the fidelity of the generated single-photon state, which measures the probability that a single-photon is actually present when it is heralded. The criterion can be tested in a standard setup in which the signal field is subdivided into two submodes, each monitored by a non-photon number resolving detector. Consider a source emitting two light beams whose intensities, integrated over the detector active area, are $W_A$ and $W_B$. 
In the semiclassical theory of photodetection we will treat these intensities as positive-definite stochastic variables described by a joint probability distribution ${\cal P}(W_A;W_B)$. Beam $B$ is divided by a beam splitter with power reflection and transmission coefficients $r$ and $t$. Finally, the resulting beams are detected by three photodetectors. We will assume that the probability of obtaining a click on the $i$th detector illuminated by intensity $W$ is given by $p_i(W)$, bounded between $0$ and $1$. We furthermore assume $p_i(W)$ to be a monotonic increasing function of its argument $W$. Under these assumptions it is easy to show that the following inequality is satisfied for an arbitrary pair of arguments $W_B$ and $W'_B$: $$[p_2(rW_B) - p_2(rW'_B)][p_3(tW_B) - p_3(tW_B')] \ge 0$$ Indeed, the sign of both the factors in square brackets is always the same, depending on the sign of the difference $W_B-W'_B$; their product is therefore never negative. Let us now multiply both sides of the above inequality by the factor ${\cal P}(W_A;W_B){\cal P}(W_A';W_B')p_1(W_A)p_1(W'_A)$ which is likewise nonegative, and perform a double integral $\int_0^\infty dW_A dW_B \int_0^\infty dW_A' dW_B'$. This yields the inequality: $$\label{R1R123-R12E13}B= R_1 R_{123} - R_{12} R_{13} \ge 0$$ where the single, double, and triple count rates are given by averages $\langle \ldots \rangle = \int_0^\infty dW_A dW_B {\cal P}(W_A;W_B) \ldots$ defined with respect to the probability distribution ${\cal P}(W_A;W_B)$: $$\begin{aligned} R_1 & = & \langle p_1(W_A) \rangle \\ R_{12} & = & \langle p_1(W_A) p_2(rW_B) \rangle \\ R_{13} & = & \langle p_1(W_A) p_3(tW_B) \rangle \\ R_{123} & = & \langle p_1(W_A) p_2(rW_B) p_3(tW_B) \rangle\end{aligned}$$ It is seen that the inequality derived in Eq. (\[R1R123-R12E13\]) which can be transformed into: $$\frac{R_1 R_{123}}{R_{12} R_{13}} \ge 1$$ has formally the same structure as the condition derived by Grangier [*et al.*]{}[@grangier86]. 
However, the meaning of the count rates is different, as we have incorporated the binary response of realistic detectors. It is noteworthy that this inequality has been derived with a very general model of a detector, assuming essentially only a monotonic response with increasing light intensity. Our experimental apparatus is similar to that reported in Ref. [@uren04]. PDC is generated by a KTP nonlinear waveguide pumped by femtosecond pulses from a modelocked, frequency doubled 87MHz repetition rate Ti:sapphire laser. In contrast to that reported in Ref. [@uren04], the approach here is to record time-resolved detection information for the three spatial modes involved with respect to the Ti:sapphire pulse train as detected by a fast photodiode. We thus obtain a reference clock signal with respect to which post-detection event selection can be performed in order to implement temporal gating. The latter is important for the suppression of uncorrelated background photons, the presence of which can lead to heralded vacuum (rather than a true single photon). Through this approach, we are able to freely specify the time-gating characteristics; arbitrarily complicated logic can be performed without added experimental hardware. Drawbacks include the lack of real-time data processing as well as the deadtime in the region of $\mu$s between subsequent triggers exhibited by the digital oscilloscope (LeCroy WavePro 7100) used for data acquisition. In our setup, source brightness information is obtained via a separate NIM electronics-based measurement. For a given trigger event, three numbers are recorded: the time difference between the electronic pulse positive edge corresponding to the trigger and to the two signal modes $t_{S1}$, $t_{S2}$, as well as the trigger-clock reference time difference $t_{CLK}$. 
Time-gating involves discarding trigger events outside a certain range of $t_{CLK}$ values, while coincidence events with $t_{S1}$ and $t_{S2}$ outside a $1.1$ns wide coincidence window are regarded as accidental and ignored. We collected $75000$ trigger events and measured pre-time gating detection efficiencies (defined as the rate of coincidences normalized by singles) for each of the two signal channels of $14.4\%$ and $13.7\%$. Fig. \[Fig:GrangierData\] shows the post-processed data using a scanned temporal band-pass filter with $300$ps width (selected to approximately match the measured APD jitter). Fig. \[Fig:GrangierData\](A)\[(B)\] shows the time-resolved signal$_1$-trigger \[signal$_2$-trigger\] coincidence count rate, compared to the time-resolved trigger singles count rate. Fig. \[Fig:GrangierData\](C)\[(D)\] shows the resulting time-gated detection efficiency for the signal$_1$ \[signal$_2$\] channel, showing maximum values of $\sim17.4\%$ \[$\sim17.0\%$\]. Fig. \[Fig:GrangierData\](E) shows time-resolved triple coincidences, for identical coincidence windows as used in computing double coincidences. Thus, our time-gating procedure filters the PDC flux so that for the pump-power used the generated light is described essentially by a superposition of vacuum with single photon pairs, showing nearly vanishing multiple pair generation. Fig. \[Fig:GrangierData\](F) shows the time-resolved inequality parameter \[see Eq. \[R1R123-R12E13\]\] resulting from the count rates presented above. As a numerical example, at the peak of the triples counts, we obtain the following time-gated counting rates for $75000$ trigger events: $R_{123}=2$, $R_{12}=5329$, $R_{13}=5067$ and $R_1=30629$, yielding an inequality parameter value of $B=-0.029 \pm .001$. For comparison, our results correspond to a value of the anti-correlation parameter of $\alpha= (2.3 \pm 1.6)\times 10^{-3}$. Fig. 
\[Fig:GrangierData\] indicates an overall signal transmission \[defined as the sum of the two individual efficiencies $(R_{12}+R_{13})/R_1$\] of $\sim$34.5$\%$. The main contribution to losses is the non-unit quantum efficiency of the single photon detectors. The overall detection efficiency is also degraded due to imperfect optical transmission and remaining unsuppressed uncorrelated photons. From the above count rates, we can also calculate $g^{(2)}(0)=2 p_{(2)}/p_{(1)}^2$ in terms of the probability of observing a single photon in the signal arm $p_{(1)}=(R_{12}+R_{13})/R_1$ and the probability of observing two photons in the signal arm $p_{(2)}=R_{123}/R_1$. We thus obtain $g^{(2)}(0)=(1.1\pm 0.8)\times 10^{-3}$, amongst the lowest reported for a single photon source. Ignoring the spectral and transverse momentum degrees of freedom, the signal and idler photon-number distribution in a realistic PDC source is expressed as: $$|\Psi\rangle=\sqrt{1-|\lambda|^2}\sum\limits_{n=0}^\infty \lambda^n|n\rangle_s|n\rangle_i$$ where $n$ represents the photon number describing each of the signal and idler modes and $\lambda$ represents the parametric gain. PDC experiments often operate in a regime where $\lambda$ is small enough that the probability of multiple pair generation becomes negligible. For larger values of $\lambda$ (accessed for example by a higher pump power or higher non-linearity), however, the higher order terms (e.g. $|2\rangle_s|2\rangle_i$, $|3\rangle_s|3\rangle_i$...) become important. While these higher photon number terms are desirable for conditional preparation via photon number resolving detection, in the context of the present work, where the detectors used *are not* photon-number resolving and where the emphasis is on high-fidelity preparation of *single* photons, multiple pair generation must be avoided. 
As discussed earlier,in order to characterize a source of conditionally prepared single photons based on PDC, besides the parametric gain $\lambda$, optical losses must be taken into account. Losses in the signal arm imply that a trigger detection event can incorrectly indicate the existence of a signal photon, while in reality vacuum is present. Fig. \[Fig:BBtheo\] shows the expected inequality behavior based on a quantum mechanical calculation in which it is assumed that the detection probability is given by the expectation value of the operator $1-\exp(-\eta \hat{W})$ (where $\hat{W}$ is the time-integrated incoming intensity operator and $\eta$ is the corresponding overall transmission including all optical and detection losses). Fig. \[Fig:BBtheo\](A) shows the calculated inequality parameter $B$ for PDC light as a function of the overall signal optical transmission $\eta_s=(R_{12}+R_{13})/R_1$ for a fixed value of the parametric gain $\lambda$. Fig. \[Fig:BBtheo\](B) shows the inequality coefficient as a function of the parametric gain $\lambda$ for different levels of optical loss. Note that a strong violation of the inequality is only observed in the low parametric gain limit coupled with low losses. Note further that the minimum value of $B$, corresponding to the strongest violation and which is only reached in the ideal lossless case, is $-0.25$. In an experimental realization, while accessing very low values of $\lambda$ is straightforward *e.g.* by using a low pump power, attaining a sufficiently low level of loss to yield a nearly ideal violation is challenging. An analysis of expected detection rates, under the assumption that all uncorrelated photons in the trigger arm are suppressed, yields the parametric gain $\lambda$ in terms of experimentally measurable quantities: $$\lambda^2=\frac{R_2+R_3}{\eta_s R_{rep} (1+f)}$$ where $R_{rep}$ is the pump repetition rate and $f$ is the uncorrelated photon intensity normalized by that of PDC. 
We estimate that in our experiment $f$ is constrained by: $0<f\lesssim 2$. Our experimental values of $R_2+R_3\approx 70000 s^{-1}$, $R_{rep}=87\times10^6 s^{-1}$ and $\eta_s=0.345$ thus yield: $0.016<\lambda<0.047$. The experimentally observed violation \[see Fig. \[Fig:GrangierData\](F)\] is in good agreement with the theory curves in Fig. \[Fig:BBtheo\]. The black squares in Fig. \[Fig:BBtheo\](A) and (B) depict the observed violation as compared with the theoretical curves, where the uncertainty is smaller than the square dimensions. The plot in Fig. \[Fig:BBtheo\](A) assumes a fixed value of the parametric gain $\lambda$ (with different curves shown for a choice of $\lambda$ values). The signal arm transmission is obtained as the sum of the two individual signal detection efficiencies \[see Fig. \[Fig:GrangierData\](A) and (B)\]. In summary, we have derived a criterion which allows a conclusive test of the single photon character of conditionally prepared single photon states. We have shown that the inequality in Eq. \[R1R123-R12E13\] is fulfilled by all classical light sources, as well as by states generated by PDC exhibiting higher photon numbers through a large parametric gain. On the contrary, a strong violation of the inequality is observed only for states that constitute a good approximation to a conditionally prepared single photon. Our criterion is realistic enough to include binary non-photon number resolving photon counting detectors while it is sensitive to the degradation observed in the prepared state caused by a vacuum component due to losses, crucial for assessing heralded single photon source performance. Through the application of our criterion it is shown that our waveguided PDC source[@uren04] constitutes a high-fidelity conditional single photon source. Our derived inequality yields a new figure of merit quantifying the overall performance of conditional single photon sources taking into full consideration experimental imperfections. 
[99]{} A.B. U’Ren, Ch. Silberhorn, K. Banaszek, I.A. Walmsley, Phys. Rev. Lett. **93**, 093601 (2004) P. Grangier, G. Roger, A. Aspect, Europhys. Lett. **1**, 173 (1986) C.W.Chou, S.V. Polyakov, A. Kuzmich, H.J. Kimble, Phys. Rev. Lett. **92**, 213601 (2004) C. K. Hong and L. Mandel,Phys. Rev. Lett. **56**, 58 (1986) J. McKeever *et al.*, Science **303**, 1992 (2004); M. Hennrich, T. Legero, A. Kuhn, and G. Rempe, quant-ph/0406034 (2004) J. G. Rarity, P. R. Tapster, and E. Jakeman, Opt. Comm. [**62**]{}, 201 (1987). P.G. Kwiat, R.Y. Chiao, Phys. Rev. Lett. **66**, 588 (1991) A. I. Lvovsky *et al.*, Phys. Rev. Lett. [**87**]{}, 050402 (2001). O. Alibart, S. Tanzilli, D. B. Ostrowsky, P. Baldi, quant-ph/0405075 T. B. Pittman, C. C. Jacobs, J. D. Franson, quant-ph/0408093 J. F. Clauser, Phys. Rev. D 9, 853 (1974) E. Knill, R. LaFlamme and G.J. Milburn, Nature **409**, 46 (2001); T.C. Ralph, A.G. White, W.J. Munro and G.J. Milburn, Phys. Rev. A **65**, 012314 (2001); T.B. Pittman, M.J. Fitch, B.C. Jacobs and J.D. Franson, quant-ph/0303095 (2003); M. Fiorentino and F.N.C. Wong, Phys. Rev. Lett. **93**, 070502 (2004). P. G. Kwiat *et al.* Phys Rev. Lett. **75**, 4337 (1995); G. Weihs *et al.* Phys. Rev. Lett. 81, 5039 (1998). M.G. Roelofs, A. Suna, W. Bindloss and J.D. Bierlein, J. Appl. Phys. **76**, 4999 (1994).
{ "pile_set_name": "ArXiv" }
--- abstract: 'Pseudoheating of ions in the presence of Alfvén waves is studied. We show that this process can be explained by $E\times B$ drift. The analytic solution obtained in this paper are quantitatively in accordance with previous results. Our simulation results show that the Maxwellian distribution is broadened during the pseudoheating; however, the shape of the broadening distribution function depends on the number of wave modes (i.e., a wave spectrum or a monochromatic dispersionless wave) and the initial thermal speed of ions ($v_{p}$). It is of particular interests to find that the Maxwellian shape is more likely to maintain during the pseudoheating under a wave spectrum compared with a monochromatic wave. It significantly improves our understanding of heating processes in interplanetary space where Alfvénic turbulences exist pervasively. Compared with a monochromatic Alfvén wave, $E\times B$ drift produces more energetic particles in a broad spectrum of Alfvén waves, especially when the Alfvénic turbulence with phase coherent wave modes is given. Such particles may escape from the region of interaction with the Alfvén waves and can contribute to fast particle population in astrophysical and space plasmas.' author: - | Chuanfei Dong$^\mathrm{a}$[^1] and Nagendra Singh$^\mathrm{b}$[^2]\ [$^\mathrm{a}$[Department of Atmospheric, Oceanic and Space Sciences, University of Michigan,]{}]{}\ [Ann Arbor, MI 48109, U.S.A.]{}\ [$^\mathrm{b}$[Department of Electrical and Computer Engineering, The University of Alabama, ]{}]{}\ [Huntsville, AL 35899, U.S.A.]{} title: 'Ion pseudoheating by low-frequency Alfvén waves Revisited' --- [**PACS: 52.50.-b, 52.35.Mw, 96.50.Ci** ]{} =0.32in Introduction ============ Plasma heating and acceleration are hot topics in the fields of nuclear fusion, plasma physics and astrophysics for a long time. A great number of heating mechanisms and theories have been proposed. 
In a collisionless space plasma environment, wave-particle interactions play significant roles of plasma heating and acceleration[@singh1; @chenPOP; @WangGRL; @Ts; @daiPRL; @ben]. Even when collisions are included, plasmas can still be heated by the interplay between waves and particles to a certain degree[@dongPOP; @DTp; @Leake]. Among a suite of electromagnetic waves, Alfvén waves are generally thought to be the major contributor to the ion heating and acceleration since they exist pervasively in the solar wind and interplanetary space[@a1; @a2; @a3; @jinApJ]. In thermodynamic sense, heating of ions by electromagnetic waves will lead to the dissipation of wave fields, thereby being an irreversible process. Several recent studies however show that ions can be heated by turbulent Alfvén waves in low-beta plasmas even when no dissipation of wave fields occurrs[wangPRL,wuPRL,wangPOP,yoonPOP,wuPOP1,bwangPOP,SAPL,YN]{}; a process that is called “*pseudoheating*” or nonresonant wave-particle interaction. Various methods have been used to validate this heating mechanism such as the test-particle approach with analytic solutions[@wangPRL; @wangPOP], and quasi-linear theories[@wuPRL; @SAPL]. In fact, the name pseudoheating may not be the best term to describe this “heating” process due to the kinetic effects of wave spectra[wuPRL,yoonPOP]{}. However, we still use this appellation for consistency in this paper. The pseudoheating is caused by the wave forces (or their spectra) that result in a deformation of the distribution function with respect to its initial Maxwellian shape. The mean-square velocity fluctuation due to the wave activity leads to an effective broadening of the distribution function similar to real heating and thus could mimic the genuine heating process[@DV]. 
It is important to point out that the heating shown by Ref.[@wangPRL] contains both pseudoheating and genuine heating, which indicates that even in the particles’ mean-velocity frame, the random kinetic energy of particles still increases via wave-particle interactions. The real heating (or the irreversible dissipation of the wave fields) is caused by the *initial* pitch-angle scattering of newly created ions[@bwangPOP], indicating that dissipation of the wave fields does occur. In contrast, the first adiabatic invariant (magnetic moment $\mu=w_{\perp}/B$) remains constant during the pseudoheating, and thus it is a reversible process. The real heating in Refs.[@bwangPOP; @wangPRL] is caused by the violation of the first adiabatic invariant due to the abrupt spatial change of the magnetic field. It is noteworthy that there is a factor of two difference between the temperature expressions in Ref.[@wangPRL] and Ref.[@wuPRL], resulting from the fact that quasilinear theory cannot treat the wave damping appropriately. In addition, Wang *et al.* pointed out that pitch-angle scattering plays a key role in the pseudoheating process[@wangPOP]. In this paper, the heating process we focus on is restricted to the pseudoheating, and thus the real heating is excluded. We demonstrate that a low-frequency Alfvén wave propagating along the background magnetic field ${\mathbf{B}}_{0}={B}_{0}{\mathbf{i}}_{z}$ can *heat* ions. The pseudoheating can be explained either by the $E\times B$ drift (in the electric field of the Alfvén wave and the ambient magnetic field $B_0$) proposed in this paper or by the pitch-angle scattering previously investigated[@wangPOP]. Our analytic results, as will be shown below, are identical to those derived from quasilinear theory[@wuPRL]. 
As shown in previous work[@wuPRL; @wangPOP]: $$T_{p\perp }\simeq T_{0}+\frac{W_{B}}{n_{p}k_{B}}=T_{0}\left( 1+\frac{1}{{\beta }_{p}}\frac{B_{W}^{2}}{B_{0}^{2}}\right) \label{t_ps}$$where $W_{B}=B_{W}^{2}/2\mu _{0}$ and $n_{p}$ are the wave magnetic field energy density and proton number density, respectively. ${\beta }_{p}$ denotes the proton $\beta$, which equals $\left\langle v\right\rangle ^{2}/v_{A}^{2}$ ($\left\langle v\right\rangle $: the thermal speed). $B_{W}^{2}/B_{0}^{2}$ represents the ratio of the wave-field energy density to that of the ambient field, and $T_{0}=m_{p}\left\langle v\right\rangle ^{2}/2k_{B}$ is the initial proton temperature. Here $T_{p\perp }$ represents the “*apparent temperature*” that should be distinguished from the temperature associated with a real heating process[@wangPOP]. The structure of the remainder of this paper is as follows: We derive and discuss the analytic results of pseudoheating based on $E \times B$ drift in Section 2. In Section 3, test particle simulation results are presented and discussed based on the comparison of a monochromatic Alfvén wave and a wave spectrum. We also briefly discuss the recent observations of large amplitude magnetic perturbations associated with Alfvén waves and their correlation to the pseudoheating. In the last section, conclusions are summarized. Analytic Theory of Pseudoheating ================================ Without loss of generality, left-hand circularly polarized Alfvén waves are considered in this paper. 
The wave magnetic field vector can thus be expressed as $$\mathbf{B}_{W}=\sum_{k}{B}_{k}(\cos {\phi }_{k}{\mathbf{i}}_{x}-\sin {\phi }_{k}{\mathbf{i}}_{y}).$$From Faraday’s law and the dispersion relation $\omega=kv_A$, the electric field vector can be written as $$\mathbf{E}_{W}=-v_{A}{\mathbf{i}}_{z}\times \mathbf{B}_{W} \label{efv}$$where ${\mathbf{i}}_{x}$, ${\mathbf{i}}_{y}$ and ${\mathbf{i}}_{z}$ are unit directional vectors, ${\phi }_{k}=k({v}_{A}t-z)+\varphi _{k}$ denotes the wave phase, $\varphi _{k}$ is the random phase for mode $k$ and $v_{A}=B_{0}/\sqrt{\mu _{0}n_{p}m_{p}}$ represents the Alfvén speed. According to the linear approximation, we use Eq.(\[efv\]) and the first order term of the generalized Ohm’s law, $\mathbf{E}_{W} = -\mathbf v_{\perp} \times \mathbf B_0$, to derive the $E\times B$ drift velocity, $\mathbf{v}_{E}$, which can be expressed as follows: $$\mathbf{v}_{E}=\mathbf{v}_{\perp }=\frac{\mathbf{E}_{W}\times \mathbf{B}_{0}}{B_{0}^{2}}=\frac{-v_{A}\left({\mathbf{i}}_{z}\times \mathbf{B}_{W}\right)\times \mathbf{B}_{0}}{B_{0}^{2}} \label{vdrift}$$ The derivation above is consistent with the traditional procedure to derive the classical Alfvén wave solution[@DV] in the ideal magnetohydrodynamic (MHD) system. It indicates that the randomized proton motion is actually parasitic to the wave fields, due to the fact that the drift velocity $\mathbf{v}_{E}$ is expected to disappear if the waves subside. 
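As a quick numerical cross-check of Eq.(\[vdrift\]), the sketch below (normalized units, with a hypothetical wave amplitude and phase that are not taken from the simulations of this paper) evaluates the drift for a single mode and confirms that $\mathbf{v}_{E}=-(v_{A}/B_{0})\,\mathbf{B}_{W}$, so the drift indeed vanishes together with the wave field:

```python
import numpy as np

# Sketch: E x B drift for one left-hand circularly polarized Alfven mode.
# Normalized units B0 = v_A = 1; B_k and phi are assumed illustrative values.
B0, v_A = 1.0, 1.0
B_k = 0.1 * B0                 # wave amplitude (assumed for illustration)
phi = 0.7                      # wave phase k*(v_A*t - z) + varphi_k

iz = np.array([0.0, 0.0, 1.0])
B_W = B_k * np.array([np.cos(phi), -np.sin(phi), 0.0])  # wave magnetic field
E_W = -v_A * np.cross(iz, B_W)                          # Eq. (efv)
v_E = np.cross(E_W, B0 * iz) / B0**2                    # Eq. (vdrift)

# The drift lies in the plane perpendicular to B0, has magnitude
# v_A * B_k / B0, and is anti-parallel to B_W:
print(v_E, np.linalg.norm(v_E))
```

The identity $\mathbf{v}_{E}=-(v_{A}/B_{0})\mathbf{B}_{W}$ makes the “parasitic” character of the proton motion explicit.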
Given the assumption that the characteristic spatial scale of the system is much larger than the typical Alfvén wavelength[@wangPRL], the energy density associated with the wave magnetic field can be written as: $$W_{B}=\frac{1}{2}n_{p}m_{p}\left\langle \left( \mathbf{v}_{E}-\left\langle \mathbf{v}_{E}\right\rangle \right) ^{2}\right\rangle =\frac{1}{2}n_{p}m_{p}v_{A}^{2}\frac{{B}_{W}^{2}}{B_{0}^{2}}=\frac{1}{2}n_{p}m_{p}\frac{B_{0}^{2}}{\mu _{0}n_{p}m_{p}}\frac{{B}_{W}^{2}}{B_{0}^{2}}=\frac{B_{W}^{2}}{2\mu _{0}} \label{WB}$$ where the bracket $\left\langle \cdot \right\rangle $ denotes an average over all particles. Given the large characteristic spatial scale of the system, $l$, as described above, the following approximation is valid: $$\left\langle \mathbf{v}_{E}\right\rangle=\lim_{l\rightarrow \infty}\frac{1}{l} \int_{0}^{l} \mathbf{v}_{E} dz =0$$ The temperature expression shown below is the same as Eq.(\[t\_ps\]): $$T_{p\perp }=T_{0}+\frac{m_{p}}{2k_{B}}\left\langle \left( \mathbf{v}_{E}-\left\langle \mathbf{v}_{E}\right\rangle \right) ^{2}\right\rangle =T_{0}+\frac{B_{W}^{2}}{2\mu _{0}n_{p}k_{B}}=T_{0}\left( 1+\frac{1}{\beta _{p}}\frac{B_{W}^{2}}{B_{0}^{2}}\right) \label{temp}$$ The results shown in Eqs.(\[WB\])&(\[temp\]) are consistent with MHD theory due to the fact that the energy density associated with the Alfvén wave magnetic field, $B_W^2/2\mu_0$, equals the ion (fluid) kinetic energy density, $B_W^2/2\mu_0=B_0^2u_1^2/(2\mu_0v_A^2) =\rho_0 u_1^2/2$, where $u_1$ is the perturbed ion (fluid) velocity. The consistency between Eqs.(\[WB\])&(\[temp\]) and the MHD theory[@YN] clearly indicates that the analytic theory described here incorporates the local equilibrium velocity distribution of the ions. In the following section, we adopt the test particle approach to simulate pseudoheating, and the results will be presented and discussed in detail. 
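The chain of equalities in Eqs.(\[WB\])&(\[temp\]) can be checked with a few lines of code. The sketch below (normalized units; the amplitude $B_W$ is a hypothetical choice) samples the drift of a circularly polarized wave at many random phases, i.e., positions $z$, and recovers $W_{B}=B_{W}^{2}/2\mu _{0}$:

```python
import numpy as np

# Sketch: numerical check of Eqs. (WB) and (temp). The E x B drift of a
# circularly polarized wave is sampled at many random phases (positions z);
# its variance reproduces W_B = B_W^2 / (2 mu0). Units are normalized and
# the amplitude B_W is an assumed value.
rng = np.random.default_rng(0)
B0, v_A, mu0 = 1.0, 1.0, 1.0
n_p, m_p = 1.0, 1.0                 # chosen so v_A = B0/sqrt(mu0 n_p m_p)
B_W = 0.2 * B0                      # total wave amplitude (assumed)

phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
v_Ex = -v_A * (B_W / B0) * np.cos(phi)     # v_E = -(v_A/B0) B_W
v_Ey = v_A * (B_W / B0) * np.sin(phi)

mean = np.array([v_Ex.mean(), v_Ey.mean()])          # ~ 0, cf. Eq. (6)
W_B = 0.5 * n_p * m_p * ((v_Ex - mean[0])**2 + (v_Ey - mean[1])**2).mean()
print(W_B, B_W**2 / (2.0 * mu0))
```
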
Test Particle Simulations of Pseudoheating ========================================== We start with a linearly polarized Alfvén wave with wave magnetic field vector $\mathbf{B}_{W}=\sum_{k}{B}_{k}\cos {\phi }_{k}{\mathbf{i}}_{y}$ in order to show that the drift velocity is in the $E\times B$ direction. This serves as the basis for further understanding. Then we conduct two case studies. In case one, we consider a monochromatic dispersionless Alfvén wave with frequency $\omega $=0.05$\Omega _{p}$. In case two, we test a spectrum of Alfvén waves with random phases $\varphi _{k}$, whose mode frequencies are calculated as follows: $\omega _{i}=\omega _{1}+(i-1)\triangle \omega $ ($i$=1,2,..., $N$; $N$=41), where $\triangle \omega =(\omega _{N}-\omega _{1})/(N-1)$; $\omega _{1}$=0.01$\Omega _{p}$ and $\omega _{N}$=0.05$\Omega _{p}$. The amplitude of each wave mode (only one wave mode in case one) is taken to be equal but changes gradually with time such that $B_{W}^{2}=\sum_{k}B_{k}^{2}=\epsilon (t)B_{0}^{2}$, where $\epsilon (t)=\left\{ \begin{array}{ccc} \epsilon _{0}e^{-(t-t_{1})^{2}/\tau ^{2}}, & if & t<t_{1}, \\ \epsilon _{0}, & if & t_{1}\leq t\leq t_{2}, \\ \epsilon _{0}e^{-(t-t_{2})^{2}/\tau ^{2}}, & if & t>t_{2}.\end{array}\right. $ ![Wave field strength $B_W^2/B_0^2$ (solid line) and perpendicular apparent temperature $T_{p\perp}$ (dashed line) *vs* time; $v_{p}$=0.07$v_{A}$.[]{data-label="fig1"}](B-T.jpg) Similar to previous work[@wangPOP], we set $t_{1}$=500$\Omega _{p}^{-1}$, $t_{2}$=1000$\Omega _{p}^{-1}$, $\tau $=200$\Omega _{p}^{-1}$, and $\epsilon _{0}$=0.05, where $\Omega _{p}$ is the proton gyrofrequency. The initial velocities of the test particles are randomly distributed and follow a Maxwellian distribution with thermal speed $v_{p}$=0.07$v_{A}$ (the situations with $v_{p}$=0.01$v_{A}$, 0.03$v_{A}$, and 0.15$v_{A}$ are also discussed later). The numerical scheme is similar to that described in Refs.[@wangPRL; @dongPOP]. 
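For concreteness, the wave-spectrum setup just described can be sketched as follows (frequencies in units of $\Omega_p$, time in $\Omega_p^{-1}$; this is an illustration, not the authors' code):

```python
import numpy as np

# Sketch of the wave-spectrum setup described above. Frequencies are in
# units of Omega_p and times in 1/Omega_p (assumed normalization).
N = 41
omega = np.linspace(0.01, 0.05, N)          # omega_1 ... omega_N
eps0, t1, t2, tau = 0.05, 500.0, 1000.0, 200.0

def eps(t):
    """Gaussian ramp-up / plateau / ramp-down of the total wave energy."""
    if t < t1:
        return eps0 * np.exp(-(t - t1)**2 / tau**2)
    if t <= t2:
        return eps0
    return eps0 * np.exp(-(t - t2)**2 / tau**2)

def mode_amplitudes(t, B0=1.0):
    """Equal-amplitude modes with sum_k B_k^2 = eps(t) * B0^2."""
    return np.full(N, np.sqrt(eps(t) / N) * B0)

B_k = mode_amplitudes(750.0)                # on the plateau
print(np.sum(B_k**2))
```

On the plateau ($t_1\leq t\leq t_2$) the total wave energy is held at $\epsilon_0 B_0^2$ and shared equally among the $N$ modes.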
It is noteworthy that the ion thermal speed $v_{p}$ in this paper and in Refs.[@dongPOP; @wangPRL; @wangPOP; @bwangPOP] is defined as $v_{p}=\left( k_{B}T/m\right) ^{1/2}$[@book], while the general thermal speed is $\left\langle v\right\rangle =\left( 2k_{B}T/m\right) ^{1/2}$, so that $v_{p}^{2}=\left\langle v\right\rangle ^{2}/2$. This leads to a factor of two difference for the proton $\beta _{p}$. The basic reason for this difference is the definition of temperature, i.e., $T=m\left\langle v\right\rangle ^{2}/2k_{B}$ or $T=mv_{p}^{2}/k_{B}$. The one-dimensional Maxwellian velocity distribution based on $\left\langle v\right\rangle $ and $v_{p}$ can be expressed as follows[@book]: $$f_{v}\left( v_{i=x,y,z}\right) =\frac{n}{\left( \pi \left\langle v_{i}\right\rangle ^{2}\right) ^{1/2}}\exp \left( -\frac{v_{i}^{2}}{\left\langle v_{i}\right\rangle ^{2}}\right) =\frac{n}{\left( 2\pi v_{p}^{2}\right) ^{1/2}}\exp \left( -\frac{v_{i}^{2}}{2v_{p}^{2}}\right) \label{MaxDis}$$Different definitions of the thermal speed or temperature, however, do not affect the final results, since a self-consistent definition is maintained throughout the previous studies[@dongPOP; @wangPRL; @wangPOP; @bwangPOP; @wuPRL]. ![Velocity scatter plots of test particles in the $v_x-v_y$ space for a linearly polarized Alfvén wave $\mathbf{B}_{W}=\sum_{k}{B}_{k}\cos {\protect\phi }_{k}{\mathbf{i}}_{y}$ at $\Omega _{p}$t=0 (left) and $\Omega _{p}$t=600 (right); $v_{p}$=0.07$v_{A}$.[]{data-label="fig11"}](At0vxvy.jpg "fig:") ![](Aparaexbvxvy.jpg "fig:") Fig.\[fig1\] shows the dependence of the apparent temperature ($T_{p\perp }$) on the time-dependent wave field strength $B_{W}^{2}/B_{0}^{2}$ under a spectrum of Alfvén waves. 
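Before turning to the simulation results, the factor-of-two bookkeeping between the two thermal-speed conventions can be verified by sampling Eq.(\[MaxDis\]) directly (a sketch; the sample size is an arbitrary choice):

```python
import numpy as np

# Sketch: the factor-of-two relation between v_p = (k_B T / m)^(1/2) and
# <v> = (2 k_B T / m)^(1/2), checked by sampling the one-dimensional
# Maxwellian of Eq. (MaxDis). Sample size is an arbitrary choice.
rng = np.random.default_rng(1)
v_p = 0.07                          # thermal speed convention of this paper
v_avg = np.sqrt(2.0) * v_p          # the "general" thermal speed <v>

# Each velocity component is Gaussian with standard deviation v_p,
# equivalently <v>/sqrt(2):
vx = rng.normal(0.0, v_p, 1_000_000)
print(vx.var(), v_p**2, v_avg**2 / 2.0)
```
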
The result for a monochromatic dispersionless Alfvén wave is almost the same as that under a spectrum of Alfvén waves, consistent with the analytic result shown above; the temperature expression Eq.(\[temp\]) is independent of the number of wave modes, $N$. In Fig.\[fig1\], the apparent temperature versus time shows the same tendency as the wave field strength, indicating that the proton temperature returns to its original value when the waves subside, in accordance with the work of Wang *et al.*[@wangPOP; @bwangPOP]. It implies once again that the pseudoheating process is parasitic to the waves, as indicated by the aforementioned $E\times B$ drift. Importantly, the consistency between Eqs.(\[WB\])&(\[temp\]) and the numerical solution shown in Fig.\[fig1\] indirectly supports the self-consistency of the test particle simulations debated in Refs.[@Dongr; @Luc]. Furthermore, this process is analogous to the ion motion in a magnetic mirror, where the magnetic moment is invariant. This physical picture helps us to better understand the reversibility of pseudoheating. ![Velocity scatter plots of test particles in the $v_{x}-v_{y}$ space for a circularly polarized Alfvén wave at $\Omega _{p}$t=600. (a) $v_{p}$=0.01$v_{A}$, (b) $v_{p}$=0.03$v_{A}$, (c) $v_{p}$=0.07$v_{A}$, and (d) $v_{p}$=0.15$v_{A}$.[]{data-label="fig31"}](Asvp001va.jpg "fig:") ![](Asvp003va.jpg "fig:") ![](Asvp007va.jpg "fig:") ![](Asvp015va.jpg "fig:") In order to show that the pseudoheating is caused by $E\times B$ drift, we first illustrate the velocity scatter plots of test particles in the $v_x-v_y$ space for a linearly polarized Alfvén wave. According to Eq.(\[efv\]), if $\mathbf{B}_{W}$ is in the $y$ direction, $\mathbf{E}_{W}$ is in the $x$ direction; therefore, $\mathbf{E}_{W}\times \mathbf{B}_{0}$ is in the $y$ direction (ignoring the sign here). As indicated in Fig.\[fig11\], the drift velocity is in the $y$ direction, in agreement with the analytic results. To be consistent with the previous work[@wangPRL; @wuPRL; @wangPOP; @yoonPOP; @wuPOP1; @bwangPOP; @SAPL; @YN], we will focus on the circularly polarized condition in the following paragraphs. The further discussion is based on the comparison of a monochromatic Alfvén wave and a wave spectrum. 
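A minimal version of the kind of test-particle integration used in such studies is sketched below, using a standard Boris pusher for one proton in a single circularly polarized mode (normalized units $B_0=v_A=\Omega_p=1$; this is an illustrative stand-in for, not a reproduction of, the scheme of Refs.[@wangPRL; @dongPOP]):

```python
import numpy as np

# Sketch: one proton pushed through a single left-hand circularly polarized
# Alfven mode with a Boris scheme. Normalized units B0 = v_A = Omega_p = 1,
# charge/mass = 1; parameters are illustrative, not the authors' setup.
B0, v_A = 1.0, 1.0
omega = 0.05                    # wave frequency in units of Omega_p
k = omega / v_A                 # Alfven dispersion omega = k v_A
B_k = np.sqrt(0.05) * B0        # amplitude corresponding to eps0 = 0.05

def fields(z, t):
    phi = k * (v_A * t - z)
    B_W = B_k * np.array([np.cos(phi), -np.sin(phi), 0.0])
    B = B_W + np.array([0.0, 0.0, B0])
    E = -v_A * np.cross(np.array([0.0, 0.0, 1.0]), B_W)
    return E, B

def boris_step(x, v, dt, t):
    """Standard Boris rotation: half electric kick, magnetic rotation,
    half electric kick, then position update."""
    E, B = fields(x[2], t + 0.5 * dt)
    v_minus = v + 0.5 * dt * E
    tvec = 0.5 * dt * B
    s = 2.0 * tvec / (1.0 + np.dot(tvec, tvec))
    v_prime = v_minus + np.cross(v_minus, tvec)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * E
    return x + dt * v_new, v_new

x, v = np.zeros(3), np.array([0.07, 0.0, 0.0])   # one "thermal" proton
dt = 0.05
for n in range(2000):
    x, v = boris_step(x, v, dt, n * dt)
print(np.linalg.norm(v))        # stays bounded: drift ~ v_A * B_k / B0
```

The velocity stays bounded near the drift amplitude $v_{A}B_{k}/B_{0}$ plus the thermal speed, illustrating that no runaway energization occurs for a single nonresonant mode.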
![Scatter plots of protons between 1000 and 2400$v_{A}\Omega _{p}^{-1}$ at different times, $\Omega _{p}$t=0, 200, 300, 600, 1200 and 1500, in the $v_{x}$-$z$ phase space for a circularly polarized Alfvén wave; $v_{p}$=0.07$v_{A}$.[]{data-label="fig3"}](Ast0.jpg "fig:") ![](Ast200.jpg "fig:") ![](Ast300.jpg "fig:") ![](Ast600.jpg "fig:") ![](Ast1200.jpg "fig:") ![](Ast1500.jpg "fig:") Fig.\[fig31\] presents the velocity scatter plots of ions for a circularly polarized Alfvén wave with different initial thermal speeds: $v_{p}$=0.01$v_{A}$, 0.03$v_{A}$, 0.07$v_{A}$ and 0.15$v_{A}$. 
Inspection of Fig.\[fig31\] reveals that when the thermal speed is quite small with respect to the Alfvén speed $v_{A}$ (i.e., $v_{p}$=0.01$v_{A}$ and 0.03$v_{A}$), the test particles form a ring distribution in the $v_{x}-v_{y}$ velocity space. Although the ring distribution can eventually be filled with test particles of different velocities when $v_{p}$ becomes large, the heating efficiency then becomes relatively low \[refer to Eq.(\[temp\])\]. Besides the velocity scatter plot, the particle distribution in phase space is also essential for our understanding of the pseudoheating. Fig.\[fig3\] shows the scatter plots of protons between 1000 and 2400$ v_{A}\Omega _{p}^{-1}$ at different times $\Omega _{p}$t=0, 200, 300, 600, 1200 and 1500 for a monochromatic dispersionless Alfvén wave. The results agree with those shown in Fig.\[fig1\]; the stronger the wave fields are, the more obvious the velocity fluctuations are. A common feature that stands out in Fig.\[fig3\] is that the particle motion is periodic under a monochromatic Alfvén wave. It indicates that the kinetic behavior of test particles under a monochromatic dispersionless wave versus a wave spectrum, as will be shown below, is fairly different. ![Velocity scatter plots of test particles in the $v_{x}-v_{y}$ space for a spectrum of circularly polarized Alfvén waves with random phases $\protect\varphi _{k}$ at $\Omega _{p}$t=600. (a) $v_{p}$=0.01$v_{A}$, (b) $v_{p}$=0.03$v_{A}$, (c) $v_{p}$=0.07$v_{A}$, (d) $v_{p}$=0.15$v_{A}$, (e) $v_{p}$=0.01$v_{A}$, and (f) $v_{p}$=0.03$v_{A}$; (a)-(d): 41 wave modes, (e)-(f): 1001 wave modes.[]{data-label="fig21"}](Amvp001va.jpg "fig:") ![](Amvp003va.jpg "fig:") ![](Amvp007va.jpg "fig:") ![](Amvp015va.jpg "fig:") ![](Avth001w1001.jpg "fig:") ![](Avth003w1001.jpg "fig:") Compared with the ion behavior under a monochromatic wave, the ion motion tends to become random under a wave spectrum, as indicated in Figs.\[fig21\] & \[fig2\]. Fig.\[fig21\] presents the velocity scatter plots of test particles in the $v_{x}-v_{y}$ space for a spectrum of circularly polarized Alfvén waves with random phases $\varphi _{k}$. When the thermal speed is two orders of magnitude smaller than the Alfvén speed $v_{A}$ (i.e., $v_{p}$=0.01$v_{A}$ and 0.03$v_{A}$), the test particles cannot fully fill the circle in the $v_{x}-v_{y}$ velocity space. However, as the initial thermal speed $v_{p}$ increases, the circle in velocity space is fully filled with test particles of different velocities. Figs.\[fig21\](e)&(f) show the velocity scatter plots when adopting 1001 wave modes. Compared with Figs.\[fig21\](a)&(b), Figs.\[fig21\](e)&(f) show that the protons tend to fully fill the circle in velocity space, indicating that the ion velocity distribution tends toward a fully filled circle when the number of wave modes, $N$, is large enough, despite the relatively small thermal speed. This can also be observed by comparing Fig.\[fig31\] and Fig.\[fig21\]. The phase space proton scatter diagrams obtained by adopting a wave spectrum with $N$=41 wave modes are shown in Fig.\[fig2\]. The results, however, reveal significantly different particle distributions in phase space compared with those shown in Fig.\[fig3\]. The main conclusion drawn from Fig.\[fig2\] is that the velocity fluctuations caused by the wave activity are quasirandom, and thereby could mimic real heating (also refer to Fig.\[fig4\]). 
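The contrast between the ring and the filled circle can be traced back to the drift field itself: for a single circularly polarized mode the drift speed $|\mathbf{v}_{E}|=v_{A}B_{W}/B_{0}$ is the same everywhere, while for many random-phase modes it fluctuates in $z$. A short sketch (normalized units; the total wave energy $\epsilon_0$=0.05 follows Sec. 3, the grid is an arbitrary choice):

```python
import numpy as np

# Sketch: drift speed |v_E|(z) for one mode versus a 41-mode random-phase
# spectrum of equal total energy. One mode gives a constant |v_E| (a ring
# for cold particles); a spectrum gives z-dependent fluctuations.
rng = np.random.default_rng(2)
B0, v_A = 1.0, 1.0
z = np.linspace(0.0, 5000.0, 20_000)

def drift_speed(omegas, phases, B_tot2=0.05):
    B_k = np.sqrt(B_tot2 / len(omegas)) * B0
    Bx = sum(B_k * np.cos(w / v_A * (-z) + p) for w, p in zip(omegas, phases))
    By = sum(-B_k * np.sin(w / v_A * (-z) + p) for w, p in zip(omegas, phases))
    return (v_A / B0) * np.hypot(Bx, By)     # |v_E| = v_A |B_W| / B0

mono = drift_speed([0.05], [0.0])
spec = drift_speed(np.linspace(0.01, 0.05, 41), rng.uniform(0, 2*np.pi, 41))
print(mono.std(), spec.std())                # ~0 versus clearly nonzero
```
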
However, as indicated in both Fig.\[fig11\] and Fig.\[fig2\], the pseudoheating caused by these wave activities is reversible, indicating no dissipation of the wave fields, and therefore does not represent real heating in a thermodynamic sense. It is also interesting to investigate the ion behavior under a spectrum of circularly polarized Alfvén waves with the same initial phases $\varphi_k$. It is noteworthy that the wavelengths and wave frequencies of the different wave modes are still different. Fig.\[fig8\] illustrates the scatter plots of the test particles in the $v_x-z$ and $v_x-v_y$ space. There is a pulse-like structure in the protons’ phase space distribution due to the coherence of the wave modes. The proton distribution in the $v_x-v_y$ space consists primarily of two parts: the broadened core Maxwellian distribution and the outer ring structure. The broadening is caused by the wave forces (or their spectra), while the accelerated particles in the outer ring result from Alfvénic turbulence with phase coherent wave modes, as indicated in Ref.[@ra1], where the acceleration of charged particles by large amplitude MHD waves was studied. In contrast, if Alfvénic turbulence without any envelope modulation[@ra2] is given, the “acceleration” may not be observed. These high energy particles in the outer ring may escape from the region of interaction with the Alfvén waves and can contribute to the fast particle population in astrophysical and space plasmas. 
![Scatter plots of protons between 1000 and 2400$v_{A}\Omega _{p}^{-1}$ at different times, $\Omega _{p}$t=0, 200, 300, 600, 1200 and 1500, in the $v_{x}$-$z$ phase space for an Alfvén wave spectrum with random phases $\protect\varphi _{k}$; $v_{p}$=0.07$v_{A}$.[]{data-label="fig2"}](Amt0.jpg "fig:") ![](Amt200.jpg "fig:") ![](Amt300.jpg "fig:") ![](Amt600line.jpg "fig:") ![](Amt1200.jpg "fig:") ![](Amt1500.jpg "fig:") ![Proton scatter plots for a spectrum of circularly polarized Alfvén waves with the same initial phases $\protect\varphi_k$ at $\Omega _{p}$t=600; $v_{p}$=0.07$v_{A}$.[]{data-label="fig8"}](Acompnor.jpg) ![The normalized velocity distribution functions plotted against $v_x$ at $\Omega _{p}$t=0 (solid line) and 600 (dashed line). (a) A spectrum of circularly polarized Alfvén waves with random phases $\protect\varphi_k$; (b) a circularly polarized monochromatic Alfvén wave, $v_{p}$=0.07$v_{A}$; (c) a spectrum of circularly polarized Alfvén waves with the same initial phases $\protect\varphi_k$; (d) the local normalized velocity distribution function based on the statistics of protons in the spatial range $1490<z\Omega_p/v_A<1510$ under a spectrum of circularly polarized Alfvén waves with random phases $\protect\varphi_k$. The black dots denote the shifted local normalized velocity distribution function in the laboratory frame while the white dots represent the local normalized velocity distribution function in the particles’ mean-velocity frame. The local statistical mean velocity equals $v_x(1490<z\Omega_p/v_A<1510)=-0.176v_A$.[]{data-label="fig4"}](Amvd.jpg "fig:") ![](Asvd.jpg "fig:") ![](Aenertail.jpg "fig:") ![](x=1500only.jpg "fig:") In Fig.\[fig4\], we present the normalized velocity distribution functions at different times, $\Omega _{p}$t=0 (solid line) and 600 (dashed line). The broadening of the distribution functions shown in Figs.\[fig4\](a)-(c) is based on the statistics of all the particles with different spatial coordinates $z$, while Fig.\[fig4\](d) is based on the local statistics of protons in the spatial range $1490<z\Omega_p/v_A<1510$, thus being a local normalized velocity distribution function around $z\Omega_p/v_A=1500$, as indicated by the line in Fig.\[fig2\](d). The velocity spreading in Figs.\[fig4\](a)-(c) is caused by averaging over the wave effects. Inspection of Figs.\[fig4\](a)-(b) reveals that the Maxwellian distribution is more likely to be maintained under a wave spectrum than under a monochromatic Alfvén wave, due to the fact that the ion motion under a wave spectrum is quasirandom; this is in good agreement with the analytic derivation by Wu and Yoon using quasi-linear theory[@wuPRL]. It also points towards different kinetic behaviors of particles under a wave spectrum and under a monochromatic Alfvén wave. It is well known that enhanced Alfvénic turbulence exists pervasively in the solar wind and interplanetary space, while in astrophysical observations, measurements of temperature are based on spectroscopic data collected from the source region of interest. Therefore it is very difficult to distinguish real heating from pseudoheating, owing to the restricted spatial resolution of the instruments and the presence of wave forces (or their spectra). The suprathermal tail and small bump shown in Fig.\[fig4\](c), where the $v_x$ axis scale is different from Figs.\[fig4\](a)-(b), correspond to the accelerated (suprathermal) ions due to the wave mode coherence. The bump-on-tail structure may excite plasma instabilities; a detailed discussion of these instabilities, however, is beyond the scope of this paper. 
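The distinction between global (apparent) broadening and a locally Maxwellian distribution can be mimicked with a toy model: thermal velocities plus the $z$-dependent drift of a single mode. (A sketch with assumed parameters; the slab below is taken narrower than the $1490<z\Omega_p/v_A<1510$ range of Fig.\[fig4\](d) so that the drift is nearly constant across it.)

```python
import numpy as np

# Toy model (assumed parameters): Maxwellian thermal velocities plus the
# z-dependent E x B drift of one circularly polarized mode. Globally the
# drift masquerades as heating; locally the Maxwellian width survives.
rng = np.random.default_rng(3)
v_p, v_A, B0 = 0.07, 1.0, 1.0
B_W = np.sqrt(0.05) * B0            # total wave amplitude, eps0 = 0.05
k = 0.05 / v_A                      # omega = k v_A with omega = 0.05

z = rng.uniform(0.0, 2000.0, 400_000)
v_drift = -v_A * (B_W / B0) * np.cos(k * z)        # x-component of v_E
vx = rng.normal(0.0, v_p, z.size) + v_drift        # thermal + drift

global_var = vx.var()                    # apparent (broadened) variance
local = vx[np.abs(z - 1500.0) < 2.0]     # narrow slab where v_E ~ constant
local_var = (local - local.mean()).var() # ~ v_p**2 again
print(global_var, v_p**2 + 0.5 * (v_A * B_W / B0)**2, local_var, v_p**2)
```

Subtracting the local mean velocity, as done for the white dots in Fig.\[fig4\](d), recovers the initial Maxwellian width, while the global variance carries the extra drift term.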
It also needs to point out that here we only consider the pseudoheating and investigate the normalized velocity distribution function corresponding to this process while the real heating via nonresonant interaction with Alfvén waves always coexists[@wangPRL; @Dongr]. The velocity distribution function, therefore, is supposed to be broader than that shown in Fig.\[fig4\] and it is possible that the broadening effects smooth the velocity distribution function and eliminate the bump on the tail. In Fig.\[fig4\](d), we investigate the local normalized velocity distribution function based on the statistics of protons in the spatial range $1490<z\Omega_p/v_A<1510$. If the spatial range is too small, there will be insufficient particles to be counted for statistics. The local statistical mean velocity equals to $v_x$=-0.176$v_A$ which agrees well with the result shown in Fig.\[fig2\](d). The local normalized velocity distribution function in the particles’ mean-velocity frame at $\Omega_pt=600$ (as indicated by the white dots) is in good agreement with the initial Maxwellian distribution based on Eq.(\[MaxDis\]). The slight difference between these two distribution functions results from the fact that the local statistics are based on a small spatial range ($1490<z\Omega_p/v_A<1510$) and therefore it can be potentially affected by the effects of wave spectra. This leads to a slightly broader local velocity distribution. The result shown in Fig.\[fig4\](d) indicates that the analytic theory in Sec.2 is on the basis of the local equilibrium velocity distribution of the ions. ![ (Color online) (a) The spatial distribution of Alfvén wave magnetic fields at $\Omega _{p}$t=600. (b)-(c) The Alfvén wave magnetic field spatial distribution and the corresponding scatter plot in the $v_x-z$ phase space at $\Omega _{p}$t=600; (b) the case with random phases $\protect\varphi_k$ and (c) the case with same phases $\protect\varphi_k$. 
[]{data-label="fig42"}](Bdist600wp.jpg "fig:") ![ (Color online) (a) The spatial distribution of Alfvén wave magnetic fields at $\Omega _{p}$t=600. (b)-(c) The Alfvén wave magnetic field spatial distribution and the corresponding scatter plot in the $v_x-z$ phase space at $\Omega _{p}$t=600; (b) the case with random phases $\protect\varphi_k$ and (c) the case with same phases $\protect\varphi_k$. []{data-label="fig42"}](BdisVx.jpg "fig:") ![ (Color online) (a) The spatial distribution of Alfvén wave magnetic fields at $\Omega _{p}$t=600. (b)-(c) The Alfvén wave magnetic field spatial distribution and the corresponding scatter plot in the $v_x-z$ phase space at $\Omega _{p}$t=600; (b) the case with random phases $\protect\varphi_k$ and (c) the case with same phases $\protect\varphi_k$. []{data-label="fig42"}](BdisVxcor.jpg "fig:") The pseudoheating becomes important when the magnetic perturbation ($\delta B$) is relatively large compared with the background magnetic field ($B_0$). Fig.\[fig42\] illustrates the spatial distribution of the Alfvén wave magnetic fields for different numbers of wave modes, $N$, and the corresponding scatter plot in the $v_x-z$ phase space. The solid curve, for the magnetic field of a monochromatic Alfvén wave, is simply a sinusoidal wave. The dashed and dotted curves represent a spectrum of Alfvén waves with 41 wave modes with random and identical phases $\varphi _{k}$, respectively. Such spectra can mimic Alfvénic turbulence through the quasirandom fluctuations of the wave fields. The amplitude of the magnetic perturbation can be quite large locally, and may thus produce energetic particles that escape from the constraint of the Alfvén waves and contribute to the fast particle population in astrophysical and space plasmas. As indicated in Fig.\[fig42\], the larger the magnetic perturbations, the more effectively the distribution functions are broadened. 
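The contrast just described between a spectrum with random phases and one with identical phases can be reproduced schematically by superposing Fourier modes. The mode count, total power, and domain length below are illustrative assumptions, not the parameters of the paper's simulation.

```python
import numpy as np

def alfven_field(z, n_modes=41, total_power=0.25, phases="random", seed=1):
    """Superpose n_modes wave modes, B(z) = sum_k dB_k * cos(k z + phi_k).

    Equal power per mode.  With random phases the profile is quasirandom
    (mimicking turbulence); with identical phases the modes add coherently
    and produce large localized peaks in the perturbation.
    """
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.arange(1, n_modes + 1) / z.max()   # harmonics of the box
    amp = np.sqrt(total_power / n_modes)                  # same amplitude per mode
    phi = rng.uniform(0, 2 * np.pi, n_modes) if phases == "random" else np.zeros(n_modes)
    return np.sum(amp * np.cos(np.outer(z, k) + phi), axis=1)

z = np.linspace(0.0, 1000.0, 4000)
b_random = alfven_field(z, phases="random")
b_coherent = alfven_field(z, phases="same")
# With identical phases all modes peak together at z = 0 (amplitude n * amp);
# with random phases the profile stays near its rms value.
```

The coherent-phase peak scales with the number of modes, which is the mechanism behind the locally large $\delta B$ (and the resulting energetic particles) discussed above.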
Refs.[@LS; @AA] suggest that small-scale reconnection events occur during solar flares, which can provide large-magnitude, spike-like magnetic field fluctuations. Additionally, the Cluster observation of surface waves in the ion jets from magnetotail reconnection also shows that $\delta B/|B|$ can be as high as 0.5 and occasionally even higher[@dai2]. Furthermore, the amplitude of the magnetic perturbation associated with Alfvén waves can in certain cases be even larger than the ambient magnetic field, as observed by the *Wind* satellite[@wind]. Figs.\[fig42\](b)&(c) also show that the distribution of test particles in the $v_{x}$-$z$ phase space and the wave field perturbations are in antiphase ($\pi$ phase difference), because the proton motion is parasitic to the Alfvén waves; this indicates an exchange between the particles’ kinetic energy and the magnetic energy. Fig.\[fig42\](b) therefore indicates that the pseudoheating is a consequence of the equilibrium of the MHD system. It is noteworthy that the parallel electric field and wave damping are not included in the test particle simulations of the current work. In self-consistent simulations, such as hybrid simulations (i.e., kinetic ions, fluid electrons), the phase correlation among the Fourier modes could affect, for instance, the heating of ions by the ponderomotive force resulting from envelope-modulated Alfvén waves[@r3] and the parallel heating of ions due to nonlinear Landau damping[@r4]. In Ref.[@r3], the ponderomotive force and beat interaction are identified as the most important nonlinear effects in proton heating by nonlinear field-aligned Alfvén waves in solar coronal holes. Interestingly, they found that the nonlinearity is particularly strong when the wave spectrum consists of counterpropagating modes of equal intensity, even if the intensity is relatively low. 
Moreover, from the hybrid approach, dissipation processes of Alfvénic turbulence with a broadband wave number spectrum can differ from those of monochromatic Alfvén waves, since the former is associated with density fluctuations, $|\delta n/n|$, and the resultant spatial modulation of $|B|^2$ due to compressive effects of ponderomotive forces[@r5; @r6]: right-hand polarized Alfvénic turbulence with such an envelope modulation can be dissipated through nonlinear Landau damping, while left-hand polarized Alfvénic turbulence with a broadband spectrum is preferentially dissipated by the modulational instability. On the other hand, low-frequency, monochromatic Alfvén waves are relatively stable against linear collisionless dissipation such as Landau damping and cyclotron damping; instead, nonlinear wave-wave interactions such as parametric instabilities are important for the dissipation of these waves. The decay instability is dominant for the dissipation of right-hand polarized, finite-amplitude, monochromatic Alfvén waves in low-$\beta$ plasmas, while left-hand polarized waves can also be dissipated via the modulational instability[@r4]. These wave damping mechanisms are closely related to the present work; in order to study the characteristics of the “acceleration” and “heating” processes in more detail, comprehensive and self-consistent studies are therefore necessary in the future. CONCLUSION ========== Ion pseudoheating by low-frequency Alfvén waves is investigated by comparing a monochromatic Alfvén wave with a wave spectrum. Both the analytic and the simulation results show that the $E\times B$ drift plays a principal role in this process and that the proton motion is parasitic to the Alfvén waves. This indicates that the pseudoheating is a consequence of the equilibrium of the MHD system. Our results are in good agreement with previous studies based on pitch-angle scattering. 
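The central quantity in this conclusion is the guiding-centre drift $\mathbf{v}_E = (\mathbf{E}\times\mathbf{B})/|\mathbf{B}|^2$. A minimal sketch, with illustrative field values in normalized units (not taken from the paper's simulation):

```python
import numpy as np

def exb_drift(e_field, b_field):
    """E x B drift velocity, v_E = (E x B) / |B|^2.

    The drift is perpendicular to B and vanishes with the wave fields,
    which is the 'reversibility' of pseudoheating: no wave fields,
    no drift, no apparent temperature.
    """
    b2 = np.dot(b_field, b_field)
    return np.cross(e_field, b_field) / b2

# Illustrative transverse Alfven wave: background B0 along z, wave field
# delta-B along y, ideal-MHD electric field E = -v_A x delta-B.
b0 = np.array([0.0, 0.0, 1.0])
db = np.array([0.0, 0.2, 0.0])
v_a = np.array([0.0, 0.0, 1.0])          # Alfven velocity along B0
e_wave = -np.cross(v_a, db)              # ideal-MHD electric field
v_e = exb_drift(e_wave, b0 + db)
```

Setting the wave fields to zero makes `v_e` vanish identically, which is the "reversible" character of the process emphasized in the text.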
More importantly, this work provides a simple understanding of the reversibility of the process from the $E\times B$ drift point of view: if the wave magnetic and electric fields disappear, there is no drift velocity $v_{E}$ and therefore no pseudoheating. We showed that wave spectra contribute to the broadening of the Maxwellian distribution during the pseudoheating, and it is therefore difficult to exclude the apparent temperature $T_p$ from observations, owing to the low spatial resolution of the instruments. It is of particular interest to note that the Maxwellian shape is more likely to be maintained during pseudoheating under a wave spectrum than under a monochromatic Alfvén wave. We can therefore conclude that the kinetic behavior of ions under a monochromatic wave and under a wave spectrum is fundamentally different. Moreover, we illustrated that the $E \times B$ drift can produce energetic particles under a spectrum of Alfvén waves, which may contribute to the fast particle population in astrophysical and space plasmas. [**Acknowledgments:**]{} C.F. Dong appreciates many fruitful discussions with Prof. C.B. Wang and Prof. Y.Y. Lau. The authors would like to thank the anonymous referees for their helpful comments and suggestions. [99]{} L. Chen, Z. H. Lin, and R. White, Phys. Plasmas **8**, 4713 (2001). N. Singh, G. Khazanov and A. Mukhter, J. Geophys. Res. **112**, A06210, doi: 10.1029/2006JA011933, (2007). B. Wang, C. B. Wang, P. H. Yoon, and C. S. Wu, Geophys. Res. Lett., **38**, L10103 (2011). D. Tsiklauri, Phys. Plasmas **18**, 092903 (2011). L. Dai, Phys. Rev. Lett. **102**, 245003 (2009). B. Chandran, B. Li, B. Rogers, E. Quataert, and K. Germaschewski, Astrophys. J. **720**, 503 (2010). C. F. Dong, and C. S. Paty, Phys. Plasmas **18**, 030702 (2011). D. Tsiklauri, and R. Pechhacker, Phys. Plasmas **18**, 042901 (2011). J. E. Leake, T. D. Arber, and M. L. Khodachenko, Astron. Astrophys. **442**, 1091 (2005). F. Scarf, Space Sci. Rev. **11**, 234 (1970). L. F. 
Burlaga, Space Sci. Rev. **12**, 600 (1971). L. Davis, Jr., in *Solar-Terrestrial Physics*, edited by E. R. Dyer, J.G. Roederer, and A. J. Hundhausen (Reidel, Dordrecht, 1972). M. Jin et al., Astrophys. J. **745**, 6 (2012). C. B. Wang, C. S. Wu, and P.H. Yoon, Phys. Rev. Lett. **96**, 125001 (2006). C. S. Wu, and P.H. Yoon, Phys. Rev. Lett. **99**, 075001 (2007). S. Bourouaine, E. Marsch, and C. Vocks, Astrophys. Lett. **684**, L119 (2008). C. B. Wang, and C. S. Wu, Phys. Plasmas **16**, 020703 (2009). C. S. Wu, P. H. Yoon, and C. B. Wang, Phys. Plasmas **16**, 054503 (2009). P. H. Yoon, C. B. Wang, and C. S. Wu, Phys. Plasmas **16**, 102102 (2009). B. Wang, and C. B. Wang, Phys. Plasmas **16**, 082902 (2009). Y. Nariyuki, Phys. Plasmas **19**, 084504 (2012). D. Verscharen, and E. Marsch, Ann. Geophys., **29**, 909 (2011). W. Baumjohann, and R. A. Treumann, *Basic Space Plasma Physics* (Imperial College Press, 1997, pp. 127) Q. M. Lu, X. Gao, and X. Li, Phys. Plasmas **18**, 084703 (2011). C. Dong and C. S. Paty, Phys. Plasmas **18**, 084704 (2011). Y. Kuramitsu, and T. Hada, Geophys. Res. Lett., **27**, 629 (2000). F. Malara, et al, Phys. Plasmas, **7**, 2866 (2000); R. H.Cohen, and R. M. Kulsrud, Phys. Fluids, **17**, 2215 (1974). S. A. Markovskii, B. J. Vasquez, and J. V. Hollweg, Astrophys. J. **695**, 1413 (2009). Y. Nariyuki, T. Hada, and K. Tsubouchi, Phys. Plasmas **17**, 072301 (2010). Y. Nariyuki, T. Hada, and K. Tsubouchi, Phys. Plasmas **15**, 114502 (2008). F. Valentini, P. Veltri, F. Califano, and A. Mangeney, Phys. Rev. Lett. **101**, 025006 (2008). M. Jin, and M. D. Ding, Astron. Astrophys., **471**, 705 (2007). K. Shibata, and T. Magara, Living Rev. Solar Phys. 8 (2011), 6. L. Dai, J. R. Wygant, C. Cattell, J. Dombeck, S. Thaller, C. Mouikis, A. Balogh, and H. Rème, J. Geophys. Res. **116**, A12227, doi:10.1029/2011JA017004, (2011). X. Wang, J. S. He, C. Y. Tu, E. Marsch, L. Zhang, and J. K. Chao, Astrophys. J. **746**, 147 (2012). 
[^1]: dcfy@umich.edu [^2]: singhn@uah.edu
--- abstract: 'Camera and [lidar]{} are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information, offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, PointRCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the [bird’s-eye view]{} detection task. In ablation, we study how the effects of Painting depend on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.' author: - | Sourabh Vora Alex H. Lang Bassam Helou Oscar Beijbom\ nuTonomy: an Aptiv Company\ [{sourabh, alex, bassam, oscar}@nutonomy.com]{} bibliography: - '../references.bib' title: 'PointPainting: Sequential Fusion for 3D Object Detection' ---
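The sequential "painting" step described in the abstract — project each lidar point into the image, look up the segmentation scores at that pixel, and append them to the point — can be sketched as follows. The projection-matrix shape, image size, and class count are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def paint_points(points, seg_scores, cam_from_lidar):
    """Append per-pixel class scores to lidar points ('painting').

    points         : (N, 3) lidar xyz in the lidar frame
    seg_scores     : (H, W, C) softmax output of an image segmentation net
    cam_from_lidar : (3, 4) matrix projecting homogeneous lidar points
                     to image pixel coordinates (assumed given)
    Returns (M, 3 + C) painted points for those landing inside the image.
    """
    h, w, c = seg_scores.shape
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    uvw = homo @ cam_from_lidar.T                  # (N, 3) homogeneous pixels
    in_front = uvw[:, 2] > 1e-6                    # drop points behind the camera
    uv = uvw[in_front, :2] / uvw[in_front, 2:3]    # perspective divide
    cols = uv[:, 0].astype(int)
    rows = uv[:, 1].astype(int)
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    kept = points[in_front][valid]
    scores = seg_scores[rows[valid], cols[valid]]  # (M, C) per-point class scores
    return np.hstack([kept, scores])

# Toy setup: unit-focal-length pinhole camera, 5x5 image, 3 classes.
P = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
seg = np.zeros((5, 5, 3))
seg[2, 2] = [0.1, 0.2, 0.7]
pts = np.array([[0.0, 0.0, 1.0],    # projects to pixel (2, 2)
                [0.0, 0.0, -1.0],   # behind the camera, dropped
                [10.0, 0.0, 1.0]])  # projects outside the image, dropped
painted = paint_points(pts, seg, P)
```

The painted array can then be consumed by any lidar-only detector, which is the sequential-fusion idea the abstract describes.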
--- author: - | Frank Wolter\ University of Liverpool - | Michael Zakharyaschev\ Birkbeck College London title: Undecidability of the unification and admissibility problems for modal and description logics --- Author’s address: F. Wolter, Department of Computer Science, University of Liverpool, Liverpool L69 7ZF, U.K., [frank@csc.liv.ac.uk]{}. M. Zakharyaschev, School of Computer Science and Information Systems, Birkbeck College, London WC1E 7HX, U.K., [michael@dcs.bbk.ac.uk]{}. Introduction ============ The *unification* (or *substitution*) *problem* for a propositional logic $L$ can be formulated as follows: given a formula $\varphi$ in the language of $L$, decide whether it is *unifiable* in $L$ in the sense that there exists a uniform substitution ${\mbox{\boldmath $s$}}$ for the variables of $\varphi$ such that ${\mbox{\boldmath $s$}}(\varphi)$ is provable in $L$. For normal modal logics, this problem is equivalent to the standard unification problem modulo equational theories: in this case the equational theory consists of any complete set of equations axiomatising the variety of Boolean algebras with operators together with additional equations corresponding to the axioms of $L$. A closely related algorithmic problem for $L$ is the *admissibility problem* for inference rules: given an inference rule $\varphi_1,\dots,\varphi_n /\varphi$, decide whether it is [*admissible*]{} in $L$, that is, whether for every substitution ${\mbox{\boldmath $s$}}$, we have $L \vdash {\mbox{\boldmath $s$}}(\varphi)$ whenever $L\vdash{\mbox{\boldmath $s$}}(\varphi_1)$, …, $L\vdash {\mbox{\boldmath $s$}}(\varphi_n)$. It should be clear that if the admissibility problem for $L$ is decidable, then the unification problem for $L$ is decidable as well. Indeed, the rule $\varphi/\bot$ is not admissible in $L$ iff there is a substitution ${\mbox{\boldmath $s$}}$ for which $L\vdash {\mbox{\boldmath $s$}}(\varphi)$. It follows from the results of V. 
Rybakov (see [@Rybakovbook97] and references therein; see also ) that the unification and admissibility problems are decidable for propositional intuitionistic logic and such standard modal logics as [K4]{}, [GL]{}, [S4]{}, [S4.3]{}. However, nearly nothing has been known about the decidability status of the unification and admissibility problems for other important modal logics such as the (‘non-transitive’) basic logic [K]{}, various multi-modal, hybrid and description logics. In fact, only one—rather artificial—example of a decidable *uni*modal logic for which the admissibility problem is undecidable has been found [@Chagrov92b] (see also ). *The first main result of this paper shows that for the standard modal logics [K]{} and [K4]{} [(]{}and, in fact, all logics between them[)]{} extended with the universal modality the unification problem and, therefore, the admissibility problem are undecidable.* The universal modality, first investigated in , is regarded nowadays as a standard constructor in modal logic; see, e.g., [@MLHandbook]. Basically, the universal box is an [S5]{}-box whose accessibility relation contains the accessibility relations for all the other modal operators of the logic. The undecidability result formulated above also applies to those logics where the universal modality is definable, notably to propositional dynamic logic with the converse; see, e.g., [@Hareletal00]. The unification and admissibility problems for [K]{} itself still remain open. Observe that [K4]{} is an example of a logic for which the unification and admissibility problems are decidable, but the addition of the (usually ‘harmless’) universal modality makes them undecidable (although [K4]{} with the universal modality itself is still decidable, in fact, [PSpace]{}-complete). Note also that for ‘reflexive’ modal logics with the universal modality such as [S4]{} the unification problem is trivially decidable. 
*The second result of this paper shows that the unification and admissibility problems are undecidable for multimodal [K]{} and [K4]{} [(]{}with at least two modal operators[)]{} extended with nominals.* Nominals, that is, additional variables that denote singleton sets, are one of the basic ingredients of hybrid logics; see, e.g., [@Arecesten] and references therein. As follows from our second result, for most hybrid logics the unification and admissibility problems are undecidable. A particularly interesting consequence of this result is in description logic. Motivated by applications in the design and maintenance of knowledge bases, Baader and Narendran and Baader and Kuesters [-@BaaderKuesters-LPAR] identify the unification problem for concept descriptions as an important reasoning service. In its simplest formulation, this problem is equivalent to the unification problem for modal logics. Baader and Narendran and Baader and Kuesters [-@BaaderKuesters-LPAR] develop decision procedures for certain sub-Boolean description logics, leaving the study of unification for Boolean description logics as an open research problem. It follows from our results that unification is undecidable for Boolean description logics with nominals such as $\mathcal{ALCO}$, $\mathcal{ALCQO}$, $\mathcal{ALCQIO}$, and $\mathcal{SHIQO}$. Moreover, if a Boolean description logic has transitive roles, inverse roles and role hierarchies, then a role box can be used to define a universal role. In this case our results can be used to show the undecidability of unification relative to role boxes. This applies, for example, to the logics $\mathcal{SHI}$ and $\mathcal{SHIQ}$. These undecidability results cover almost all Boolean description logics used in applications, in particular the description logic underlying [OWL-DL]{}. However, the unification problem for some basic Boolean description logics such as $\mathcal{ALC}$ and $\mathcal{ALCQI}$ remains open. The plan of this paper is as follows. 
We start by introducing the syntax and semantics of normal modal logics with the universal modality, in particular ${{\sf }K4}_{u}$ and ${{\sf }K}_{u}$. Then we prove, using an encoding of Minsky machines, the undecidability of the unification and admissibility problems for all logics between ${{\sf }K4}_{u}$ and ${{\sf }K}_{u}$. We also briefly discuss the formulation of this result in terms of equational theories. Then we introduce modal logics with nominals and show how to modify the proof in order to establish the undecidability of unification and admissibility for ${{\sf }K}$ and ${{\sf }K4}$ with at least two modal operators and nominals. We close with a brief discussion of consequences for description logics with nominals. Unification in modal logics with the universal modality {#universal} ======================================================= Let ${\cal L}$ be the propositional language with an infinite set $p_{0},p_{1},\ldots$ of propositional variables, the Boolean connectives $\wedge$ and $\neg$ (and their derivatives such as $\vee$, $\rightarrow$, and $\bot$), and two unary modal operators $\Box$ and $\forall$ (with their duals $\Diamond$ and $\exists$). A *normal modal logic* $L$ with the *universal modality* $\forall$ is any set of ${\cal L}$-formulas that contains all propositional tautologies, the axioms $$\begin{aligned} & \Box (p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q), \qquad \forall (p \rightarrow q) \rightarrow (\forall p \rightarrow \forall q),\\ & \forall p \rightarrow p, \qquad \forall p \rightarrow \forall\forall p, \qquad p \rightarrow \forall \exists p, \qquad \forall p \rightarrow \Box p,\end{aligned}$$ and is closed under *modus ponens*, the necessitation rules $\varphi/\Box \varphi$ and $\varphi/\forall \varphi$, and uniform substitution. ${{\sf }K}_{u}$ is the smallest normal modal logic with the universal modality. 
${{\sf }K4}_{u}$ is the smallest normal modal logic with the universal modality that contains the extra axiom $\Box p \rightarrow \Box\Box p$. ${{\sf }K}_{u}$ and ${{\sf }K4}_{u}$ as well as many other normal modal logics with the universal modality are determined by relational structures. A *frame* for ${\cal L}$ is a directed graph ${\mathfrak F}=(W,R)$, that is, $R\subseteq W\times W$. A *model* for ${\cal L}$ is a pair ${\mathfrak M}= ({\mathfrak F},{\mathfrak V})$ where ${\mathfrak F}$ is a frame and ${\mathfrak V}$ a *valuation* mapping the set of propositional variables to $2^{W}$. The *truth-relation* $(\mathfrak M,x)\models\varphi$ between points $x\in W$ of $\mathfrak M$ and $\mathcal{L}$-formulas $\varphi$ is defined inductively as follows: - $(\mathfrak M,x) \models p_{i}$ iff $x\in {\mathfrak V}(p_{i})$, - $(\mathfrak M,x) \models \neg \psi$ iff $(\mathfrak M,x) \not\models \psi$, - $(\mathfrak M,x) \models \psi \land \chi$ iff $(\mathfrak M,x) \models \psi$ and $(\mathfrak M,x) \models \chi$, - $(\mathfrak M,x) \models \Box\psi$ iff $(\mathfrak M,y) \models \psi$ for all $y\in W$ with $xRy$, - $(\mathfrak M,x) \models \forall \varphi$ iff $(\mathfrak M,y) \models \varphi$ for all $y\in W$. Instead of $(\mathfrak M,x)\models\varphi$ we write $x\models\varphi$ if $\mathfrak M$ is clear from the context. A formula $\varphi$ is *valid* in a frame ${\mathfrak F}$, ${\mathfrak F}\models \varphi$ in symbols, if $\varphi$ is true at every point of every model based on ${\mathfrak F}$. The following facts are well known (see, for example, [@Arecesetal00]): ${{\sf }K}_{u}$ is the set of formulas that are valid in all frames. ${{\sf }K4}_{u}$ is the set of formulas that are valid in all transitive frames. The satisfiability problem is [ExpTime]{}-complete for ${{\sf }K}_{u}$, and [PSpace]{}-complete for ${{\sf }K4}_{u}$. We now formulate the unification problem for normal modal logics with the universal modality. 
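Before turning to the unification problem, the truth-relation just defined can be sketched as a small recursive evaluator over finite frames. The tuple encoding of formulas is an illustrative convention, not anything from the paper.

```python
def holds(model, x, phi):
    """Truth-relation for formulas over a finite frame (W, R).

    model = (W, R, V): W a set of points, R a set of pairs (the
    accessibility relation), V a map from variable names to sets of
    points.  Formulas are nested tuples:
    ('var', p), ('not', a), ('and', a, b), ('box', a), ('all', a).
    """
    W, R, V = model
    op = phi[0]
    if op == 'var':
        return x in V.get(phi[1], set())
    if op == 'not':
        return not holds(model, x, phi[1])
    if op == 'and':
        return holds(model, x, phi[1]) and holds(model, x, phi[2])
    if op == 'box':   # true at x iff true at every R-successor of x
        return all(holds(model, y, phi[1]) for (w, y) in R if w == x)
    if op == 'all':   # universal modality: true at x iff true everywhere
        return all(holds(model, y, phi[1]) for y in W)
    raise ValueError(op)

# Two-point frame 0 -> 1 with a dead end at 1.  The formula Box(bot)
# holds exactly at the dead end, as with beta = Box bot used below.
bot = ('and', ('var', 'p'), ('not', ('var', 'p')))
model = ({0, 1}, {(0, 1)}, {'p': {0}})
```

For instance, `holds(model, 1, ('box', bot))` is true because point 1 has no successors, while at point 0 the same formula fails via the successor 1.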
The *unification problem* for a normal modal logic $L$ with the universal modality is to decide, given a formula $\varphi$, whether there exists a substitution ${\mbox{\boldmath $s$}}$ such that ${\mbox{\boldmath $s$}}(\varphi)\in L$. \[main1\] The unification problem for any normal modal logic between ${{\sf }K}_{u}$ and ${{\sf }K4}_u$ is undecidable. The proof proceeds by reduction of some undecidable configuration problem for Minsky machines. We remind the reader that a [*Minsky machine*]{} (or a register machine with two registers; see, e.g., [@Minsky61; @Ebbinghaus1994]) is a finite set (program) of instructions for transforming triples $\left\langle s,m,n\right\rangle$ of natural numbers, called [*configurations*]{}. The intended meaning of the current configuration $\left\langle s,m,n\right\rangle$ is as follows: $s$ is the number (label) of the current machine state and $m$, $n$ represent the current state of information. Each instruction has one of the four possible forms: $$\begin{aligned} & s\rightarrow\left\langle t,1,0\right\rangle, & & s\rightarrow\left\langle t,-1,0\right\rangle( \left\langle t',0,0\right\rangle),\\ & s\rightarrow\left\langle t,0,1\right\rangle ,&& s\rightarrow\left\langle t,0,-1\right\rangle( \left\langle t',0,0\right\rangle).\end{aligned}$$ The last of them, for instance, means: transform $\left\langle s,m,n\right\rangle$ into $\left\langle t,m,n-1\right\rangle$ if $n>0$ and into $\left\langle t',m,n\right\rangle$ if $n=0$. We assume that Minsky machines are *deterministic*, that is, they can have at most one instruction with a given $s$ in the left-hand side. For a Minsky machine ${\mbox{\boldmath $P$}}$, we write ${\mbox{\boldmath $P$}}:\left\langle s,m,n\right\rangle\rightarrow \left\langle t,k,l\right\rangle$ if starting with $\left\langle s,m,n\right\rangle$ and applying the instructions in ${\mbox{\boldmath $P$}}$, in finitely many steps (possibly, in 0 steps) we can reach $\left\langle t,k,l\right\rangle$. 
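A deterministic two-register machine of the kind just described can be simulated directly; the dictionary encoding of programs below is an illustrative convention.

```python
def run_minsky(program, config, max_steps=10_000):
    """Run a deterministic Minsky machine; return the visited configurations.

    program maps a state s to one of:
      ('inc', reg, t)      -- s -> <t, +1 on register reg>
      ('dec', reg, t, t2)  -- s -> <t, -1 on reg> if reg > 0, else <t2, unchanged>
    where reg is 0 (first register) or 1 (second).  The machine halts
    when the current state has no instruction.
    """
    s, m, n = config
    seen = [(s, m, n)]
    for _ in range(max_steps):
        if s not in program:
            break                       # no instruction for s: halt
        ins = program[s]
        regs = [m, n]
        if ins[0] == 'inc':
            regs[ins[1]] += 1
            s = ins[2]
        else:                           # 'dec'
            if regs[ins[1]] > 0:
                regs[ins[1]] -= 1
                s = ins[2]
            else:
                s = ins[3]
        m, n = regs
        seen.append((s, m, n))
    return seen

# Example: state 1 moves the first register into the second, one unit
# per cycle, and halts in state 0 when the first register is empty.
prog = {1: ('dec', 0, 2, 0), 2: ('inc', 1, 1)}
trace = run_minsky(prog, (1, 2, 0))
```

Running from $\langle 1,2,0\rangle$ yields the trace $\langle 1,2,0\rangle \to \langle 2,1,0\rangle \to \langle 1,1,1\rangle \to \langle 2,0,1\rangle \to \langle 1,0,2\rangle \to \langle 0,0,2\rangle$, and the reachability relation ${\mbox{\boldmath $P$}}:\mathfrak a \to \mathfrak b$ used below is exactly membership in such a trace.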
We will use the well known fact (see, e.g., ) that there exist a Minsky program ${\mbox{\boldmath $P$}}$ and a configuration ${\mathfrak a}=\left\langle s,m,n\right\rangle$ such that no algorithm can decide, given a configuration ${\mathfrak b}$, whether ${\mbox{\boldmath $P$}}:{\mathfrak a}\rightarrow{\mathfrak b}$. Fix such a pair ${\mbox{\boldmath $P$}}$ and $\mathfrak a=\left\langle s,m,n\right\rangle$, and consider the transitive frame ${\mathfrak F}=(W,R)$ shown in Fig. \[F5.2.1\], where the points $e(t,k,l)$ represent configurations $\left\langle t,k,l\right\rangle$ such that ${\mbox{\boldmath $P$}}:\left\langle s,m,n\right\rangle\rightarrow \left\langle t,k,l\right\rangle$, $e(t,k,l)$ ‘sees’ the points $a^{0}_t$, $a^{1}_k$, $a^{2}_l$ representing the components of $\left\langle t,k,l\right\rangle$, and $a$ is the only reflexive point of $\mathfrak F$. More precisely, $$\begin{gathered} W ~=~ \{ a,b,g,g_1,g_2,d,d_1,d_2 \} \cup \{ a^i_j \mid i\le 2,\ j < \omega \} \cup {} \\ \{ e(t,k,l) \mid {\mbox{\boldmath $P$}}:\left\langle s,m,n\right\rangle\rightarrow \left\langle t,k,l\right\rangle \}\end{gathered}$$ and $R$ is the transitive closure of the following relation: $$\begin{gathered} \{ (a,a), (g,a), (g,b), (d,b), (g_1,g), (g_2,g_1), (d_1,d), (d_2,d_1),\\ (a^0_0,g), (a^0_0,d), (a^1_0,g_1), (a^1_0,d_1), (a^2_0,g_2), (a^2_0,d_2)\} \cup {} \phantom{MMMMMMM} \\ \{ (a^i_{j+1},a^i_j) \mid i\le 2,\ j<\omega \} \cup {} \phantom{MMMMMMMMM} \\ \{ \big( e(t,k,l), a^0_t \big ), \big( e(t,k,l), a^1_k \big ), \big( e(t,k,l), a^2_l \big ) \mid e(t,k,l)\in W \}.\end{gathered}$$ This frame and the formulas below describing it were introduced by A. Chagrov in where the reader can find further references. 
[Fig. \[F5.2.1\] (picture environment omitted): the transitive frame $\mathfrak F$, with the reflexive point $a$, the points $b$, $g$, $g_1$, $g_2$, $d$, $d_1$, $d_2$, the three descending chains $a^{i}_{0}, a^{i}_{1}, \ldots$ ($i \le 2$), and the points $e(t,k,l)$ below them.] The following variable free formulas characterise the points in ${\mathfrak F}$ in the sense that each of these formulas, denoted by Greek letters with subscripts and/or 
superscripts, is true in ${\mathfrak F}$ precisely at the point denoted by the corresponding Roman letter with the same subscript and/or superscript (and nowhere else): $$\begin{aligned} &\alpha~=~\Diamond\top\wedge\Box\Diamond\top,\hspace*{3cm} \beta~=~\Box\bot,\\ &\gamma ~=~ \Diamond \alpha\wedge \Diamond \beta\wedge\neg\Diamond^2\beta,\hspace*{2.1cm} \delta ~=~\neg\gamma\wedge\Diamond\beta\wedge\neg\Diamond^2\beta,\\ &\delta_{1}~=~\Diamond \delta\wedge \neg\Diamond^2\delta,\hspace*{3cm} \delta_{2}~=~\Diamond \delta_{1}\wedge \neg\Diamond^2\delta_{1},\\ &\gamma_1~=~\Diamond\gamma\wedge\neg\Diamond^2\gamma\wedge\neg\Diamond\delta,\qquad \hspace*{1.1cm}\gamma_2~=~\Diamond\gamma_1\wedge\neg\Diamond^2\gamma_1\wedge\neg\Diamond \delta,\\ &\alpha_{0}^{0} ~=~\Diamond\gamma\wedge\Diamond\delta\wedge\neg\Diamond^2\gamma \wedge\neg\Diamond^2\delta,\\ &\alpha_{0}^{1} ~=~\Diamond\gamma_{1}\wedge\Diamond\delta_{1}\wedge\neg\Diamond ^2\gamma_{1}\wedge\neg\Diamond^2\delta_{1},\\ &\alpha_{0}^{2} ~=~\Diamond\gamma_{2}\wedge\Diamond\delta_{2}\wedge\neg\Diamond ^2\gamma_{2}\wedge\neg\Diamond^2\delta_{2},\\ &\alpha_{j+1}^{i} ~=~ \Diamond \alpha_{0}^{i} \wedge \Diamond\alpha_{j}^{i}\wedge\neg\Diamond^{2}\alpha_{j}^{i} \wedge\bigwedge_{i\neq k}\neg\Diamond \alpha^k_0,\end{aligned}$$ where $i\in \{ 0,1,2\}$, $j\ge 0$. It is worth emphasising that the formulas $$\label{property} \alpha_{j}^{i} \rightarrow \neg \Diamond \alpha^{i}_{j} \quad \text{and} \quad \alpha_{j+1}^{i} \rightarrow \Diamond \alpha^{i}_{0} \wedge \bigwedge_{k\not=i} \neg\Diamond \alpha^{k}_{0}$$ are valid in *all frames* for all $i\in \{ 0,1,2\}$, $j\ge 0$. We will use this property in what follows. 
The formulas characterising the points $e(t,k,l)$ are denoted by $\varepsilon(t,\alpha^{1}_{k},\alpha^{2}_{l})$ and defined as follows, where $\varphi$ and $\psi$ are arbitrary formulas, $$\varepsilon(t,\varphi,\psi ) ~=~ \Diamond \alpha_{t}^{0}\wedge\neg\Diamond \alpha_{t+1}^{0}\wedge\Diamond \varphi\wedge\neg\Diamond^2\varphi\wedge\Diamond \psi\wedge\neg\Diamond^2\psi.$$ We also require formulas characterising not only fixed but arbitrary configurations: $$\begin{aligned} \pi_{1} &~=~(\Diamond \alpha_{0}^{1}\vee \alpha_{0}^{1})\wedge\neg\Diamond \alpha_{0}^{0}\wedge\neg\Diamond \alpha_{0}^{2}\wedge p_{1}\wedge\neg\Diamond p_1,\\ \pi_{2} &~=~\Diamond \alpha_{0}^{1}\wedge\neg\Diamond \alpha_{0}^{0}\wedge\neg\Diamond \alpha_{0}^{2}\wedge\Diamond p_{1}\wedge\neg\Diamond^2p_{1},\\ \tau_{1} &~=~(\Diamond \alpha_{0}^{2}\vee \alpha_{0}^{2})\wedge\neg\Diamond \alpha_{0}^{0}\wedge \neg\Diamond \alpha_{0}^{1}\wedge p_{2}\wedge\neg\Diamond p_{2},\\ \tau_{2} &~=~\Diamond \alpha_{0}^{2}\wedge\neg\Diamond \alpha_{0}^{0}\wedge\neg\Diamond \alpha_{0}^{1}\wedge\Diamond p_{2}\wedge\neg\Diamond^2p_{2}.\end{aligned}$$ Observe that in $\mathfrak{F}$, under any valuation, $\pi_{1}$ can be true in at most one point, and this point has to be $a^{1}_{j}$, for some $j\geq 0$. Similarly, $\pi_{2}$ can only be true in at most one point, and this point has to be of the form $a^{1}_{j}$, for some $j>0$. The same applies to $\tau_{1}$ and $\tau_{2}$, but with $a^{1}_{j}$ replaced by $a^{2}_{j}$. Now we are fully equipped to simulate the behaviour of ${\mbox{\boldmath $P$}}$ on $\mathfrak a$ by means of modal formulas with the universal modalities. 
With each instruction $I$ in ${\mbox{\boldmath $P$}}$ we associate a formula $AxI$ by taking: $$AxI~=~\exists \varepsilon(t, \pi_{1},\tau_{1})\to \exists \varepsilon(t',\pi_{2},\tau_{1})$$ if $I$ is of the form $t\rightarrow\left\langle t',1,0\right\rangle$, $$AxI ~=~ \exists\varepsilon(t,\pi_{1},\tau_{1})\rightarrow \exists\varepsilon(t',\pi_{1},\tau_{2})$$ if $I$ is $t\rightarrow\left\langle t',0,1\right\rangle$, $$\begin{aligned} AxI ~=~ \big(\exists\varepsilon(t,\pi_{2},\tau_{1})\rightarrow \exists\varepsilon(t',\pi_{1},\tau_{1})\big)\wedge \big(\exists\varepsilon(t,\alpha^{1}_{0},\tau_{1}) \rightarrow \exists\varepsilon(t'',\alpha^{1}_{0},\tau_{1})\big)\end{aligned}$$ if $I$ is $t\rightarrow\left\langle t',-1,0\right\rangle (\left\langle t'',0,0\right\rangle)$, and finally $$\begin{aligned} AxI ~=~ \big(\exists\varepsilon(t,\pi_{1},\tau_{2})\rightarrow \exists\varepsilon(t',\pi_{1},\tau_{1})\big)\wedge \big(\exists\varepsilon(t,\pi_{1},\alpha^{2}_{0})\rightarrow \exists\varepsilon(t'',\pi_{1},\alpha^{2}_{0})\big)\end{aligned}$$ if $I$ is $t\rightarrow\left\langle t',0,-1\right\rangle (\left\langle t'',0,0\right\rangle)$. The formula simulating ${\mbox{\boldmath $P$}}$ as a whole is $$AxP ~=~ \bigwedge_{I\in\mbox{\scriptsize ${\mbox{\boldmath $P$}}$}}AxI.$$ One can readily check that $\mathfrak F \models AxP$. Now, for each $\mathfrak b = \langle t,k,l\rangle$ consider the formula $$\psi(\mathfrak b) ~=~ \big ( AxP \land \exists\varepsilon (s,\alpha^1_m,\alpha^2_n) \big ) \to \exists\varepsilon (t,\alpha^1_k,\alpha^2_l).$$ \[main-lemma\] Let ${{\sf }K}_{u} \subseteq L \subseteq {{\sf }K4}_u$. Then ${\mbox{\boldmath $P$}}:{\mathfrak a}\rightarrow{\mathfrak b}$ iff $\psi(\mathfrak b)$ is unifiable in $L$. $(\Leftarrow)$ Suppose that ${\mbox{\boldmath $P$}}:{\mathfrak a} \not\to{\mathfrak b}$. 
Then, by the construction of $\mathfrak F$, we have $$\mathfrak F \models AxP \land \exists\varepsilon (s,\alpha^1_m,\alpha^2_n) \quad \text{and}\quad \mathfrak F \not\models \exists\varepsilon (t,\alpha^1_k,\alpha^2_l).$$ As $\exists\varepsilon (t,\alpha^1_k,\alpha^2_l)$ is variable free, all substitution instances of $\psi(\mathfrak b)$ are refuted in $\mathfrak F$, and so $\psi(\mathfrak b)$ is not unifiable in any $L\subseteq {{\sf }K4}_u$. $(\Rightarrow)$ Conversely, suppose that ${\mbox{\boldmath $P$}}:{\mathfrak a} \to{\mathfrak b}$. Our aim is to find a substitution ${\mbox{\boldmath $s$}}$ for the variables $p_1$ and $p_2$ such that ${\mbox{\boldmath $s$}}(\psi(\mathfrak b)) \in {{\sf }K}_u$. Let $${\mbox{\boldmath $P$}}: \mathfrak a = \langle t_0,k_0,l_0\rangle \stackrel{I_1}\to \langle t_1,k_1,l_1\rangle \stackrel{I_2}\to \dots \stackrel{I_\ell}\to \langle t_\ell,k_\ell,l_\ell\rangle = \mathfrak b$$ be the computation of ${\mbox{\boldmath $P$}}$ starting with $\mathfrak a$ and ending with $\mathfrak b$, where $I_j$ is the instruction from ${\mbox{\boldmath $P$}}$ that is used to transform $\langle t_{j-1},k_{j-1},l_{j-1}\rangle$ into $\langle t_j,k_j,l_j\rangle$. Consider the formula $$\label{defect} {\sf defect}_{i} ~=~ \exists \varepsilon (t_0, \alpha^1_{k_0}, \alpha^2_{l_0}) \land \dots \land \exists \varepsilon (t_i, \alpha^1_{k_i}, \alpha^2_{l_i}) \land \neg \exists \varepsilon (t_{i+1}, \alpha^1_{k_{i+1}}, \alpha^2_{l_{i+1}})$$ which ‘says’ that the computation is simulated properly up to the $i$th step, but there is no point representing the $i+1$st configuration. 
Define the substitution ${\mbox{\boldmath $s$}}$ we need by taking $$\label{substitution} {\mbox{\boldmath $s$}}(p_1) ~=~ \bigvee_{i=0}^{\ell-1} {\sf defect}_{i} \land \overline{\alpha}^1_{k_i}, \qquad {\mbox{\boldmath $s$}}(p_2) ~=~ \bigvee_{i=0}^{\ell-1} {\sf defect}_{i} \land \overline{\alpha}^2_{l_i},$$ where $$\overline{\alpha}^{1}_{k_{i}} ~=~ \begin{cases} \alpha^{1}_{k_{i}} & \text{if either}\ k_{i} = 0\ \text{or}\ I_{i+1} \ne t_i \to \langle t_{i+1},-1,0\rangle,\\ \alpha^{1}_{k_{i}-1} & \text{if} \ k_{i} \ne 0 \ \text{and}\ I_{i+1} = t_i \to \langle t_{i+1},-1,0\rangle, \end{cases}$$ and $$\overline{\alpha}^{2}_{l_{i}} ~=~ \begin{cases} \alpha^{2}_{l_{i}} & \text{if either}\ l_{i} = 0\ \text{or}\ I_{i+1} \ne t_i \to \langle t_{i+1},0,-1\rangle,\\ \alpha^{2}_{l_{i}-1} & \text{if} \ l_{i} \ne 0 \ \text{and}\ I_{i+1} = t_i \to \langle t_{i+1},0,-1\rangle. \end{cases}$$ We show now that we have $\mathfrak G \models {\mbox{\boldmath $s$}}(\psi(\mathfrak b))$ for *all* frames $\mathfrak G$, which clearly means that ${\mbox{\boldmath $s$}}(\psi ({\mathfrak b})) \in {{\sf }K}_u$. Suppose $\mathfrak G = (W,R)$ is given. As all formulas considered below, in particular ${\mbox{\boldmath $s$}}(\psi ({\mathfrak b}))$, are variable free, we can write $x \models \psi$ to say that $\psi$ is true at $x$ in some/all models based on ${\mathfrak G}$. Moreover, for any Boolean combination $\psi$ of such formulas starting with $\exists$, we have $x\models \psi$ iff $x'\models \psi$ for any $x,x'\in W$. Hence, ${\mathfrak G}\not\models \psi$ means that $x\not\models\psi$ for all $x\in W$. Let us now proceed with the proof. Two cases are possible. *Case* 1: $\mathfrak G \models \neg \exists\varepsilon (t_0,\alpha^1_{k_0},\alpha^2_{l_0}) \lor \exists\varepsilon (t_\ell,\alpha^1_{k_\ell},\alpha^2_{l_\ell})$. Then clearly $\mathfrak G \models {\mbox{\boldmath $s$}}(\psi(\mathfrak b))$. 
*Case* 2: $\mathfrak G \models \exists\varepsilon (t_0,\alpha^1_{k_0},\alpha^2_{l_0}) \land \neg \exists\varepsilon (t_\ell,\alpha^1_{k_\ell},\alpha^2_{l_\ell})$. Then there exists some number $i < \ell$ such that $\mathfrak G \models {\sf defect}_{i}$. It follows that, for all $z\in W$, $$\label{property1} z \models {\mbox{\boldmath $s$}}(p_{1}) \quad \text{iff}\quad z\models \overline{\alpha}_{k_{i}}^{1}, \quad \text{and} \quad z\models {\mbox{\boldmath $s$}}(p_{2}) \quad \text{iff} \quad z\models \overline{\alpha}_{l_{i}}^{2}.$$ \[claim1\] For all $z\in W$, we have [(i)]{} $z \models {\mbox{\boldmath $s$}}(\pi_{1})$ iff $z \models \overline{\alpha}_{k_{i}}^{1}$, and [(ii)]{} $z \models {\mbox{\boldmath $s$}}(\tau_{1})$ iff $z \models \overline{\alpha}_{l_{i}}^{2}$. Suppose $z\in W$ is given. We know that $${\mbox{\boldmath $s$}}(\pi_{1}) ~=~ (\Diamond \alpha_{0}^{1} \vee \alpha_{0}^{1}) \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge {\mbox{\boldmath $s$}}(p_{1}) \wedge \neg \Diamond {\mbox{\boldmath $s$}}(p_{1}).$$ Hence, by \[property1\], $$z \models {\mbox{\boldmath $s$}}(\pi_{1}) \quad \text{iff} \quad z \models (\Diamond \alpha_{0}^{1} \vee \alpha_{0}^{1}) \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge \overline{\alpha}_{k_{i}}^{1} \wedge \neg \Diamond \overline{\alpha}_{k_{i}}^{1} \quad \text{iff} \quad z \models \overline{\alpha}_{k_{i}}^{1}.$$ (ii) is considered analogously. \[claim2\] For all $z \in W$, [(i)]{} $z \models {\mbox{\boldmath $s$}}(\pi_{2})$ iff $z \models \overline{\alpha}_{k_{i}+1}^{1}$, and [(ii)]{} $z \models {\mbox{\boldmath $s$}}(\tau_{2})$ iff $z \models \overline{\alpha}_{l_{i}+1}^{2}$. Suppose $z\in W$ is given.
We know that $${\mbox{\boldmath $s$}}(\pi_{2}) ~=~ \Diamond \alpha_{0}^{1} \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge \Diamond {\mbox{\boldmath $s$}}(p_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(p_{1}).$$ Hence, by \[property1\], $$z \models {\mbox{\boldmath $s$}}(\pi_{2}) \quad \text{iff} \quad z \models \Diamond \alpha_{0}^{1} \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge \Diamond \overline{\alpha}_{k_{i}}^{1} \wedge \neg \Diamond^{2} \overline{\alpha}_{k_{i}}^{1}.$$ But the latter formula is precisely the definition of $\overline{\alpha}^{1}_{k_{i}+1}$, which proves the claim. We now make a case distinction according to the rule $I_{i+1}$ used to transform $\langle t_i,k_i,l_i\rangle$ to $\langle t_{i+1},k_{i+1},l_{i+1}\rangle$. *Case* 1: $I_{i+1} = t_i \to \langle t_{i+1},1,0\rangle$. Our aim is to show that - ${\mathfrak G} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i},\pi_{1},\tau_{1}))$ and - ${\mathfrak G} \not\models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i+1},\pi_{2}, \tau_{1}))$, for then we would have ${\mathfrak G}\not\models {\mbox{\boldmath $s$}}(AxP)$, and so ${\mathfrak G}\models {\mbox{\boldmath $s$}}(\psi({\mathfrak b}))$.
(a) As ${\mathfrak G} \models \exists \varepsilon(t_{i},\alpha_{k_{i}}^{1},\alpha_{l_{i}}^{2})$, we have some $z\in W$ such that $$z \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0}\wedge\Diamond \alpha_{k_{i}}^{1} \wedge\neg\Diamond^2\alpha_{k_{i}}^{1}\wedge \Diamond \alpha_{l_{i}}^{2}\wedge\neg\Diamond^2\alpha_{l_{i}}^{2}.$$ By Claim \[claim1\], we then have $$z \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0} \wedge \Diamond {\mbox{\boldmath $s$}}(\pi_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\pi_{1}) \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\tau_{1}),$$ which means that $z \models {\mbox{\boldmath $s$}}(\varepsilon(t_{i},\pi_{1},\tau_{1}))$, and so ${\mathfrak G} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i},\pi_{1},\tau_{1}))$. (b) Suppose, to the contrary, that ${\mathfrak G} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i+1}, \pi_{2},\tau_{1}))$.
Then there is $x\in W$ with $$x \models \varepsilon(t_{i+1}, {\mbox{\boldmath $s$}}(\pi_{2}), {\mbox{\boldmath $s$}}(\tau_{1})),$$ that is, $$x \models \Diamond \alpha_{t_{i+1}}^{0}\wedge\neg\Diamond \alpha_{t_{i+1}+1}^{0}\wedge\Diamond {\mbox{\boldmath $s$}}(\pi_2) \wedge\neg\Diamond^2 {\mbox{\boldmath $s$}}(\pi_2) \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_1) \wedge\neg \Diamond^2{\mbox{\boldmath $s$}}(\tau_1).$$ By Claims \[claim1\] and \[claim2\], we then have $$x \models \Diamond \alpha_{t_{i+1}}^{0}\wedge\neg\Diamond \alpha_{t_{i+1}+1}^{0}\wedge\Diamond \alpha_{k_{i}+1}^{1} \wedge\neg\Diamond^2 \alpha_{k_{i}+1}^{1} \wedge \Diamond \alpha_{l_{i}}^{2} \wedge\neg \Diamond^2 \alpha_{l_{i}}^{2}$$ which means $$x \models \varepsilon(t_{i+1}, \alpha_{k_{i}+1}^{1}, \alpha_{l_{i}}^{2}).$$ Now recall that $\alpha_{k_{i}+1}^{1}= \alpha_{k_{i+1}}^{1}$ and $\alpha^{2}_{l_{i}}= \alpha^{2}_{l_{i+1}}$, that is, we have $$x \models \varepsilon(t_{i+1}, \alpha_{k_{i+1}}^{1}, \alpha_{l_{i+1}}^{2}),$$ and so ${\mathfrak G} \models \exists \varepsilon(t_{i+1}, \alpha_{k_{i+1}}^{1}, \alpha_{l_{i+1}}^{2})$, contrary to ${\mathfrak G}\models {\sf defect}_{i}$. *Case* 2: $I_{i+1}$ is of the form $t_i \to \langle t'_{i+1},-1,0\rangle (\langle t''_{i+1},0,0 \rangle)$. Suppose first that $k_i=0$, that is, the actual instruction is $I_{i+1}= t_i \to \langle t_{i+1},0,0 \rangle$. We need to show that - ${\mathfrak G} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i},\alpha_{0}^{1}, \tau_{1}))$ and - ${\mathfrak G} \not\models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i+1},\alpha_{0}^{1},\tau_{1}))$, which, as before, would imply ${\mathfrak G}\models {\mbox{\boldmath $s$}}(\psi({\mathfrak b}))$.
(a) As ${\mathfrak G} \models \exists \varepsilon(t_{i},\alpha_{0}^{1},\alpha_{l_{i}}^{2})$, we have $x\in W$ such that $$x \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0}\wedge\Diamond \alpha_{0}^{1} \wedge\neg\Diamond^2\alpha_{0}^{1}\wedge \Diamond \alpha_{l_{i}}^{2}\wedge\neg\Diamond^2\alpha_{l_{i}}^{2},$$ from which, by Claim \[claim1\], $$x \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0} \wedge \Diamond \alpha_{0}^{1} \wedge \neg \Diamond^{2} \alpha_{0}^{1} \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\tau_{1}).$$ Thus $x \models {\mbox{\boldmath $s$}}(\varepsilon(t_{i},\alpha_{0}^{1}, \tau_{1}))$, and so ${\mathfrak G} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i},\alpha_{0}^{1}, \tau_{1}))$. (b) is proved similarly and left to the reader. Suppose now that $k_{i}>0$, that is, the instruction $I_{i+1}= t_i \to \langle t_{i+1},-1,0\rangle$ was actually used. This time we need to show that - ${\mathfrak G} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i},\pi_{2},\tau_{1}))$ and - ${\mathfrak G} \not\models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i+1},\pi_{1}, \tau_{1}))$. (a) Since ${\mathfrak G} \models \exists \varepsilon(t_{i},\alpha_{k_{i}}^{1},\alpha_{l_{i}}^{2})$, we have $x\in W$ such that $$x \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0}\wedge\Diamond \alpha_{k_{i}}^{1} \wedge\neg\Diamond^2\alpha_{k_{i}}^{1}\wedge \Diamond \alpha_{l_{i}}^{2}\wedge\neg\Diamond^2\alpha_{l_{i}}^{2}.$$ Clearly, it is sufficient to show that $$x \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0} \wedge \Diamond {\mbox{\boldmath $s$}}(\pi_{2}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\pi_{2}) \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\tau_{1}).$$ Observe that in this case $\overline{\alpha}_{k_{i}}^{1}= \alpha^{1}_{k_{i}-1}$.
Hence, by Claim \[claim2\], for all $z\in W$ we have $z \models {\mbox{\boldmath $s$}}(\pi_{2})$ iff $z \models \alpha^1_{k_{i}}$. So it remains to use Claims \[claim1\] and \[claim2\]. (b) Suppose otherwise, that is, ${\mathfrak G}\models {\mbox{\boldmath $s$}}(\exists\varepsilon(t_{i+1},\pi_{1},\tau_{1}))$. Then there exists $x \in W$ such that $$x \models \Diamond \alpha_{t_{i+1}}^{0}\wedge\neg\Diamond \alpha_{t_{i+1}+1}^{0}\wedge\Diamond {\mbox{\boldmath $s$}}(\pi_1) \wedge\neg\Diamond^2 {\mbox{\boldmath $s$}}(\pi_1) \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_1) \wedge\neg \Diamond^2{\mbox{\boldmath $s$}}(\tau_1).$$ By Claim \[claim1\], this implies $$x \models \Diamond \alpha_{t_{i+1}}^{0}\wedge\neg\Diamond \alpha_{t_{i+1}+1}^{0}\wedge\Diamond \alpha_{k_{i}-1}^{1} \wedge\neg\Diamond^2 \alpha_{k_{i}-1}^{1} \wedge \Diamond \alpha_{l_{i}}^{2} \wedge\neg \Diamond^2 \alpha_{l_{i}}^{2},$$ that is, $$x \models \varepsilon(t_{i+1}, \alpha_{k_{i}-1}^{1}, \alpha_{l_{i}}^{2})$$ which leads to a contradiction, because $\alpha_{k_{i}-1}^{1}= \alpha_{k_{i+1}}^{1}$ and $\alpha^{2}_{l_{i}} = \alpha^{2}_{l_{i+1}}$, and therefore we must have ${\mathfrak G} \models \exists \varepsilon(t_{i+1}, \alpha_{k_{i+1}}^{1}, \alpha_{l_{i+1}}^{2})$. The remaining two types of instructions (where the third component changes) are dual to the ones considered above. We leave these cases to the reader. This completes the proof of Lemma \[main-lemma\]. Theorem \[main1\] follows immediately in view of the choice of ${\mbox{\boldmath $P$}}$ and ${\mathfrak a}$. Observe that Theorem \[main1\] can be proved for multimodal ${\sf K}_{u}$ and ${\sf K4}_{u}$ as well. In this case, in the frame ${\mathfrak F}$ considered above, the additional operators can be interpreted by the empty relation. By a proper modification of the frame $\mathfrak F$ in Fig. \[F5.2.1\], this theorem can also be extended to some logics above ${\sf K4}_{u}$, for example, ${\sf GL}_{u}$.
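Throughout the proof, configurations $\langle t,k,l\rangle$ are transformed by Minsky-machine instructions of the four forms simulated by the formulas $AxI$. As a concrete reference point, the behaviour of such two-counter programs can be sketched with a minimal interpreter (the dictionary encoding and the names `run`, `inc`, `dec` are illustrative assumptions, not taken from the text):

```python
# Sketch of a two-counter Minsky machine with the instruction forms used above:
#   t -> <t', 1, 0>                  increment counter 1, move to state t'
#   t -> <t', -1, 0> (<t'', 0, 0>)   if counter 1 > 0, decrement and move to t';
#                                    otherwise move to t''
# (and dually for counter 2). The encoding below is hypothetical.

def run(program, config, max_steps=10_000):
    """program: dict state -> ('inc', counter, t') or ('dec', counter, t', t'').
    config: (state, k, l). Returns the list of visited configurations."""
    seen = []
    t, k, l = config
    for _ in range(max_steps):
        seen.append((t, k, l))
        if t not in program:            # state without instructions: halt
            break
        op = program[t]
        if op[0] == 'inc':
            _, c, t1 = op
            k, l = (k + 1, l) if c == 1 else (k, l + 1)
            t = t1
        else:                           # 'dec' with zero-test branch
            _, c, t1, t2 = op
            if (k if c == 1 else l) > 0:
                k, l = (k - 1, l) if c == 1 else (k, l - 1)
                t = t1
            else:
                t = t2
    return seen

# A toy program moving counter 1 into counter 2:
P = {0: ('dec', 1, 1, 2),   # if k > 0: k -= 1, goto 1; else goto 2
     1: ('inc', 2, 0)}      # l += 1, goto 0
trace = run(P, (0, 3, 0))
print(trace[-1])            # (2, 0, 3)
```

Each visited configuration in `trace` corresponds to one of the points whose existence the formulas $\exists\varepsilon(t_i,\alpha^1_{k_i},\alpha^2_{l_i})$ assert in the frame $\mathfrak F$.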
The *admissibility problem* for inference rules for a normal modal logic $L$ with the universal modality is to decide, given an inference rule $\varphi_1,\dots,\varphi_n /\varphi$, whether ${\mbox{\boldmath $s$}}(\varphi_1)\in L$, …, ${\mbox{\boldmath $s$}}(\varphi_n)\in L$ imply ${\mbox{\boldmath $s$}}(\varphi)\in L$, for every substitution ${\mbox{\boldmath $s$}}$. As an immediate consequence of Theorem \[main1\] we obtain the following: The admissibility problem for any normal modal logic $L$ between ${{\sf }K}_{u}$ and ${{\sf }K4}_{u}$ is undecidable. Minor modifications of the proof above can be used to prove undecidability of the unification and admissibility problems for various modal logics in which the universal modality is definable. An interesting example is PDL with converse, i.e., the extension of propositional dynamic logic with the converse constructor on programs: if $\alpha$ is a program, then $\alpha^{-1}$ is a program which is interpreted by the converse of the relation interpreting $\alpha$. (We do not provide detailed definitions of the syntax and semantics here but refer the reader to [@Hareletal00].) The undecidability proof for the unification problem (for substitutions of propositional variables rather than atomic programs!) is carried out by taking an atomic program $\alpha$ and replacing, in the proof above, the operator $\Box$ with $[\alpha]$ and the universal modality $\forall$ with $[(\alpha \cup \alpha^{-1})^{\ast}]$. It seems worth mentioning, however, that the unification problem is trivially decidable for any normal modal logic $L$ with $\neg \Box \bot\in L$. To see this, recall that a substitution ${\mbox{\boldmath $s$}}$ is called *ground* if it replaces each propositional variable by a variable free formula (that is, a formula constructed from $\bot$ and $\top$ only).
Obviously, it is always the case that if there exists a substitution ${\mbox{\boldmath $s$}}$ such that ${\mbox{\boldmath $s$}}(\varphi)\in L$, then there exists a ground substitution ${\mbox{\boldmath $s$}}'$ with ${\mbox{\boldmath $s$}}'(\varphi)\in L$. But if $\neg \Box \bot \in L$, then there are, up to equivalence in $L$, only two different variable free formulas, namely, $\bot$ and $\top$. Thus, to decide whether a formula $\varphi$ is unifiable in $L$ it is sufficient to check whether any of the ground substitutions makes $\varphi$ equivalent to $\top$ (which can be done in Boolean logic). A well-known example of such a logic is ${{\sf }S4}_{u}$, ${{\sf }S4}$ with the universal modality. Note that the admissibility problem for ${{\sf }S4}_{u}$ might nevertheless be undecidable. We leave this as an interesting open problem. Unification modulo equational theories ====================================== The results presented above can be reformulated as undecidability results for the well-known notion of unification modulo equational theories. Consider the equational theory ${\sf BAO}_{2}$ of Boolean algebras with operators $\Box_{1}$ and $\Box_{2}$, which consists of an axiomatisation ${\sf BA}$ of the variety of Boolean algebras (say, in the signature with the binary connective $\wedge$, unary connective $\neg$ and constant $1$) together with the equations $$\Box_{i}(x \wedge y) ~=~ \Box_{i}x \wedge \Box_{i} y \quad \text{and} \quad \Box_{i} 1 ~=~ 1,$$ for $i=1,2$. Let $T$ be any set of equations over the signature of Boolean algebras with two operators.
Then the *unification problem modulo* ${\sf BAO}_{2} \cup T$ is to decide, given an equation $t_{1}=t_{2}$ over the signature of ${\sf BAO}_{2}$, whether there exists a substitution ${\mbox{\boldmath $s$}}$ such that $${\mbox{\boldmath $s$}}(t_{1}) ~=_{{\sf BAO}_{2} \cup T}~ {\mbox{\boldmath $s$}}(t_{2}),$$ that is, whether there exists a substitution ${\mbox{\boldmath $s$}}$ such that the equation ${\mbox{\boldmath $s$}}(t_{1}) = {\mbox{\boldmath $s$}}(t_{2})$ is valid in all algebras where the equations in ${\sf BAO}_{2} \cup T$ hold true. For a term $t$, let $t^{p}$ denote the propositional modal formula that is obtained from $t$ by replacing its (individual) variables with (mutually distinct) propositional variables. We may assume that $\cdot^{p}$ is a bijection between the terms $t$ over the signature of ${\sf BAO}_{2}$ and the modal formulas with modal operators $\Box_{1}$ and $\Box_{2}$. Denote by $\cdot^{-p}$ the inverse of this function. It is well-known (see, e.g., [@yde]) that a modal formula $\varphi$ is valid in the smallest normal modal logic $L$ containing the formulas $$\{ t_{1}^{p} \leftrightarrow t_{2}^{p} \mid t_{1}=t_{2} \in T\}$$ if, and only if, $\varphi^{-p}$ is valid in all algebras validating ${\sf BAO}_{2} \cup T$. The appropriate converse statement is also easily formulated. It follows that the unification problem modulo ${\sf BAO}_{2} \cup T$ is decidable if, and only if, the unification problem for $L$ is decidable. Clearly, it remains an open question whether the unification problem modulo ${\sf BAO}_{2}$ is decidable. However, if $T$ consists of the following inequalities (saying that $\Box_{1}$ is the universal box) $$\Box_{1} x ~\leq~ \Box_{2}x, \quad \Box_{1} x ~\leq~ x, \quad \Box_{1} x ~\leq~ \Box_{1}\Box_{1} x, \quad x ~\leq~ \Box_{1}\neg\Box_{1}\neg x,$$ then Theorem \[main1\] implies that the unification problem modulo ${\sf BAO}_{2} \cup T$ is undecidable. 
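Returning to the observation above that unifiability is trivially decidable whenever $\neg\Box\bot \in L$: since variable-free formulas then collapse to $\bot$ or $\top$, with $\Box$ acting as the identity on these two truth values ($\Box\top = \top$ and $\Box\bot = \bot$), unifiability reduces to a finite Boolean search over ground substitutions. A minimal sketch (the tuple encoding of formulas is an assumption made for illustration):

```python
from itertools import product

# Formulas (hypothetical encoding): a propositional variable is a string,
# and compound formulas are tuples ('not', f), ('and', f, g), ('box', f).

def evaluate(f, assignment):
    """Boolean value of f under a {variable: bool} assignment, with boxes
    evaluated as the identity -- valid for variable-free instances in any
    normal modal logic L containing ¬□⊥."""
    if isinstance(f, str):
        return assignment[f]
    op = f[0]
    if op == 'not':
        return not evaluate(f[1], assignment)
    if op == 'and':
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    if op == 'box':                 # □⊤ = ⊤ and □⊥ = ⊥ when ¬□⊥ ∈ L
        return evaluate(f[1], assignment)
    raise ValueError(op)

def variables(f, acc=None):
    acc = set() if acc is None else acc
    if isinstance(f, str):
        acc.add(f)
    else:
        for g in f[1:]:
            variables(g, acc)
    return acc

def unifiable(f):
    """Does some ground substitution make f equivalent to ⊤ in L?"""
    vs = sorted(variables(f))
    return any(evaluate(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

# □p ∧ ¬□q is unified by p ↦ ⊤, q ↦ ⊥:
print(unifiable(('and', ('box', 'p'), ('not', ('box', 'q')))))  # True
```

The search is exponential in the number of variables, but that is enough for decidability; the contrast with the undecidability results of this and the previous section is the point.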
Unification in modal logics with nominals ========================================= Let us now consider the extension of the language ${\cal L}$ with nominals. More precisely, denote by $\mathcal{H}_{2}$ the propositional language constructed from - an infinite list $p_{1},p_{2},\dots$ of propositional variables and - an infinite list $n_{1},n_{2},\dots$ of *nominals* using the standard Boolean connectives and two modal operators $\Box$ and $\Box_{h}$ (instead of $\Box$ and $\forall$ in ${\cal L}$).[^1] $\mathcal{H}_{2}$-formulas are interpreted in frames of the form ${\mathfrak F}=(W,R,S)$ where $R,S \subseteq W\times W$. As before, a *model* is a pair $\mathfrak M = ({\mathfrak F},{\mathfrak V})$, where ${\mathfrak V}$ is a *valuation function* that assigns to each $p_{i}$ a subset $\mathfrak V(p_i)$ of $W$ and to each $n_{i}$ a *singleton subset* $\mathfrak V(n_i)$ of $W$. The *truth-relation*, $(\mathfrak M, x) \models \varphi$, is defined as above with two extra clauses: - $(\mathfrak M,x) \models n_{i}$ iff $\{x\} = {\mathfrak V}(n_{i})$, - $(\mathfrak M,x) \models \Box_h\psi$ iff $(\mathfrak M,y) \models \psi$ for all $y\in W$ with $xSy$. Denote by ${\sf K}_{{\mathcal H}_{2}}$ the set of all $\mathcal{H}_{2}$-formulas that are valid in all frames, and denote by ${\sf K}_{{\mathcal H}_{2}}\oplus 45$ the set of $\mathcal{H}_{2}$-formulas that are valid in all frames $(W,R,S)$ with transitive $R$ and $S= W \times W$. A proof of the following result can be found in [@Arecesetal00]: The satisfiability problem for ${\sf K}_{{\mathcal H}_{2}}$ is [PSpace]{}-complete, while for ${\sf K}_{{\mathcal H}_{2}}\oplus 45$ it is [ExpTime]{}-complete. A *substitution* ${\mbox{\boldmath $s$}}$ for $\mathcal{H}_{2}$ is a map from the set of propositional variables into $\mathcal{H}_{2}$. In particular, any substitution leaves nominals intact.[^2] The unification and admissibility problems for modal logics with nominals are formulated in exactly the same way as before. 
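The truth-relation just defined can be checked mechanically over finite frames $(W,R,S)$. The following sketch (with an assumed tuple encoding of formulas) implements the clauses for $\Box$, $\Box_{h}$ and nominals, and evaluates a formula of the shape $\Diamond_{h}(n \wedge \Diamond_{h}\varphi)$, the 'surrogate' diamond used later in the construction:

```python
# Minimal model checker for H2 over a finite frame (W, R, S).
# Formulas (hypothetical encoding): propositional variables and nominals are
# strings looked up in the valuation V; compound formulas are tuples
# ('not', f), ('and', f, g), ('box', f) for □ along R, ('boxh', f) for □_h
# along S.

def holds(f, x, W, R, S, V):
    if isinstance(f, str):              # variable or nominal
        return x in V[f]                # a nominal's V-value is a singleton
    op = f[0]
    if op == 'not':
        return not holds(f[1], x, W, R, S, V)
    if op == 'and':
        return holds(f[1], x, W, R, S, V) and holds(f[2], x, W, R, S, V)
    if op == 'box':
        return all(holds(f[1], y, W, R, S, V) for y in W if (x, y) in R)
    if op == 'boxh':
        return all(holds(f[1], y, W, R, S, V) for y in W if (x, y) in S)
    raise ValueError(op)

def dia_h(f):                           # ◇_h f = ¬□_h ¬f
    return ('not', ('boxh', ('not', f)))

# Two-point frame: R = {(0,1)}, S universal; the nominal n names point 1.
W = {0, 1}
R = {(0, 1)}
S = {(x, y) for x in W for y in W}
V = {'n': {1}, 'p': {1}}

exists_p = dia_h(('and', 'n', dia_h('p')))   # ◇_h(n ∧ ◇_h p)
print(holds(exists_p, 0, W, R, S, V))        # True: point 1 is n and S-sees p
```

Because $S$ is universal here, `exists_p` holds at every point, matching the intended behaviour of the surrogate universal diamond on such frames.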
\[main2\] The unification problem and, therefore, the admissibility problem for any logic $L$ between ${{\sf }K}_{{\mathcal H}_{2}}$ and ${{\sf }K}_{{\mathcal H}_{2}}\oplus 45$ are undecidable. The proof of this theorem is similar to the proof of Theorem \[main1\]. Here we only show how to modify the encoding of Minsky machine computations from Section \[universal\]. The main difference is that now the language does not contain the universal modality which can refer to all points in the frame in order to say, e.g., that a certain configuration is (not) reachable. To overcome this problem, we will use one nominal, let us call it $n$, which, if accessible from a point $x$ (via $R$ and $S$), will be forced to be accessible from all points located within a certain distance from $x$. This trick will provide us with a ‘surrogate’ universal modality which behaves, locally, similarly to the standard one. From now on we will be using the following abbreviation, where $\varphi$ is an $\mathcal{H}_{2}$-formula: $$\label{surrogate} \exists \varphi ~=~ \Diamond_{h}(n \wedge \Diamond_{h} \varphi).$$ The defined operator $\exists$ will play the role of our surrogate universal diamond. Consider again a Minsky program ${\mbox{\boldmath $P$}}$ and a configuration ${\mathfrak a}= \left\langle s,m,n \right\rangle$ such that it is undecidable, given a configuration $\mathfrak b$, whether ${\mbox{\boldmath $P$}}: {\mathfrak a} \rightarrow {\mathfrak b}$. The frame ${\mathfrak F} = (W,R,S)$ encoding ${\mbox{\boldmath $P$}}$ and $\mathfrak a$ is defined as in Fig. \[F5.2.1\], with $S = W \times W$. For each instruction $I$, we introduce the formula $AxI$ in precisely the same way as before, with $\exists$ defined by \[surrogate\]. The first important difference between the two constructions is the definition of $AxP$.
Let ${\textit{Nom}}$ denote the conjunction of all $\mathcal{H}_{2}$-formulas of the form $$\Diamond_{h}n \rightarrow M \Diamond_{h} n \quad \mbox{and} \quad M' \Diamond_{h}n \rightarrow \Diamond_{h} n,$$ where $M$ is any sequence of $\Box$ and $\Box_{h}$ of length $\le 6$, and $M'$ is any sequence of $\Diamond$ and $\Diamond_{h}$ of length $\le 6$. To explain the meaning of ${\textit{Nom}}$, consider a model $({\mathfrak G},{\mathfrak V})$ based on some frame ${\mathfrak G}=(W,R,S)$. Let $x_{0}\in W$. We say that $x\in W$ is *of distance* $\le m$ *from* $x_{0}$ if there exists a sequence $$x_{0} S' x_{1} S' x_{2} \cdots x_{k-1} S' x_{k} ~ = ~ x,$$ where $S' = R \cup S$ and $k\le m$. Now assume that $x_{0} \models {\textit{Nom}}$. Then either all points of distance $\le 6$ from $x_{0}$ ‘see’ ${\mathfrak V}(n)$ via $S$, or no point of distance $\le 6$ from $x_{0}$ sees ${\mathfrak V}(n)$ via $S$. In particular, $x_{0}\models \exists \varphi$ if, and only if, $x\models \exists \varphi$ for all $x$ of distance $\le 6$ from $x_{0}$, and $x_{0}\not\models\exists\varphi$ if, and only if, $x\not\models \exists\varphi$ for all $x$ of distance $\le 6$ from $x_{0}$. The formula simulating ${\mbox{\boldmath $P$}}$ as a whole in this case is $$AxP ~=~ \bigwedge_{I\in\mbox{\scriptsize ${\mbox{\boldmath $P$}}$}}AxI \wedge {\textit{Nom}}.$$ Consider the frame ${\mathfrak F}=(W,R,S)$ in Fig. \[F5.2.1\] (with $S=W\times W$). Then, no matter which singleton set interprets $n$, the new operator $\exists$ is always interpreted by the universal relation. Hence, as before we have $\mathfrak F \models AxP$.
Now, for each $\mathfrak b = \langle t,k,l\rangle$ consider (as before) the formula $$\psi(\mathfrak b) ~=~ \big( AxP \land \exists\varepsilon (s,\alpha^1_m,\alpha^2_n) \big) \to \exists\varepsilon (t,\alpha^1_k,\alpha^2_l).$$ \[lem3\] ${\mbox{\boldmath $P$}}:{\mathfrak a}\rightarrow{\mathfrak b}$ iff $\psi(\mathfrak b)$ is unifiable in $L$, where ${{\sf }K}_{{\mathcal H}_{2}} \subseteq L \subseteq {\sf K}_{{\mathcal H}_{2}}\oplus 45$. The proof of $(\Leftarrow)$ is exactly as before. $(\Rightarrow)$ Suppose that ${\mbox{\boldmath $P$}}:{\mathfrak a} \to{\mathfrak b}$. Our aim is to find a substitution ${\mbox{\boldmath $s$}}$ for the variables $p_1$ and $p_2$ such that ${\mbox{\boldmath $s$}}(\psi(\mathfrak b)) \in {{\sf }K}_{{\mathcal H}_{2}}$. The definition of the substitution is as before. Let $${\mbox{\boldmath $P$}}: \mathfrak a = \langle t_0,k_0,l_0\rangle \stackrel{I_1}\to \langle t_1,k_1,l_1\rangle \stackrel{I_2}\to \dots \stackrel{I_\ell}\to \langle t_\ell,k_\ell,l_\ell\rangle = \mathfrak b$$ be the computation of ${\mbox{\boldmath $P$}}$ starting with $\mathfrak a$ and ending with $\mathfrak b$. Then we define ${\mbox{\boldmath $s$}}$ by means of \[substitution\], where ${\sf defect}_i$ is given by \[defect\]. We have to show that, for *all* frames $\mathfrak G$, we have $\mathfrak G \models {\mbox{\boldmath $s$}}(\psi(\mathfrak b))$. Note that now we *cannot assume* that $\exists$ is interpreted by the universal relation. Suppose that we are given a frame $\mathfrak G = (W,R,S)$, a valuation ${\mathfrak V}$ in it, and some $x_{0}\in W$. We write $\{n^{\mathfrak V}\}$ for ${\mathfrak V}(n)$, and $x\models \psi$ for $({\mathfrak G},{\mathfrak V},x)\models \psi$. As before, two cases are possible. *Case* 1: $x_{0} \models \neg \exists\varepsilon (t_0,\alpha^1_{k_0},\alpha^2_{l_0}) \lor \exists\varepsilon (t_\ell,\alpha^1_{k_\ell},\alpha^2_{l_\ell})$. Then clearly $x_{0} \models {\mbox{\boldmath $s$}}(\psi(\mathfrak b))$.
*Case* 2: $x_{0} \models \exists\varepsilon (t_0,\alpha^1_{k_0},\alpha^2_{l_0}) \land \neg \exists\varepsilon (t_\ell,\alpha^1_{k_\ell},\alpha^2_{l_\ell})$. If $x_{0} \not\models {\mbox{\boldmath $s$}}({\textit{Nom}})$ then obviously $x_{0} \models {\mbox{\boldmath $s$}}(\psi({\mathfrak b}))$, and we are done. So assume that $x_{0} \models {\mbox{\boldmath $s$}}({\textit{Nom}})$. Then there exists some number $i < \ell$ such that $x_{0} \models {\sf defect}_{i}$. \[closer\] For all points $x$ of distance $\le 6$ from $x_{0}$, $x \models {\sf defect}_{i}$. So, for all such $x$, we have $x \models {\mbox{\boldmath $s$}}(p_{1})$ iff $x\models \overline{\alpha}_{k_{i}}^{1}$, and $x\models {\mbox{\boldmath $s$}}(p_{2})$ iff $x\models \overline{\alpha}_{l_{i}}^{2}$. Follows immediately from $x_{0} \models {\textit{Nom}}$. \[again1\] For all $x$ of distance $\le 5$ from $x_{0}$, we have [(i)]{} $x \models {\mbox{\boldmath $s$}}(\pi_{1})$ iff $x \models \overline{\alpha}_{k_{i}}^{1}$, and [(ii)]{} $x \models {\mbox{\boldmath $s$}}(\tau_{1})$ iff $x \models \overline{\alpha}_{l_{i}}^{2}$. We only prove (i). Suppose $x$ is given. We know that $${\mbox{\boldmath $s$}}(\pi_{1}) ~=~ (\Diamond \alpha_{0}^{1} \vee \alpha_{0}^{1}) \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge {\mbox{\boldmath $s$}}(p_{1}) \wedge \neg \Diamond {\mbox{\boldmath $s$}}(p_{1}).$$ Hence, by Claim \[closer\], $$x \models {\mbox{\boldmath $s$}}(\pi_{1}) \quad \text{iff} \quad x \models (\Diamond \alpha_{0}^{1} \vee \alpha_{0}^{1}) \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge \overline{\alpha}_{k_{i}}^{1} \wedge \neg \Diamond \overline{\alpha}_{k_{i}}^{1}.$$ (Observe that ${\mbox{\boldmath $s$}}(p_{1})$ occurs within the scope of a $\Diamond$. Hence, we obtain this equivalence only for points of distance $\le 5$ from $x_{0}$.) But this is equivalent to $x \models \overline{\alpha}_{k_{i}}^{1}$.
\[again2\] For all $x$ of distance $\le 4$ from $x_{0}$, [(i)]{} $x \models {\mbox{\boldmath $s$}}(\pi_{2})$ iff $x \models \overline{\alpha}_{k_{i}+1}^{1}$, and [(ii)]{} $x \models {\mbox{\boldmath $s$}}(\tau_{2})$ iff $x \models \overline{\alpha}_{l_{i}+1}^{2}$. We only prove (i). Suppose $x$ is given. We know that $${\mbox{\boldmath $s$}}(\pi_{2}) ~=~ \Diamond \alpha_{0}^{1} \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge \Diamond {\mbox{\boldmath $s$}}(p_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(p_{1}).$$ Hence, by Claim \[closer\], $$x \models {\mbox{\boldmath $s$}}(\pi_{2}) \quad \text{iff} \quad x \models \Diamond \alpha_{0}^{1} \wedge \neg \Diamond \alpha_{0}^{0} \wedge \neg \Diamond \alpha_{0}^{2} \wedge \Diamond \overline{\alpha}_{k_{i}}^{1} \wedge \neg \Diamond^{2} \overline{\alpha}_{k_{i}}^{1}.$$ (In this case ${\mbox{\boldmath $s$}}(p_{1})$ occurs within the scope of a $\Diamond^{2}$. Therefore, we obtain this equivalence for points $x$ of distance $\le 4$ from $x_{0}$.) But this formula is in fact the definition of $\overline{\alpha}^{1}_{k_{i}+1}$. As in the proof of Lemma \[main-lemma\], we now make a case distinction according to the rule $I_{i+1}$ used to transform $\langle t_i,k_i,l_i\rangle$ to $\langle t_{i+1},k_{i+1},l_{i+1}\rangle$. Here we only consider the case of $I_{i+1} = t_i \to \langle t_{i+1},1,0\rangle$, and leave the remaining three cases to the reader. We need to show that - $x_{0} \models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i},\pi_{1},\tau_{1}))$ and - $x_{0} \not\models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i+1},\pi_{2},\tau_{1}))$, which, as before, would imply $x_{0}\models {\mbox{\boldmath $s$}}(\psi({\mathfrak b}))$.
(a) As $x_{0} \models \exists \varepsilon(t_{i}, \alpha_{k_{i}}^{1}, \alpha_{l_{i}}^{2})$, we have some $z$ such that $x_{0} S n^{\mathfrak V} S z$ and $$z \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0}\wedge\Diamond \alpha_{k_{i}}^{1} \wedge\neg\Diamond^2\alpha_{k_{i}}^{1}\wedge \Diamond \alpha_{l_{i}}^{2}\wedge\neg\Diamond^2\alpha_{l_{i}}^{2}.$$ Clearly, it is sufficient to show $$z \models \Diamond \alpha_{t_{i}}^{0}\wedge\neg\Diamond \alpha_{t_{i}+1}^{0} \wedge \Diamond {\mbox{\boldmath $s$}}(\pi_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\pi_{1}) \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_{1}) \wedge \neg \Diamond^{2} {\mbox{\boldmath $s$}}(\tau_{1}).$$ But this follows from Claim \[again1\]: just observe that $z$ is of distance $\le 2$ from $x_{0}$, while ${\mbox{\boldmath $s$}}(\pi_{1})$ and ${\mbox{\boldmath $s$}}(\tau_{1})$ occur within the scope of $\Diamond^{2}$. (b) To show $x_{0} \not\models {\mbox{\boldmath $s$}}(\exists \varepsilon(t_{i+1}, \pi_{2},\tau_{1}))$, suppose otherwise. Then there is $z$ such that $x_{0} S n^{\mathfrak V} S z$ and $$z \models \varepsilon(t_{i+1}, {\mbox{\boldmath $s$}}(\pi_{2}), {\mbox{\boldmath $s$}}(\tau_{1})).$$ This means that $$z \models \Diamond \alpha_{t_{i+1}}^{0}\wedge\neg\Diamond \alpha_{t_{i+1}+1}^{0}\wedge\Diamond {\mbox{\boldmath $s$}}(\pi_2) \wedge\neg\Diamond^2 {\mbox{\boldmath $s$}}(\pi_2) \wedge \Diamond {\mbox{\boldmath $s$}}(\tau_1) \wedge\neg \Diamond^2{\mbox{\boldmath $s$}}(\tau_1).$$ By Claims \[again1\] and \[again2\] this implies $$z \models \Diamond \alpha_{t_{i+1}}^{0}\wedge\neg\Diamond \alpha_{t_{i+1}+1}^{0}\wedge\Diamond \alpha_{k_{i}+1}^{1} \wedge\neg\Diamond^2 \alpha_{k_{i}+1}^{1} \wedge \Diamond \alpha_{l_{i}}^{2} \wedge\neg \Diamond^2 \alpha_{l_{i}}^{2}.$$ It follows that $$z \models \varepsilon(t_{i+1}, \alpha_{k_{i}+1}^{1}, \alpha_{l_{i}}^{2})$$ and we arrive at a contradiction, because $\alpha_{k_{i}+1}^{1}= \alpha_{k_{i+1}}^{1}$.
This completes the proofs of Lemma \[lem3\] and Theorem \[main2\]. Applications to description logics ================================== In this section, we briefly comment on the consequences of our results in the context of description logics [@DLHandbook]. We remind the reader that description logics (DLs, for short) are knowledge representation and reasoning formalisms in which complex concepts are defined in terms of atomic concepts using certain constructors. DLs are then used to represent, and reason about, various relations between such complex concepts (typically, the subsumption relation). The basic Boolean description logic $\mathcal{ALC}$ has as its constructors the Boolean connectives and the universal restriction $\forall r$, which, for a concept $C$ and a binary relation symbol $r$, gives the concept $\forall r.C$ containing precisely those objects $x$ from the underlying domain for which $y\in C$ whenever $xry$. The language $\mathcal{ALC}$ is a notational variant of the basic modal logic ${\sf K}$ with infinitely many modal operators: propositional variables correspond to atomic concepts, while $\forall r.C$ is interpreted in a relational structure in the same way as $\Box_{r}$ (the modal box interpreted by the accessibility relation $r$). We refer the reader to [@DLHandbook] for precise definitions and a discussion of syntax and semantics of $\mathcal{ALC}$ and other description logics. It has been argued that for many applications of DLs it would be useful to have an algorithm capable of deciding, given two complex concepts $C_{1}$ and $C_{2}$, whether there exists a substitution ${\mbox{\boldmath $s$}}$ (of possibly complex concepts in place of atomic ones) such that ${\mbox{\boldmath $s$}}(C_{1})$ is equivalent to ${\mbox{\boldmath $s$}}(C_{2})$ in the given DL.[^3] We call this problem the *concept unification problem*. A typical application of such an algorithm is as follows.
In many cases, knowledge bases (ontologies) based on DLs are developed by different knowledge engineers over a long period. It can therefore happen that some concepts which, intuitively, should be equivalent, are introduced several times with slightly different definitions. To detect such redundancies, one can check whether certain concepts can be unified. Unifiability does not necessarily mean that these concepts have indeed been defined to denote the same class of objects—but this fact can serve as an indicator of a possible redundancy, so that the knowledge engineer could then ‘double check’ the meaning of those concepts and change the knowledge base accordingly. The concept unification problem for $\mathcal{ALC}$ is easily seen to be equivalent to the unification problem for the modal logic ${{\sf }K}$ with infinitely many modal operators: formulated for the modal language, the problem is to decide whether, given two modal formulas $\varphi_{1}$ and $\varphi_{2}$, there exists a substitution ${\mbox{\boldmath $s$}}$ such that, for every Kripke model $\mathfrak M$ and every point $x$ in it, $$(\mathfrak M,x) \models {\mbox{\boldmath $s$}}(\varphi_{1}) \quad \mbox{ iff } \quad (\mathfrak M, x) \models {\mbox{\boldmath $s$}}(\varphi_{2}).$$ This is obviously equivalent to the validity of ${\mbox{\boldmath $s$}}(\varphi_{1} \leftrightarrow \varphi_{2})$. Baader and Kuesters [-@BaaderKuesters-LPAR] and Baader and Narendran develop decision procedures for the concept unification problem for a number of sub-Boolean DLs, that is, DLs which do not have all the Boolean connectives as constructors and are, therefore, either properly less expressive than $\mathcal{ALC}$ or incomparable with $\mathcal{ALC}$. The investigation of the concept unification problem for Boolean DLs, that is, $\mathcal{ALC}$ and its extensions, is left as an open research problem. 
It should be clear that we have to leave open the decidability status for the concept unification problem for $\mathcal{ALC}$ as well. However, we obtain the undecidability of this problem for extensions of $\mathcal{ALC}$ with nominals. In contemporary description logic research and applications, nominals play a major role, see e.g., [@HorrocksSattler2005] and references therein. The smallest description logic containing $\mathcal{ALC}$ and nominals is known as $\mathcal{ALCO}$, and by extending the mapping between modal and description languages indicated above, one can see that $\mathcal{ALCO}$ is a straightforward notational variant of the modal logic with infinitely many modal operators and nominals. Hence, as a consequence of Theorem \[main2\] we obtain: The concept unification problem for $\mathcal{ALCO}$ is undecidable. Moreover, the undecidability proof goes through as well for extensions of $\mathcal{ALCO}$ such as, for example, $\mathcal{ALCQO}$ and $\mathcal{SHIQO}$, the description logic underlying [OWL-DL]{} [@HoPH03a]. Another family of description logics for which the concept unification problem turns out to be undecidable are those extensions of $\mathcal{ALC}$ where the universal role is definable. The minimal description logic of this sort, widely used in DL applications, is known nowadays as $\mathcal{SHI}$. Originally, Horrocks and Sattler [-@Horrocks98j] introduced this logic under the name $\mathcal{ALCHI}_{R^+}$. In $\mathcal{SHI}$, the signature of $\mathcal{ALC}$ is extended by - infinitely many relation symbols, which are interpreted by *transitive relations*, - and for each relation symbol $r$, there is a relation symbol $r^{-}$, which is interpreted by the inverse of the interpretation of $r$. The concept unification problem for $\mathcal{SHI}$ remains open. 
However, when considering $\mathcal{SHI}$ it is not the concept unification problem one is mainly interested in, but its generalisation to the *concept unification relative to role axioms*[^4]: in $\mathcal{SHI}$ and its extensions one can state in a so-called RBox (role box) that the interpretation of a relation symbol $r$ is included in the interpretation of a relation symbol $s$, in symbols $r \sqsubseteq s$. Now, $\mathcal{SHI}$ concepts $C$ and $D$ are called unifiable relative to an RBox $R$ iff there exists a substitution ${\mbox{\boldmath $s$}}$ (of complex $\mathcal{SHI}$-concepts for atomic ones) such that ${\mbox{\boldmath $s$}}(C)$ is equivalent to ${\mbox{\boldmath $s$}}(D)$ in every model satisfying the RBox $R$. It is easily seen that this problem is undecidable. Indeed, consider the RBox $R$ consisting of $s \sqsubseteq s^{-}$, $s^{-} \sqsubseteq s$, and $r \sqsubseteq s$, where $s$ is a transitive role. Then, in every model for $R$, $s$ is transitive, symmetric and contains $r$. By replacing the operator $\Box$ with $\forall r$ and the operator $\forall$ with $\forall s$ in the proof of Theorem \[main1\], one can easily show that concept unification relative to the RBox $R$ is undecidable. Thus we obtain the following: The concept unification problem relative to role axioms for $\mathcal{SHI}$ is undecidable. This undecidability proof also goes through for extensions of $\mathcal{SHI}$ such as, for example, $\mathcal{SHIN}$ and $\mathcal{SHIQ}$. Conclusion ========== In this paper, we have shown that for two standard constructors of modal logic—the universal modality and nominals—the unification and admissibility problems are undecidable. It follows that both unification and admissibility are undecidable for all standard hybrid logics and many of the most frequently employed description logics. Many intriguing problems remain open. 
The question whether the unification and admissibility problems for ${{\sf }K}$ (or, equivalently, $\mathcal{ALC}$) are decidable is one of the major open problems in modal and description logic.

We were partially supported by the U.K. EPSRC grants GR/S61966, GR/S63182, GR/S63175, GR/S61973.

, Blackburn, P., and Marx, M. 2000. The computational complexity of hybrid temporal logics. *8*, 653–679.
2006. Hybrid logics. See , 821–867.
, Calvanese, D., McGuinness, D., Nardi, D., and Patel-Schneider, P., Eds. 2003. . Cambridge University Press.
2001. Unification in a description logic with transitive closure of roles. In *Proceedings of the 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2001)*, R. Nieuwenhuis and A. Voronkov, Eds. Lecture Notes in Computer Science, vol. 2250. Springer-Verlag, Havana, Cuba, 217–232.
2001. Unification of concept terms in description logics. *31*, 277–305.
1994. Unification theory. In *Handbook of Logic in Artificial Intelligence and Logic Programming*, D. Gabbay, C. Hogger, and J. Robinson, Eds. Oxford University Press.
2001. Unification theory. In *Handbook of Automated Reasoning*, J. Robinson and A. Voronkov, Eds. Vol. I. Elsevier Science Publishers, 447–533.
, van Benthem, J., and Wolter, F., Eds. 2006. . Elsevier.
1990. Undecidable properties of extensions of provability logic. I. *29*, 231–243.
1992. A decidable modal logic with the undecidable admissibility problem for inference rules. *31*, 53–61.
1997. . Oxford Logic Guides, vol. 35. Clarendon Press, Oxford.
, Flum, J., and Thomas, W. 1994. . Springer.
2000. Best solving modal equations. *102*, 183–198.
2004. Unification, finite duality and projectivity in locally finite varieties of Heyting algebras. *127*, 99–115.
2004. Filtering unification and most general unifiers in modal logic. *69*, 879–906.
1992. Using the universal modality: Gains and questions. *2*, 5–30.
, Kozen, D., and Tiuryn, J. 2000. . MIT Press.
, Patel-Schneider, P. F., and van Harmelen, F. 2003. From $\mathcal{SHIQ}$ and RDF to OWL: The making of a web ontology language. *1*, 7–26.
1999. A description logic with transitive and inverse roles and role hierarchies. *9*, 385–410.
2005. A tableaux decision procedure for $\mathcal{SHOIQ}$. In *Proceedings of Nineteenth International Joint Conference on Artificial Intelligence (IJCAI 2005)*, L. Kaelbling and A. Saffiotti, Eds. Professional Book Center, 448–453.
2001. On the admissible rules of intuitionistic propositional logic. *66*, 281–294.
2003. Towards a proof system for admissibility. In *Proceedings of the 17th International Workshop ‘Computer Science Logic’*, M. Baaz and J. Makowsky, Eds. Lecture Notes in Computer Science, vol. 2803. Springer, 255–270.
1961. Recursive unsolvability of Post’s problem of “tag” and other topics in the theory of Turing machines. *74*, 437–455.
1997. . Studies in Logic and the Foundations of Mathematics, vol. 136. Elsevier.
2006. Algebras and coalgebras. See , 331–425.
, Wolter, F., and Chagrov, A. 2001. Advanced modal logic. In *Handbook of Philosophical Logic, 2nd edition*, D. Gabbay and F. Guenthner, Eds. Vol. 3. Kluwer Academic Publishers, 83–266.

[^1]: The language with infinitely many modal operators and nominals is often denoted by $\mathcal{H}$ and called the *minimal hybrid logic*; see, e.g., [@Arecesten].

[^2]: Alternatively, we could allow nominals to be substituted by nominals. This would not affect the undecidability result.

[^3]: This is the simplest version of the decision problem they consider. 
More generally, Baader and Narendran consider the problem whether there exists such a substitution which leaves certain atomic concepts intact. We will not consider this more complex decision problem in this paper. [^4]: In description logic, the most useful generalisation of the concept unification problem is unification relative to TBoxes *and* RBoxes. We will not discuss this generalisation here because the undecidability results presented in this paper trivially hold for it as well.
--- abstract: 'We show that spin-$S$ chains with SU(2)-symmetric, ferromagnetic nearest-neighbor and frustrating antiferromagnetic next-nearest-neighbor exchange interactions exhibit metamagnetic behavior under the influence of an external magnetic field for small $S$, in the form of a first-order transition to the fully polarized state. The corresponding magnetization jump increases gradually starting from an $S$-dependent critical value of exchange couplings and takes a maximum in the vicinity of a ferromagnetic Lifshitz point. The metamagnetism results from resonances in the dilute magnon gas caused by an interplay between quantum fluctuations and frustration.' author: - 'M. Arlego' - 'F. Heidrich-Meisner' - 'A. Honecker' - 'G. Rossini' - 'T. Vekua' date: 'May 27, 2011; revised November 22, 2011' title: Resonances in a dilute gas of magnons and metamagnetism of isotropic frustrated ferromagnetic spin chains --- Introduction {#sec:intro} ============ Quantum spin systems at low temperatures share many of the macroscopic quantum behaviors with systems of bosons such as Bose-Einstein condensation, superfluidity (see Ref.  and references therein), or macroscopic quantum tunneling, [@chudnovsky88] and may indeed be viewed as quantum simulators of interacting bosons.[@giamarchi08] Spin systems with frustration, in particular, realize exotic phases of strongly correlated bosons, such as spin liquids [@balents10] or supersolids.[@liu73] Experimentally, such systems can be studied both in low-dimensional quantum magnets (see, e.g., Ref. ) and in ultra-cold atomic gases.[@struck11; @jo11] In the latter case, interest in the many-body physics has been excited by the extraordinary control over interactions between bosons via Feshbach resonances. [@bloch08] These can, in particular, be used to tune interactions from repulsive to attractive. 
In the attractive regime, the collapse of an ultra-cold gas of bosons was observed in experiments.[@gerton00] In this work, we discuss a mechanism by which the same can be achieved in spin systems. In the large-$S$ limit, spins map onto bosons with a finite but large Hilbert space, justifying a description in terms of soft-core bosons. Yet, in many cases, similarities between spins and soft-core bosons may even exist for $S=1/2$. In three dimensions (3D), systems of spins and bosons resemble each other the more the smaller the density of bosons is, whereas 1D spin-$S$ antiferromagnets close to saturation behave as spinless fermions, or hard-core bosons. In spin systems, the external magnetic field $h$ tunes the density of magnons. The limit of a dilute gas of magnons is then realized as the fully polarized state (the vacuum of magnons) is approached from below. In this work, we argue that resonances can play a crucial role in frustrated quantum spin systems largely determining their low-energy behavior in an external magnetic field. We consider the frustrated ferromagnetic (FM) spin-$S$ chain and show that upon changing system parameters such as the coupling constants or $h$, one can tune the effective interaction between magnons from repulsive to attractive by exploiting the existence of resonances. As a main result, we demonstrate that, close to resonances, where the scattering length is much larger than the lattice constant, and in the case of attractive effective interactions, the intrinsic hard-coreness of spins does not play a significant role in the limit of a dilute gas of magnons. A behavior resembling the collapse of attractively interacting bosons [@nozieres82; @Mueller] therefore exists in such spin systems close to their fully polarized state. The thermodynamic instability of collapsed states causes jumps in the magnetization curve just below saturation. This has to be contrasted with the scattering length being of the order of a few lattice sites. 
In that case, which is realized for spin 1/2, the formation of mutually repulsive, multi-magnon bound states can be observed (see, e.g., Refs. ). Our work demonstrates that the mapping of a [*purely 1D*]{} spin system close to saturation to an effective theory of a dilute Bose gas properly accounts for the physics of the model. This is accomplished by connecting the scattering vertices of the microscopic lattice model with the coupling constants of the effective theory. Furthermore, despite the purely 1D nature of our problem, a $1/S$ expansion is a valuable tool and yields the correct physics. Concretely, we study the following system: $$H_S = \sum_{i=1}^L \left \lbrack J\vec{S}_i\cdot \vec{S}_{i+1}+ J' \vec{S}_i\cdot \vec{S}_{i+2}-h S^z_i \right\rbrack\,. \label{eq:ham}$$ $\vec{S}_i=(S_i^x, S^y_i, S^z_i)$ is a spin-$S$ operator acting on site $i$ and $L$ is the number of sites. $J < 0$ is the FM, nearest-neighbor exchange interaction and $J'=1$ is the antiferromagnetic (AFM), next-nearest-neighbor exchange interaction setting the energy scale. In the absence of an external field $h$, $H_S$ has a FM ground state for $J\le-4$ for all $S$ (see Ref. ). $J=-4$ is a ferromagnetic Lifshitz point where the quadratic term in the dispersion of the magnons vanishes. Systems with competing FM and AFM interactions are of timely interest, [@shannon06] in particular, the spin-$1/2$ version of Eq. ,[@chubukov91a; @kolezhuk05; @hm06a; @vekua07; @kecke07; @hikihara08; @sudan09] motivated by the experimental realizations in, e.g., LiCuVO$_4$ (Refs. ) and Li$_2$ZrCuO$_4$ (Ref. ). The 1D case of $J<0$ and $S > 1/2$ is largely unexplored (for $J>0$ and $S>1/2$, see Refs.  and ). We are mainly interested in the region $-4<J<0$ and magnetization $M=S^z/{(S L)}$ ($S^z=\sum_i \langle S^z_i \rangle $) close to saturation $M=1$, for general spin $S$. We will proceed in three steps: First, in Sec. 
\[sec:two\_magnon\], we discuss the solution of the two-magnon problem and introduce the scattering length. Second, in Sec. \[sec:dilute\], we map the low-energy limit of Eq. , close to saturation, to a dilute 1D gas of two species of bosons interacting via an effective short-range interaction.[@Batyev84; @nikuni95] We then calculate the interaction vertices in this effective theory using a $1/S$ expansion. Finally, in Sec. \[sec:dmrg\], we compare the analytical results with exact numerical ones using the density matrix renormalization group (DMRG) method [@white92b; @schollwoeck05] and exact diagonalization (ED). We put a particular focus on the case of $S=1$. A summary of our results is presented in Section \[sec:summary\], while technical details of the mapping to a dilute gas and of the $1/S$ expansion are given in Appendix \[app:dilute\]. A comparison of DMRG results for open boundary conditions vs. results for periodic boundary conditions is shown in Appendix \[app:dmrg\]. Two-magnon problem {#sec:two_magnon} ================== Solution of the two-magnon problem ---------------------------------- We now solve the interacting two-magnon problem, starting with the thermodynamic limit (for $S=1/2$, see Ref. ). On a chain of finite length $L$ with periodic boundary conditions the total momentum $K$ is a good quantum number due to the translational invariance of the Hamiltonian $H_S$ in Eq. . Thus, it is convenient to use a basis separating momentum subspaces $$|K,r\rangle=\sum_{l=1}^{L}e^{iK(l+r/2)}S_{l}^{-}S_{l+r}^{-}|F\rangle \, , \label{eq:mixed-basis}$$ where $|F\rangle$ is the fully polarized state, $K=2q\pi/L$ ($q=0,1,\cdots,L-1$) and $r$ is the relative distance of two magnons. The allowed values of $r$ depend on $S$ and the parity of $L$ and $q$. For instance, in the case of $S>1/2$ and $L$ even, $r=0, 1,..,L/2-1,(L/2)$ for $q$ odd (even). We expand a general two-magnon state with momentum $K$ into the (unnormalized) basis of Eq. 
(\[eq:mixed-basis\]) as $$|\Psi_{2M}\rangle=\sum_{r}C_{r}|K,r\rangle$$ and determine $C_{r}$ analytically by solving the two-magnon Schrödinger equation $$H_S|\Psi_{2M}\rangle=E_{2M}|\Psi_{2M}\rangle.$$ This leads to the recurrence relations $$\begin{aligned} \label{eq:recurrence} \Omega_0C_0&=&\frac{S}{\sqrt{S(2S-1)}}(\zeta_{1}C_1 +\zeta_{2}C_2 )\nonumber\\ (\Omega_0-J)C_1&=&\frac{(2S-1)^{3/2}}{{S^{3/2}}}\zeta_{1}C_0+\zeta_{1}C_2 \nonumber\\ &&+\zeta_{2}(C_1+C_3) \nonumber\\ (\Omega_0-1)C_2&=& \frac{(2S-1)^{3/2}}{{S^{3/2}}}\zeta_{2}C_0+\zeta_{2}C_4\nonumber\\ &&+ \zeta_{1} (C_1+C_3)\nonumber\\ \Omega_{0}C_{r} & = & \zeta_{1}\left(C_{r+1}+C_{r-1}\right)\nonumber\\ &&+ \zeta_{2}\left(C_{r+2}+C_{r-2}\right),\quad\mathrm{for}\,\,r\geq3,\end{aligned}$$ where $\zeta_{1}=2SJ\cos{(K/2)}$, $\zeta_{2}=2S\cos{(K)}$. When $|\Psi_{2M}\rangle$ is a bound state, $\Omega_0=E_b-4S(1+J^2/8)$ where $E_b$ is the (negative) binding energy (defined as the bound-state energy minus the energy of the minimum of the two-magnon scattering states). The (unnormalized) two-magnon bound states for a given $K$ are constructed with the ansatz $$C_{r}=e^{-\kappa_{-}r}+ve^{-\kappa_{+}r}\quad(r\geq1),\label{eq:Ansatz}$$ which, inserted in Eq. (\[eq:recurrence\]), leads to a characteristic quartic equation for $r\geq 3$ $$\Omega_{0}z^{2}-\zeta_{1}(z^{3}+z)-\zeta_{2}(z^{4}+1)=0 \, ,$$ $z$ being any of $e^{-\kappa_{\pm}}$ with $\text{Re}[\kappa_{\pm}]>0$. The remaining unknown quantities $C_{0}$, $v$ and $E_b$ are determined from the remaining relations listed in Eq. (\[eq:recurrence\]). Scattering length in the lattice problem ---------------------------------------- For $S>1/2$, bound states with energies below the minimum of the two-magnon scattering continuum exist only for $K\simeq \pm 2k_{cl}$ and only in a finite window of couplings $$\label{range_J1c} -4 < J < J_{cr}(S),$$ with $S$-dependent critical values $J_{cr}(S)$, as illustrated in Table \[tab:Jcr\]. 
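For given $\Omega_0$, $\zeta_1$, $\zeta_2$ the characteristic quartic can be solved numerically and the decaying modes ($\text{Re}[\kappa_{\pm}]>0$, i.e. $|z|<1$) selected. A minimal sketch of this step (our illustration: the binding energy entering $\Omega_0$ is a trial value here, whereas the true $E_b$ is fixed by the boundary relations for $r\le 2$ in Eq. (\[eq:recurrence\])):

```python
import numpy as np

# decaying solutions of the bulk recurrence (r >= 3) for S = 1, J = -3 at
# K = K* = 2*arccos(-J/4), using a *trial* binding energy E_b
S, J = 1.0, -3.0
k_cl = np.arccos(-J/4.0)
K = 2*k_cl
zeta1 = 2*S*J*np.cos(K/2)
zeta2 = 2*S*np.cos(K)
E_b = -0.01                          # trial value, for illustration only
Omega0 = E_b - 4*S*(1 + J**2/8)

# Omega0 z^2 - zeta1 (z^3 + z) - zeta2 (z^4 + 1) = 0, rewritten for np.roots
coeffs = [zeta2, zeta1, -Omega0, zeta1, zeta2]
z = np.roots(coeffs)
assert max(abs(np.polyval(coeffs, z))) < 1e-8   # roots do solve the quartic
decaying = z[abs(z) < 1]                        # Re[kappa] > 0  <=>  |z| < 1
assert len(decaying) == 2                       # the two modes kappa_-, kappa_+
kappa = -np.log(decaying)                       # C_r ~ e^{-kappa r}
```

The quartic is reciprocal (its coefficient list is palindromic), so roots come in pairs $(z, 1/z)$ and exactly two of them lie inside the unit circle, as the assertion checks.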
The critical value $J_{cr}(S)$, which is $J_{cr}\approx -2.11$ for $S=1$, quickly approaches $J_{cr}(S) \simeq -4$ with increasing $S$ (see Tab. \[tab:Jcr\]). In fact, the $1/S$ analysis to be presented in Sec. \[sec:dilute\] suggests the existence of an $S_{cr}$ beyond which this window disappears completely. $S$ 1 3/2 2 5/2 -------------- ------------- ------------- ------------- ------------- $-J_{cr}(S)$ 2.11 (2.95) 3.31 (3.42) 3.68 (3.66) 3.84 (3.80) : Critical exchange couplings $J_{cr}(S)$ for the existence of metamagnetism in Eq.  derived from solving the two-magnon problem (values in parenthesis: Results from the $1/S$ expansion of Sec. \[sec:dilute\]).[]{data-label="tab:Jcr"} We define the *scattering length* of bound states, in the thermodynamic limit $L \rightarrow \infty$, from their spatial extent (in analogy to the continuum problem of particles interacting via a short-range, attractive potential): $$\label{boundstatescatteringlength} a_S =\frac{1}{\mathrm{min} \{\text{Re}[\kappa_{\pm}]\}}.$$ The binding energy takes its lowest value ([*i.e.*]{}, the largest absolute value) for $K=K^*\simeq \pm 2k_{cl}=\pm 2\arccos{(-J/4 )}$ and this quantity, with extremely high accuracy, is related to the scattering length by $$\label{bindingenergy} E_b(K^*)\simeq-\frac{1}{ma_S^2}\, ,$$ where $m$ is the one-magnon *mass*, $$\label{magnonmass} m=\frac{2}{S(4-J)(4+J)}.$$ The relation Eq. (\[bindingenergy\]) between the binding energy and the scattering length that holds for our microscopic lattice model is typical for a 1D Bose gas in the continuum interacting via an attractive contact potential, the Lieb-Liniger model.[@LL] ![(Color online) 1D scattering length $a_S/a$ ($a$: lattice spacing) for $S=1/2,1,3/2$ (solid, dashed, dot-dashed line). []{data-label="fig:scatt"}](figure1.eps){width="0.9\columnwidth"} The scattering length $a_S$ can also be determined from the scattering problem of two magnons for general $S$ in the thermodynamic limit. 
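The mass of Eq. (\[magnonmass\]) is simply the inverse curvature of the one-magnon band $2S\epsilon_k$ at its minimum $k_{cl}=\arccos(-J/4)$ (with $\epsilon_k$ as defined in Sec. \[sec:dilute\]). A short numerical check of this, which also shows the mass diverging at the ferromagnetic Lifshitz point $J=-4$ where the quadratic term of the dispersion vanishes:

```python
import numpy as np

def mass(S, J):
    # Eq. (magnonmass): m = 2 / (S (4-J)(4+J))
    return 2.0/(S*(4 - J)*(4 + J))

S = 1.0
for J in (-1.0, -2.5, -3.5):
    k_cl = np.arccos(-J/4.0)
    band = lambda k: 2*S*(J*np.cos(k) + np.cos(2*k))   # 2S * eps_k, up to a shift
    dk = 1e-4
    # central second difference: curvature of the band at k_cl equals 1/m
    curv = (band(k_cl + dk) - 2*band(k_cl) + band(k_cl - dk))/dk**2
    assert abs(curv - 1.0/mass(S, J)) < 1e-4

# m diverges as J -> -4 (ferromagnetic Lifshitz point: flat quadratic term)
assert mass(S, -3.999) > 100*mass(S, -3.0)
```

Analytically the curvature is $S(16-J^2)/2 = 1/m$, consistent with Eq. (\[magnonmass\]) term by term.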
We have solved this problem, with the momenta of the two magnons (which participate in scattering) taken in the vicinity of the same dispersion minimum, $k_1=k_{cl}+k$ and $k_2=k_{cl}-k$. From the asymptotic form of the two magnon scattering state wavefunction we extract the scattering phase shift $\delta_S (k)$ for any $S$, $$\lim_{r\to \infty}C_r \sim \cos{(rk+\delta_S(k))}.$$ To extract the scattering length from the scattering phase shift we use the same relation as in a 1D continuum model of particles interacting via a short-range potential $$\label{scatteringstatescatteringlength} a_S=\lim_{k\to 0} \frac{\cot{(\delta_S(k))}}{k}.$$ This allows us to calculate the scattering length in the repulsive regime $a_S < 0$ as well (when two-magnon bound states are not formed below the minimum of the scattering continuum). In the attractive regime $a_S > 0$, the scattering lengths obtained from both approaches \[[*i.e.*]{}, Eq. (\[boundstatescatteringlength\]) and Eq. (\[scatteringstatescatteringlength\])\] are in excellent agreement with each other. As a side note on terminology, we call attractive (repulsive) regime the one in which the effective interaction between magnons is attractive (repulsive). We can now generalize the procedure[@Okunishi98] of mapping the antiferromagnetic (unfrustrated) spin-$S$ chain close to saturation onto the low-density limit of the Lieb-Liniger model with a coupling constant $$\label{okunishi} g_0=-\frac{2}{ma_S} \, . $$ However, in our model the single-magnon dispersion has two minima. The effective theory thus will be a two-component (two species) Lieb-Liniger model. 
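Plugging in the representative numbers the text quotes for $S=1$ (the scattering-length minimum $a_S \simeq 80a$ near $J \simeq -3.3$), Eqs. (\[okunishi\]), (\[magnonmass\]) and (\[bindingenergy\]) give a weak effective attraction and a very shallow bound state. A quick numerical sketch (illustrative numbers only):

```python
# S = 1 near the scattering-length minimum quoted in the text: a_S ~ 80 a
S, J, a_S = 1.0, -3.3, 80.0
m = 2.0/(S*(4 - J)*(4 + J))        # Eq. (magnonmass)
g0 = -2.0/(m*a_S)                  # Eq. (okunishi): effective coupling constant
E_b = -1.0/(m*a_S**2)              # Eq. (bindingenergy): shallow bound state

assert g0 < 0                      # a_S > 0: effective attraction (bound states)
assert abs(g0)*m < 1               # interactions are generically weak: |g0| m << 1
assert -1e-3 < E_b < 0             # binding energy ~ -4e-4 in units of J' = 1
```

The smallness of $|g_0|m$ here is the same statement made later for general $J<0$: even where the magnons attract, they do so weakly.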
There are two types of low-energy scattering processes, first when momenta of two magnons are in the vicinity of the same dispersion minimum that we have presented above (intraspecies scattering), and second, when the momenta $k_1$ and $k_2$ of two magnons are in the vicinity of different minima of dispersion, [*i.e.*]{}, $k_1=k_{cl}+k$ and $k_2=-k_{cl}-k$ (interspecies scattering). For the latter case we can repeat all steps presented above and extract another coupling constant $\tilde g_0$ from the interspecies scattering length, $\tilde a_S$ in analogy with Eq. (\[okunishi\]). We obtain that $\tilde g_0>0$ (implying that bound states with total momentum $K=0$ are never formed below the scattering continuum), and $\tilde g_0>g_0$ for any $S$ in the region $-4<J<0$. Since the relation $\tilde g_0>g_0$ always holds, the relevant scattering length at low energies is the intraspecies scattering length $a_S$. The scattering length, shown in Fig. \[fig:scatt\], can be well described, for small $S>1/2$, by a sum of two terms (resonances): $a_S\simeq \lambda^-_S/(4+J)+\lambda^+_S/(J_{cr}(S)-J)$ \[where $\lambda^{\pm}_S$ are numerical prefactors\]. We emphasize that, for $S\geq 1$, the scattering length is in general much larger than the lattice spacing, as is evident from Fig. \[fig:scatt\]: For $S=1$, $a_S$ takes a minimum at $J\simeq -3.3$ with $ a_S\simeq 80a$. Additionally, the emergence of bound states manifests itself by a diverging scattering length at $J_{cr}(S)$, where $a_S$ changes its sign jumping from $-\infty$ to $+ \infty$. Thus, bound states are typically shallow, with a binding energy given by Eq. (\[bindingenergy\]). In addition, the minima in their dispersion occur at incommensurate momenta $K^*$. For $S=1/2$, any $J<0$ induces a two-magnon bound state with total momentum $K=\pi$, [@chubukov91a; @hm06a] [*i.e.*]{}, $a_S > 0$ and there is no resonance at $-4 < J<0$ for $S=1/2$. 
Hence, $S=1/2$ is very different from the $S>1/2$ case where bound states with $K=\pi$ are never below the two-magnon scattering continuum, and, as discussed above, a resonance exists for $-4 < J<0$ and $1\leq S < S_{cr}$. In order to analyze the finite-size effects with respect to results in the thermodynamic limit, we have numerically diagonalized $H_S$ in the basis given in Eq. (\[eq:mixed-basis\]) (see Ref.  for details of the procedure). We obtained the full spectrum, *i.e.*, the scattering continuum and bound/antibound states if present, for selected values of $J \in[-4,0]$ in systems with up to $L=4000$ sites and several values of $S$. Table \[tab:Num\_Jcr\] shows the numerical determination of $J_{cr}(S)$ for $S=1$ and different system sizes. Although finite-size effects are apparent, a quadratic fit in $1/L$ to numerical data for $J_{cr}(S)$ extrapolates to $J_{cr}\simeq -2.11$ in agreement with the result determined directly in the thermodynamic limit (see the preceding discussion and Table \[tab:Jcr\]). $L$ 1000 2000 4000 $\infty$ ---------------- -------- -------- -------- ---------- $-J_{cr}(S=1)$ $2.25$ $2.16$ $2.13$ $2.11$ : Finite-size dependence of critical values $J_{cr}(S)$ for the emergence of bound states below the minimum of the two-magnon continuum of scattering states, for $S=1$.[]{data-label="tab:Num_Jcr"} Mapping of the spin Hamiltonian to a dilute gas of bosons {#sec:dilute} ========================================================= In this section, we describe our effective theory in the thermodynamic limit, for the case of a finite (though vanishingly small) density of magnons. The mapping to a dilute gas of bosons is motivated by the following observation: For $S>1/2$, we have shown that $a_S$ is large. Hence in the dilute limit, we can safely neglect the hard-core constraint and take the continuum limit. 
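As a side check, the extrapolation quoted in Table \[tab:Num\_Jcr\] is a quadratic fit in $1/L$; a minimal sketch reproducing the $L \to \infty$ value from the three tabulated points:

```python
import numpy as np

# finite-size values of -J_cr(S = 1) from Table [tab:Num_Jcr]
L = np.array([1000.0, 2000.0, 4000.0])
Jcr = np.array([2.25, 2.16, 2.13])

# fit  -J_cr(L) = a + b/L + c/L^2 ;  np.polyfit returns [c, b, a]
coeffs = np.polyfit(1.0/L, Jcr, 2)
extrapolated = coeffs[-1]           # intercept = L -> infinity value

assert abs(extrapolated - 2.11) < 1e-6   # matches J_cr ~ -2.11 from Table I
```

With three points and a degree-2 polynomial the fit is an exact interpolation, so the intercept $2.11$ follows from elementary algebra as well.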
For $S=1/2$, on the contrary, the scattering length $a_S$ is typically of the order of a few lattice constants and only for $-4<J<-3.9$ does $a_S$ become comparable to the smallest value of the scattering length for $S=1$. For $S\geq 1$, close to saturation, and in the low-energy limit, we therefore map our system onto a dilute two-component gas of bosons interacting with effective short-range interactions. Many-body effects will be incorporated by properly shifting the two-body T matrix off-shell as explained in Ref. . We show that, while the [*inter*]{}species interaction is always repulsive and stronger than the [*intra*]{}species interaction, the latter undergoes a sign change. When the intraspecies interaction becomes negative, the bosons are unstable against a collapse. We show that a $1/S$ expansion captures this physics correctly and, similar to the case of the (unfrustrated) Heisenberg chain, [@Johnson] is applicable to the present problem, despite its one-dimensional nature. Effective Hamiltonian --------------------- Using the Dyson-Maleev transformation [@DysonMaleev] (the Dyson-Maleev representation is used here for convenience; we have checked that the explicitly Hermitian Holstein-Primakoff representation [@HolsteinPrimakoff] provides, to leading order in $1/S$, equivalent results) $$\begin{aligned} S_i^z&=&S-a_i^{\dagger}a_i \, ,\quad S_i^+= \sqrt{2S}a_i \, , \nonumber\\ S_i^-&=&\sqrt{2S}a_i^{\dagger}(1-a_i^{\dagger}a_i/2S)\,,\end{aligned}$$ we map Eq.  onto a bosonic problem: $$\label{DysonMaleev} H=\!\sum_k(2S\epsilon_k-\mu)a_k^{\dagger}a_k +\!\!\!\sum_{k,k',q} \!\!\! \frac{\Gamma_0(q;k,k')}{2L} a_{k+q}^{\dagger}a_{k'-q}^{\dagger}a_{k}a_{k'} \, ,$$ where $$\epsilon_{k}=J\cos{k}+\cos{2k}-(J\cos{k_{cl}}+\cos{2k_{cl}})\ge 0$$ is the single-magnon dispersion and $k_{cl}=\arccos{(-J/4)}$. Note that in our normalization, the minima of the single-particle dispersion are at zero energy: $\epsilon_{\pm k_{cl}}=0$. 
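The stated normalization is easy to verify directly: $\epsilon_{\pm k_{cl}}=0$ and $\epsilon_k \ge 0$ everywhere in the Brillouin zone (the two degenerate minima at $\pm k_{cl}$ are global for all $-4<J<0$). A minimal check:

```python
import numpy as np

def eps(k, J):
    # single-magnon dispersion, normalized so that eps(+-k_cl) = 0
    k_cl = np.arccos(-J/4.0)
    return J*np.cos(k) + np.cos(2*k) - (J*np.cos(k_cl) + np.cos(2*k_cl))

for J in (-0.5, -2.0, -3.5):
    k_cl = np.arccos(-J/4.0)
    assert abs(eps(k_cl, J)) < 1e-12 and abs(eps(-k_cl, J)) < 1e-12
    k = np.linspace(-np.pi, np.pi, 100001)
    assert eps(k, J).min() > -1e-12      # non-negative across the zone
```

The positivity at the zone boundary follows from $\epsilon_{0}-\epsilon_{k_{cl}} = (J+4)^2/8 \ge 0$, vanishing exactly at the Lifshitz point $J=-4$.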
The bare interaction vertex $\Gamma_0$ is given by $\Gamma_0(q;k,k') = V_q-\frac{1}{2}(V_k+V_{k'})$ with $V_k= 2J\cos{k}+2\cos{2k}$. The chemical potential is $\mu=h^{cl}_s-h$, where $h_s^{cl}$ is the classical saturation field value $h^{cl}_s=S(J+4)^2/4$. We are interested in the dilute regime $\mu\to 0$. Concentrating on the low-energy behavior we arrive at, via a Bogoliubov procedure, [@Batyev84; @nikuni95] a two-component Bose gas interacting via a $\delta$-potential with Hamiltonian density $$\mathcal{H}_{\rm eff}=\sum_{\alpha}-\frac{|\nabla \psi_{\alpha}|^2}{2m}+\frac{g_0(S)}{2}(n_1^2+n_2^2)+\tilde g_0(S) \, n_1 n_2\, . \label{eq:effective}$$ Here $\psi_\alpha$, $\alpha=1,2$ describe bosonic modes with momenta close to $\pm k_{cl}$ ([*i.e.*]{}, the Fourier transforms of $\psi_\alpha$ are $\psi_1(k\!\!\to\!\!0) \approx a_ {k_{cl}+k}$, $\psi_2(k\!\!\to\!\!0) \approx a_ {-k_{cl}+k}$) and $n_{\alpha}=\psi_\alpha^{\dagger}\psi_\alpha$ are the corresponding densities. The bare coupling constants of the effective 1D model of the two-component Bose gas, $g_0(S)$ and $\tilde g_0(S)$, are, in the dilute limit of bosons, related to the renormalized vertices of the microscopic model Eq. (\[DysonMaleev\]) through $$\begin{aligned} \label{microeffective} && \Gamma(0;k_{cl},k_{cl})= \frac{g_0(S)}{1+g_0(S){\sqrt{2m}}/({\pi \sqrt{\mu}})} ,\\ &&\Gamma(0;k_{cl},\!-k_{cl})+\Gamma(-2k_{cl};k_{cl},\!-k_{cl})\!=\!\frac{\tilde g_0(S)} {1+\frac{\tilde g_0(S){\sqrt{2m}}}{\pi \sqrt{\mu}}}. \nonumber \end{aligned}$$ The relations Eq. (\[microeffective\]) follow from a generalization of the corresponding equation for the case of a one-component Bose gas [@kolomeisky92; @Lee2002] to the two-component case using an RG analysis[@KolezhukRG] (see Appendix \[app:dilute\] for details of the calculation). $1/S$ expansion {#subsect:1overS} --------------- Next, we apply a $1/S$ expansion to calculate the interaction vertices and extract the coupling constants $g_0(S)$ and $\tilde g_0(S)$. 
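A useful identity follows from these definitions: at incoming momenta $k=k'=k_{cl}$ the bare vertex collapses to $\Gamma_0(p;k_{cl},k_{cl}) = V_p - V_{k_{cl}} = 2\epsilon_p$, which is exactly the replacement made for the renormalized vertex in the $1/S$ treatment below. A quick numerical confirmation:

```python
import numpy as np

J = -2.5
k_cl = np.arccos(-J/4.0)
V = lambda k: 2*J*np.cos(k) + 2*np.cos(2*k)
eps = lambda k: J*np.cos(k) + np.cos(2*k) - (J*np.cos(k_cl) + np.cos(2*k_cl))
Gamma0 = lambda q, k, kp: V(q) - 0.5*(V(k) + V(kp))

# Gamma_0(p; k_cl, k_cl) = V_p - V_{k_cl} = 2 eps_p for all transferred momenta p
p = np.linspace(-np.pi, np.pi, 1001)
assert np.allclose(Gamma0(p, k_cl, k_cl), 2*eps(p))
```

The identity is exact (not only to leading order), since both sides equal $V_p - V_{k_{cl}}$ term by term.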
Using a standard ladder approximation the Bethe-Salpeter equation for the vertices $\Gamma$ reads: $$\begin{aligned} \label{BetheSalpeterEquation} \Gamma(q;k,k')&=& \Gamma_0(q;k,k') \\ &-&\frac{1}{2SL}\sum_{p} \frac{ \Gamma_0(q-p;k+p,k'-p) } {\epsilon_{k+p}+\epsilon_{k'-p}} \Gamma(p;k,k')\nonumber\,.\end{aligned}$$ Setting the transferred momentum $q=0$ in $\Gamma$ and the incoming momenta to $k=k'=k_{cl}$, we get (see Appendix \[app:dilute\] for details): $$\begin{aligned} \label{nine} &&\Gamma(0;k_{cl},k_{cl})\left[1+\frac{V_0-V_{k_{cl}}}{2SL} \sum_{p} \frac{ 1 } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}}\right]=\nonumber\\ &&V_0-V_{k_{cl}}+\frac{1}{2SL}\sum_{p}\! \left[1-\frac{ V_{p}-V_0 } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}} \right]\Gamma(p;k_{cl},k_{cl})\nonumber\\ &&-\frac{1}{2SL}\sum_{p} \frac{ (V_0-V_{k_{cl}}) \left[\Gamma(p;k_{cl},k_{cl})-\Gamma(0;k_{cl},k_{cl})\right] } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}}.\nonumber\\\end{aligned}$$ Now, in the spirit of the $1/S$ expansion, we replace the renormalized vertex with the bare vertex on the right hand side of Eq. (\[nine\]), $\Gamma(p;k_{cl},k_{cl})\to 2\epsilon_p$, which is possible since there are no infrared divergences. Regularizing the left hand side of Eq. (\[nine\]) as in Refs. , and using Eq. (\[microeffective\]) we extract the coupling constants of the effective model. The analytical expression for $g_0(S)$ is $$\label{LLC_wc1} g_0(S)=\frac{F}{1-\frac{F(J^2-8)}{|J|S(16-J^2)^{3/2}}}$$ (the derivation of Eq.  can be found in Appendix \[app:dilute\] and the constant $F$ is given in Eq. ). To leading order in $J+4$, $$\lim _{J\to -4^+} g_0(S) \simeq \frac{S-S_{cr}}{4S}(J+4)^2+O\left((J+4)^{5/2}\right)\,.$$ To first order in $1/S$, we obtain $S_{cr}=6$, which is not that large a number, hence corrections beyond $1/S$ may affect $S_{cr}$. 
In the same way we calculate $\tilde g_0(S)$ as $$\label{g0tilde} \tilde g_0(S)=\frac{\tilde F}{1+\frac{J^2-8}{16S}}$$ (the derivation is presented in detail in Appendix \[app:dilute\]; see Eq.  for the expression for $\tilde F$). From Eqs.  and , we notice that $g_0(S) < \tilde g_0(S)$ for $-4<J<0$. Thus the state below saturation is a single-component one. Provided the interactions are repulsive, the ground state is a translationally invariant chiral state,[@kolezhuk05] where bosons prefer to ‘condense’ at the same minimum of the single-particle dispersion since they experience a minimal repulsion there.[@jackeli04] We also note that $|g_0|m\ll 1$ for $J<0$, hence interactions between bosons are generically weak. In particular, even though $m\to \infty$ when approaching the ferromagnetic Lifshitz point, $|g_0|m\to 0$. ![(Color online) Effective bare intraspecies interaction $g_0(S)$, for $S=3$ (solid line), representative of the generic behavior for $S<S_{cr}$, $S=10$ (dashed curve), representative of $S>S_{cr}$ \[dot-dashed line: $g_0(S=\infty)$\]. Inset: $g_0$ for $S=1$ and $3/2$. []{data-label="fig:geff"}](figure2.eps){width="0.9\columnwidth"} The effective bare intraspecies interaction is depicted in Fig. \[fig:geff\] and behaves as $g_0 (S) \sim [J-J_{cr}(S)]$ for $J\to J_{cr}(S)$. The scattering length is related to the effective coupling constant by Eq. , signaling a resonance at $J_{cr}(S)$. Thus, we see that for $S<S_{cr}$ there is a finite region near $J\simeq-4$ where $g_0 (S)<0$ and bosons attract each other, producing a collapsed state. To corroborate this, using ED for Eq.  and $S=1$ with periodic boundary conditions, we have calculated the ground-state momentum of the states with a small, but finite number of magnons, which is incommensurate, supporting the picture of a uniform chiral state in the repulsive case $J>J_{cr}$, and a collapsed state in the attractive case $J<J_{cr}$ at one of the two minima of the single-particle dispersion. 
In the attractive case, $\partial^2E_0/\partial n^2<0$, where $E_0$ and $n $ are the bosons’ ground-state energy and density, respectively. In the language of spins, the inverse magnetic susceptibility at saturation becomes negative, and hence, following standard arguments,[@dmitriev01] we conclude that there is a first-order transition at $M=1$, [*i.e.*]{}, a jump in the magnetization curve just below saturation. As pointed out above, the case of $S=1/2$ is special since the scattering length is typically of the order of the lattice constant here. The mapping of the $S=1/2$ case to a two-component Bose gas (by the procedure presented above for $S>1/2$) can be trusted only for $J \to -4$, where the scattering length becomes much larger than the lattice constant. In that case we can easily incorporate the exact hard-core constraint into our formalism [@nikuni95] and again expect that $S=1/2$ also shows metamagnetic behavior. This conclusion is in agreement with DMRG results for $S=1/2$.[@sudan09] Note that a metamagnetic jump can also be stabilized for spin $1/2$ with suitable anisotropic exchange interactions.[@GMK98; @hirata99] However, with our procedure we cannot account for the formation of stable two-, three-, and four-magnon bound states that is characteristic for most of the region $J>-4$ in the spin-$1/2$ frustrated ferromagnetic Heisenberg chain.[@hm06a; @kecke07; @hikihara08; @sudan09] Going back to $S>1/2$, at lower $M$, corresponding to higher densities of magnons, the hard-core nature of spins eventually prevails as well, resulting in a uniform ground state at a nonzero momentum. However, as already mentioned, from the finite-size analysis of the two-magnon problem, we observe that bound states disappear with decreasing $L$, suggesting that the attractive effective potential (in the limit of a small magnon density) can become repulsive upon increasing the magnon densities. 
Thus, the state below the jump ([*i.e.*]{}, $0<M<1-\Delta M_{\mathrm{jump}}$, where $\Delta M_{\mathrm{jump}}$ is the height of the jump) will be similar to the one encountered in the case of $J>J_{cr}$, [*i.e.*]{}, it is a translationally uniform chiral state. ![(Color online) (a) Magnetization curves $M(h)$ for $S=1$ at $J=-2.5,-3,-3.5$. (b) $M(h)$ for $S=1$, $J=-1$ (c) $M(h)$ for $S=3/2$ at $J=-3.5$ (all for $L=128$).[]{data-label="fig:mag_h"}](figure3a.eps "fig:"){width="0.85\columnwidth"}\ ![(Color online) (a) Magnetization curves $M(h)$ for $S=1$ at $J=-2.5,-3,-3.5$. (b) $M(h)$ for $S=1$, $J=-1$ (c) $M(h)$ for $S=3/2$ at $J=-3.5$ (all for $L=128$).[]{data-label="fig:mag_h"}](figure3bc.eps "fig:"){width="0.9\columnwidth"} DMRG results {#sec:dmrg} ============ Next, we turn to numerical results for the case of $S=1$ (unless stated otherwise), solving for the ground state of Eq.  in a finite magnetic field $h$, using ED where possible or DMRG.[@fn-alps] We present data from DMRG simulations using up to $1200$ states, for $L\leq 128$ sites, and for open boundary conditions (OBC), unless stated otherwise. Magnetization curves and magnetization profiles ----------------------------------------------- ![(Color online) Magnetization profiles for $S=1$, $L=128$ at $J=-1$ (no jump, dashed lines) and $J=-3$ (jump at $S^z=97$) for $S^z=80,90,110,120$ (top to bottom).[]{data-label="fig:mag_prof"}](figure4.eps){width="0.9\columnwidth"} The main result of this work, namely, the metamagnetic transition from a gapless finite-field phase to full saturation, is clearly seen in the magnetization curves shown in Fig. \[fig:mag\_h\](a). For $S=1$, we observe the appearance of this jump for $-4< J \lesssim -2$, while for $S=3/2$ \[an example is shown in Fig. \[fig:mag\_h\](c)\] the jump exists in a much narrower window $J\lesssim -3 $. This is consistent with our analytical results for $J_{cr}$, listed in Tab. \[tab:Jcr\]. 
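For orientation, magnetization curves of this kind follow from the sector energies: at field $h$ the ground state minimizes $E(S^z) - hS^z$ over the total-$S^z$ sectors, and a concave stretch of $E(S^z)$ is skipped over, producing the jump. A minimal sketch with synthetic (purely illustrative, non-DMRG) energies:

```python
import numpy as np

# Synthetic sector energies E(S^z) for a chain of L sites with spin S.
# A concave region near saturation (put in by hand here) makes some
# sectors never be ground states in a field -> metamagnetic jump.
L, S = 8, 1
sz = np.arange(0, L * S + 1)
E = 0.05 * (sz - 3.0) ** 2   # convex background (illustrative only)
E[-1] -= 0.45                # cheap last step: concavity near saturation

def magnetization(h):
    """M(h) from minimizing E(S^z) - h*S^z over sectors."""
    return sz[np.argmin(E - h * sz)] / (L * S)

hs = np.linspace(0.0, 1.0, 2001)
M = np.array([magnetization(h) for h in hs])
jumps = np.diff(M)
print(jumps.max() > 1.5 / (L * S))  # a jump larger than one S^z step
```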
Moreover, for $-2 \lesssim J<0 $, we resolve a plateau in $M(h)$ at $M=0$, which is due to the Haldane gap[@whitehuse] in this $S=1$ system \[see Fig. \[fig:mag\_h\](b)\]. This gap defines the critical field $h_c$ that separates the gapped Haldane phase from the finite-$M$ phase with a smooth $M(h)$-behavior for $h_c<h<h_{\mathrm{sat}}$. The collapse of magnons manifests itself in the magnetization profiles ($1-\langle S^z_i \rangle$ vs. site $i$) using OBC, displayed in Fig. \[fig:mag\_prof\] for $J=-3$ and $J=-1$. In the former case, there is a jump, but in the latter, there is none. Clearly, in the states that get skipped over ($S^z>97$ for $J=-3$), magnons collapse into the center of the system, whereas for actual ground states below the metamagnetic transition, the magnetization profiles become flat. By contrast, in the case of $J=-1$ where the transition to the fully polarized state is smooth and continuous, [*all*]{} profiles are, apart from boundary effects, flat. Central charge -------------- Our effective theory developed in Section \[sec:dilute\] suggests that the gapless phase in the region $h_{c}<h<h_{\mathrm{sat}}$ is a one-component phase (where $h_{\mathrm{sat}}$ is the saturation field). To substantiate this result, one can make use of entanglement measures such as the von-Neumann entropy to extract the central charge, which directly yields the number of components of the gapless state. The von-Neumann entropy is defined as $$S_{vN}(l)=- \mbox{tr}(\rho_l \ln \rho_{l}) \, ,$$ where $\rho_l$ is the reduced density matrix of a subsystem of length $l$ of our one-dimensional chain of length $L$. In a gapless state that is conformally invariant, the $l$ and $L$ dependence of the von-Neumann entropy is given by[@vidal03; @calabrese] $$S_{vN}(l) =\frac{c}{3} \ln\left( \frac{L}{\pi} \sin(\frac{\pi}{L} l)\right)+g\,, \label{eq:cch}$$ which is valid for systems with periodic boundary conditions (PBC). 
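Equation (\[eq:cch\]) translates into a simple linear fit in the variable $x = \ln[(L/\pi)\sin(\pi l/L)]$, with slope $c/3$. A self-contained sketch with synthetic entropies generated at $c=1$ (illustrative data, not the actual DMRG entropies):

```python
import numpy as np

# Recover the central charge c from the Calabrese-Cardy form
#   S_vN(l) = (c/3) * ln[(L/pi) * sin(pi*l/L)] + g
# by a linear fit in x = ln[(L/pi) sin(pi l / L)].
# Synthetic data at c = 1 are used for illustration.
L = 64
l = np.arange(10, 55)
x = np.log((L / np.pi) * np.sin(np.pi * l / L))
c_true, g_true = 1.0, 0.7
svn = (c_true / 3.0) * x + g_true

slope, g_fit = np.polyfit(x, svn, 1)  # least-squares line
c_fit = 3.0 * slope
print(round(c_fit, 3))  # -> 1.0
```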
PBC are preferable for the calculation of the central charge from Eq.  since for OBC, there may be additional oscillatory terms. $g$ is a non-universal constant that depends on $M$. As DMRG directly accesses the eigenvalues of these reduced density matrices,[@schollwoeck05] it is straightforward to measure $S_{vN}(l)$ with this numerical method. Some typical DMRG results (squares) for systems of $L=64$ and periodic boundary conditions are presented in Fig. \[fig:svn\]. We have fitted the expression Eq.  to our numerical data (shown as solid lines in the figure) and obtain $c=1.0\pm 0.1$ in all examples. Therefore, we expect the gapless phase to be a (chiral) one-component liquid. Note, though, that at both small $M$ and $|J|$, where the convergence of DMRG is notoriously difficult, we cannot completely rule out the presence of a $c=2$ region, which, however, is irrelevant for the main conclusions of our work. ![(Color online) DMRG results for the von-Neumann entropy $S_{vN}(l)$ in the gapless phase $h_{c}<h<h_{\mathrm{sat}}$ of the $S=1$ system: (a) $J=-1$, $M=1/2$, (b) $J=-2$, $M=1/2$, (c) $J=-3$, $M=5/16$ (symbols). The lines are fits to Eq. , resulting in $c=1.0\pm0.1$ in all cases (we exclude $S_{vN}(l)$ for $l< 10$ and $l>54$ from the fit). In this figure, we display results for periodic boundary conditions and $L=64$ sites. []{data-label="fig:svn"}](figure5.eps){width="0.9\columnwidth"} Phase diagram for $S=1$ ----------------------- Our results for the $S=1$ chain are summarized in the $h$ vs. $J$ phase diagram Fig. \[fig:phase\]. We identify three phases: (i) a gapped $M=0$ phase at $h<h_c$ (similar to the Double-Haldane phase known for $J>0$, see Ref. ), (ii) a gapless (chiral) finite-field phase for $h_c<h<h_{\mathrm{sat}}$, and (iii) the fully polarized state at $h_{\mathrm{sat}}<h$ (with $h_{\mathrm{sat}}=0$ for $J<-4$). $\Delta M_{\mathrm{jump}}$ is plotted in the inset of Fig. 
\[fig:phase\]: the jump sets in at $J\lesssim -2 $ (close to where the zero-field gap becomes small, rendering it difficult to resolve numerically), consistent with our theory. Since in the limit of $J\to 0$ one has two spin-1 chains with antiferromagnetic interactions, which both separately have a Haldane gap at zero field,[@whitehuse] upon coupling the chains one obtains the so-called Double-Haldane phase (in contrast to the regular Haldane phase that is inherited from a single spin-1 chain with antiferromagnetic interactions). Both phases, the Double-Haldane and the Haldane phase, are realized in the frustrated, antiferromagnetic spin-1 chain,[@kolezhuk96] yet in our case, only the Double-Haldane phase exists. The determination of the corresponding spin gap $h_c$ is somewhat subtle: a Haldane chain with open boundaries gives rise to spin-1/2 excitations at the open ends.[@kennedy90] Since we have two chains (for small $|J|$), we have a total of four spin-1/2 end spins. Hence, the spin gap in Fig. \[fig:phase\] is determined from $$h_c = E(S^z=3) - E(S^z=2) \, , \label{eq:HcOBC}$$ where $E(S^z)$ is the ground-state energy in a sector with a given total $S^z$. It is worth emphasizing several differences with the phase diagram of the spin-1/2 version of Eq. . First, for $S=1$, there are no multipolar phases, which occupy a large portion of the corresponding spin-1/2 phase diagram.[@hm06a; @vekua07; @kecke07; @hikihara08; @sudan09] Second, the spin-1/2 system features an instability towards nematic order,[@chubukov91a; @vekua07; @kecke07] which can be excluded on general grounds for integer spin-$S$ chains,[@kolezhuk05] even for $0<M\ll 1$. ![(Color online) Phase diagram of Eq.  for $S=1$ (circles: saturation field $h_{\mathrm{sat}}$; squares: spin gap $h_{c}$). Inset: height $\Delta M_{\mathrm{jump}}$ of the metamagnetic jump vs. $J$.
The comparison of $L=64$ (open symbols) and $L=128$ (solid symbols) as well as finite-size scaling (not shown here) supports that both $\Delta M_{\mathrm{jump}}$ and $h_c$ are finite in extended regions of $J$.[]{data-label="fig:phase"}](figure6.eps "fig:"){width="0.9\columnwidth"}\[t!\] Summary {#sec:summary} ======= In conclusion, we showed that resonances can play a crucial role in determining the low-energy behavior of frustrated quantum spin systems subject to a magnetic field. The proximity of resonances caused by an interplay between frustration ($J>-4$) and quantum fluctuations ($1/2<S<S_{cr}$) results in extremely large values of the 1D scattering length, which allows us to develop an effective theory of a weakly interacting two-component Bose gas. The quasi-collapse of the dilute gas of magnons provides the physical origin of the emergent metamagnetism. The predictions of our analytic theory were verified by numerical data. We focused on the case $S=1$ since there the jump in the magnetization curve is the most pronounced. As a by-product, we obtained the phase diagram for the $S=1$ $J$-$J'$ chain with antiferromagnetic $J'$ and ferromagnetic $J$ in a magnetic field. This phase diagram is remarkably simple. In particular, there are no multipolar phases in the case $S=1$, in marked contrast to the spin-1/2 case.[@hm06a; @vekua07; @kecke07; @hikihara08; @sudan09] We thank H. Frahm, A. Kolezhuk, R. Noack, and D. Petrov for fruitful discussions. A.H. acknowledges financial support from the DFG via a Heisenberg fellowship (HO 2325/4-2). T.V. is supported by the Center of Excellence QUEST. G.R. and M.A. are partially supported by CONICET (PIP 1691) and ANPCyT (PICT 1426). Details on the mapping to a dilute Bose gas {#app:dilute} =========================================== In Sec. \[sec:dilute\], we mapped the spin Hamiltonian Eq.  close to saturation to an effective, bosonic field theory Eq. .
Here we provide the details of generalizing the procedure of obtaining the coupling constants from the many-body T-matrix for an effective single-component Bose gas model, which is described in Ref. , to the case relevant to us here, namely an effective theory of a two-component Bose gas. We introduce the ‘mean-field interaction coefficients’, $$\begin{aligned} \label{effectiveinteraction} g(S)&=&\Gamma(0;k_{cl},k_{cl}) \\ \tilde g(S)&=&\Gamma(0;k_{cl},-k_{cl})+\Gamma(-2k_{cl};k_{cl},-k_{cl}) \, ,\end{aligned}$$ where $\Gamma(q;k,k')$ is the full interaction vertex of the Hamiltonian Eq. (\[DysonMaleev\]). They satisfy mean-field like relations for $\tilde g_{0}>g_0$, $$\mu=g(S)n$$ and for $\tilde g_{0}<g_0$ $$\mu=(g(S)+\tilde g(S))n/2,$$ where $n=\langle\sum_{\alpha}n_{\alpha} \rangle\to0$ is the total density of bosons. To get the connection between $g(S)$ and $\tilde g(S)$ on the one hand and the bare coupling constants of the effective 1D model of a two-component Bose gas, $g_0(S)$ and $\tilde g_0(S)$ on the other hand, we generalize the corresponding equation for the case of a one-component Bose gas [@Lee2002] to the two-component case, consistent with the $SU(2)$ symmetry of the RG fixed point of a dilute, two-component Bose gas,[@KolezhukRG] $\lim_{\mu\to 0}g(S)= \lim_{\mu\to 0} \tilde g(S)=\pi\sqrt{\mu}/\sqrt{2m}$, $$\begin{aligned} \label{microeffectiveapp} && g(S)= \frac{g_0(S)}{1+g_0(S){\sqrt{2m}}/({\pi \sqrt{\mu}})},\\ &&\tilde g(S)\!=\! \frac{\tilde g_0(S)}{1+\tilde g_0(S){\sqrt{2m}}/({\pi \sqrt{\mu}})}. \label{microeffective_a} \end{aligned}$$ To zeroth order in $1/S$, we have $$g(S=\infty)=V_0-V_{k_{cl}}=\frac{(J+4)^2}{4}$$ and $$\tilde g(S=\infty)= V_{2k_{cl}}+V_0-2V_{k_{cl}}>g(S=\infty) \, .$$ Thus, classically ($S\to \infty, m\to 0$), we have $\tilde g_{0}>g_0$ for $-4<J<0$. We will show below that incorporating quantum fluctuations does not modify this relation. 
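The quoted fixed-point property, $\lim_{\mu\to 0} g(S) = \lim_{\mu\to 0}\tilde g(S) = \pi\sqrt{\mu}/\sqrt{2m}$, follows directly from the structure of Eqs. (\[microeffectiveapp\]) and (\[microeffective\_a\]) and is easy to verify numerically (arbitrary illustrative values for $g_0$ and $m$):

```python
import math

# Renormalized coupling g = g0 / (1 + g0*sqrt(2m)/(pi*sqrt(mu))).
# As mu -> 0 it flows to the universal value pi*sqrt(mu)/sqrt(2m),
# independently of the bare coupling g0 (hard-core fixed point).
def g_renorm(g0, mu, m=0.5):
    return g0 / (1.0 + g0 * math.sqrt(2.0 * m) / (math.pi * math.sqrt(mu)))

def g_universal(mu, m=0.5):
    return math.pi * math.sqrt(mu) / math.sqrt(2.0 * m)

mu = 1e-10
for g0 in (0.3, 3.0, 30.0):
    print(abs(g_renorm(g0, mu) / g_universal(mu) - 1.0) < 1e-3)
```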
In the following, we calculate the effective intraspecies and interspecies interaction constants to see how $1/S$ corrections modify $g_0(S)$ and $\tilde g_0(S)$. In order to obtain $g(S)$, we set $q=0$ and $k=k'=k_{cl}$ in $\Gamma(q;k,k')$ given by Eq. (\[BetheSalpeterEquation\]), $$\begin{aligned} g(S)&=& V_0-V_{k_{cl}} +\frac{1}{2S L}\sum_{p} \Gamma_p\nonumber\\ &&-\frac{1}{2S L}\sum_{p} \frac{ V_{p}-V_0+V_0-V_{k_{cl}} } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}} \Gamma_p,\end{aligned}$$ where we have denoted $ \Gamma(p;k_{cl},k_{cl})=\Gamma_p$. After straightforward manipulations we obtain $$\begin{aligned} \label{nineapp} &&g(S)\left[1+\frac{V_0-V_{k_{cl}}}{2S L} \sum_{p} \frac{ 1 } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}}\right]=\nonumber\\ &&\quad V_0-V_{k_{cl}} +\frac{1}{2S L}\sum_{p} \Gamma_p \left[1-\frac{ V_{p}-V_0 } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}}\right] \qquad \nonumber\\ &&\quad -\frac{1}{2S L}\sum_{p} \frac{ (V_0-V_{k_{cl}}) (\Gamma_p-\Gamma_0) } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}} \, .\end{aligned}$$ Now, on the right-hand side of Eq. (\[nineapp\]), we plug in the zeroth-order vertex (in the $1/S$ expansion), $\Gamma(p;k_{cl},k_{cl})\to 2\epsilon_p$ (which is possible because there are no infrared divergences anymore), and use that $V_p-V_0=2(\epsilon_p-\epsilon_0)$.
Equation (\[nineapp\]), after passing to the infinite-system-size limit, becomes $$\begin{aligned} \label{unreg} g(S)\left[1+\frac{F}{4\pi S} \int\limits_{-\pi}^{\pi} \frac{ {\mathrm d}p } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}}\right]=F \, ,\end{aligned}$$ where $$\begin{aligned} F&=& V_0- (1+\frac{1}{2S})V_{k_{cl}} - \frac{1}{\pi S}\int\limits_{-\pi}^{\pi}{\mathrm d}p \frac{ \epsilon_p (\epsilon_p-\epsilon_0) } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}} \nonumber\\ &&- \frac{ V_{0}-V_{k_{cl}} }{2\pi S}\int\limits_{-\pi}^{\pi}{\mathrm d}p \frac{ \epsilon_p-\epsilon_0 } {\epsilon_{k_{cl}+p}+\epsilon_{k_{cl}-p}} \label{eq:valueF}\end{aligned}$$ and we have used that $V_0-V_{k_{cl}}=F+O(1/S)$. According to the scheme that we follow, the two-body T-matrix must be calculated off-shell; thus, the denominator in Eq. (\[unreg\]) must be understood as $\epsilon_{p}\to \epsilon_{p}+C\mu/4S$, where the exact value of the numerical constant is $C=\pi^2/8$.[@Lee2002] This leads to (compare Eqs.  and ) $$\label{renormalizedpotential2} g(S)= \frac{g_0(S)}{1+g_0(S){\sqrt{2m}}/({\pi \sqrt{\mu}})},$$ where we have introduced the intraspecies Lieb-Liniger coupling constant $g_0(S)$ as in Eq. (\[LLC\_wc1\]). Note that all integrals presented in this section are evaluated analytically, which is a nice feature of the $1/S$ treatment of our problem. Now we outline the calculation of $\tilde g(S)$.
We denote $\Gamma(p;k_{cl},-k_{cl})=\tilde \Gamma_p$ and obtain the Bethe-Salpeter equation for $$\tilde \Gamma_0 =V_0 -V_{k_{cl}} + \frac{1}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \tilde \Gamma_p - \frac{1}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \frac{ V_{-p}-V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}\tilde \Gamma_p \nonumber$$ and $$\begin{aligned} \tilde \Gamma_{-2k_{cl}} &=&V_{-2k_{cl}} -V_{k_{cl}} + \frac{1}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \tilde \Gamma_p\nonumber\\ &&- \frac{1}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \frac{ V_{-2k_{cl}-p}-V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}\tilde \Gamma_p .\end{aligned}$$ Adding these two equations gives $\tilde g(S)=\tilde \Gamma_0+\tilde \Gamma_{-2k_{cl}}$, $$\begin{aligned} \label{gamma2} \tilde g(S) &=&V_0+V_{-2k_{cl}} -2V_{k_{cl}} + \frac{2}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \tilde \Gamma_p \qquad \nonumber\\ &&- \frac{1}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p} -2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}\tilde \Gamma_p.\end{aligned}$$ We divide the last term in Eq. (\[gamma2\]) into two pieces, $$-\frac{1}{4\pi S} \int\limits_{-\pi}^{\pi} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}\tilde \Gamma_p=I_1+I_2 \, ,$$ where $$\begin{aligned} I_1&=& - \frac{1}{4\pi S} \int\limits_{-\pi-k_{cl}}^{-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}\nonumber\\ && \times (\tilde \Gamma_p- \tilde \Gamma_{-2k_{cl}}+\tilde \Gamma_{-2k_{cl}}) \end{aligned}$$ and $$\begin{aligned} I_2&=& - \frac{1}{4\pi S} \int\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}\nonumber\\ && \times ( \tilde \Gamma_p- \tilde \Gamma_{0}+\tilde \Gamma_{0}) \, . \end{aligned}$$ Note that, for convenience, we have shifted the first Brillouin zone, $(-\pi,\pi)\to (-\pi-k_{cl},\pi-k_{cl})$. 
Shifting the T-matrix off-shell, we have (the integral with a dash denotes the principal value) $$\begin{aligned} I_1&=& - \frac{1}{4\pi S} \! \dashint\limits_{-\pi-k_{cl}}^{-k_{cl}} \!\!\!\! {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}( \tilde \Gamma_p- \tilde \Gamma_{-2k_{cl}})\nonumber\\ &&- \frac{ \tilde \Gamma_{-2k_{cl}}}{4\pi S} \int\limits_{-\pi-k_{cl}}^{-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}+C\mu/2S} \end{aligned}$$ and $$\begin{aligned} I_2&=& - \frac{1}{4\pi S} \int\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}}( \tilde \Gamma_p- \tilde \Gamma_{0})\nonumber\\ &&- \frac{ \tilde \Gamma_{0}}{4\pi S} \int\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}+C\mu/2S}. \end{aligned}$$ The last terms in $I_1$ can be written as $$\begin{aligned} &-& \frac{ \tilde \Gamma_{-2k_{cl}}}{4\pi S} \int\limits_{-\pi-k_{cl}}^{-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}+C\mu/2S}=\nonumber\\ &&- \frac{ \tilde \Gamma_{-2k_{cl}}}{4\pi S} \dashint\limits_{-\pi-k_{cl}}^{-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-(V_0+V_{-2k_{cl}}) } {2\epsilon_{k_{cl}+p}}\quad\nonumber\\ &&- \frac{ \tilde \Gamma_{-2k_{cl}}}{4\pi S} \int\limits_{-\pi-k_{cl}}^{-k_{cl}} {\mathrm d} p \frac{ V_0+V_{-2k_{cl}}-2V_{k_{cl}}} {2\epsilon_{k_{cl}+p}+C\mu/2S}. 
\end{aligned}$$ Similarly, for the last terms in $I_2$ we get, $$\begin{aligned} &-& \frac{ \tilde \Gamma_{0}}{4\pi S} \int\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-2V_{k_{cl}} } {2\epsilon_{k_{cl}+p}+C\mu/2S}=\nonumber\\ &&- \frac{ \tilde \Gamma_{0}}{4\pi S} \dashint\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-(V_0+V_{-2k_{cl}}) } {2\epsilon_{k_{cl}+p}}\quad\nonumber\\ &&- \frac{ \tilde \Gamma_{0}}{4\pi S} \int\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_0+V_{-2k_{cl}}-2V_{k_{cl}}} {2\epsilon_{k_{cl}+p}+C\mu/2S} \, . \end{aligned}$$ Noting that $$\begin{aligned} &-& \frac{ 1}{4\pi S} \dashint\limits_{-k_{cl}}^{\pi-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-(V_0+V_{-2k_{cl}}) } {2\epsilon_{k_{cl}+p}}\nonumber\\ &&=- \frac{ 1}{4\pi S} \dashint\limits_{-\pi-k_{cl}}^{-k_{cl}} {\mathrm d} p \frac{ V_p+V_{-2k_{cl}-p}-(V_0+V_{-2k_{cl}}) } {2\epsilon_{k_{cl}+p}} \quad \nonumber\\ &&=\frac{J^2-8}{16 S}, \end{aligned}$$ and gathering all contributions, Eq. (\[gamma2\]) takes the form $$\begin{aligned} \label{gamma21} &&\!\!\!\!\!\!\!\!\tilde g(S)\! \left[1\!+\!\frac{J^2-8}{16S}\! +\! \frac{V_0\!+\!V_{-2k_{cl}}-2V_{k_{cl}} }{16\pi S} \!\int\limits_{-\pi}^{\pi} \!\!\frac{ {\mathrm d} p } {\epsilon_{p}+C\mu/4S} \right] \nonumber\\ &&\!\!\!\!=V_0+V_{2k_{cl}}-2V_{k_{cl}}+\frac{8+J^2}{4 S} \nonumber\\ &&\!\!\!\!-\frac{1}{4\pi S}\dashint\limits_{-\pi}^{0}{\mathrm d}p\frac{V_{p-k_{cl}}+V_{p+k_{cl}}-2V_{k_{cl}}}{2\epsilon_{p}}(\tilde \Gamma_{p-k_{cl}}-\tilde\Gamma_{-2k_{cl}})\nonumber\\ &&\!\!\!\!-\frac{1}{4\pi S}\! \int\limits_{0}^{\pi}\!\! {\mathrm d}p\frac{V_{p-k_{cl}}+V_{p+k_{cl}}-2V_{k_{cl}}}{2\epsilon_{p}}(\tilde \Gamma_{p-k_{cl}}-\tilde\Gamma_{0}).\end{aligned}$$ Now we can plug the zeroth-order vertices into the right hand side of Eq. (\[gamma21\]), in the spirit of the $1/S$ expansion: $\tilde\Gamma_{p}\to 2\epsilon_p$. 
Noting that $V_0+V_{-2k_{cl}} -2V_{k_{cl}} = \tilde F+O(1/S)$, where $$\begin{aligned} \label{tildeF} \tilde F&=& V_0+V_{2k_{cl}}-2V_{k_{cl}}+\frac{8+J^2}{4 S} \\ &-&\frac{1}{2\pi S}\dashint\limits_{-\pi}^{0}{\mathrm d}p\frac{V_{p-k_{cl}}+V_{p+k_{cl}}-2V_{k_{cl}}}{2\epsilon_{p}}(\epsilon_{p-k_{cl}}-\epsilon_{-2k_{cl}})\nonumber\\ &-&\frac{1}{2\pi S}\int\limits_{0}^{\pi}{\mathrm d}p\frac{V_{p-k_{cl}}+V_{p+k_{cl}}-2V_{k_{cl}}}{2\epsilon_{p}}(\epsilon_{p-k_{cl}}-\epsilon_{0}),\nonumber\end{aligned}$$ in the same way as for $g(S)$ we obtain $$\label{renormalizedpotential_2} \tilde g(S)= \frac{\tilde g_0(S)}{1+\tilde g_0(S){\sqrt{2m}}/({\pi \sqrt{\mu}})}$$ with $\tilde g_0(S)$ given in Eq. . We see that in order $1/S$, quantum fluctuations do not modify the relation $\tilde g_0(S)>g_0(S)$ for $J<0$. Therefore, the interspecies interaction is always positive (repulsive) for any $J$ and $S>1/2$, and behaves as $\tilde g_0(S)\sim (J+4)^2$ for $J\to -4$ for all $S$. A hard-core constraint, stemming from the mapping of spins to bosons, is not included in our theory. While it is well-known how to treat the hard-core constraint in an exact way for $S=1/2$,[@nikuni95] this is not the case for $S>1/2$. DMRG results for periodic boundary conditions {#app:dmrg} ============================================= The DMRG results presented in the main text were computed for open boundary conditions (OBC), except for those in Fig. \[fig:svn\] where we used periodic boundary conditions (PBC). Alternatively, one may compute all the data, [*e.g.*]{}, the magnetization curves $M(h)$ using PBC. This approach is generally expected to suffer from (i) slower convergence with respect to the number of states kept in the DMRG runs [@schollwoeck05] and (ii) large finite-size effects due to the incommensurability in the problem. We here demonstrate that all main features can be seen with both OBC and PBC, namely the existence of the metamagnetic jump and the Haldane gap. 
Moreover, the quantitative results are comparable, except for the expected finite-size effects due to the incommensurability. ![(Color online) Magnetization curves $M(h)$ for $S=1$ (a) $J=-3.5$ and (b) $J=-1$ ($L=48$), calculated with DMRG on systems with periodic boundary conditions. In panel (a), we clearly see the metamagnetic jump of height $\Delta M_{\mathrm{jump}}$ and in panel (b), the Haldane gap that defines $h_c$ shows up as a zero-field magnetization plateau.[]{data-label="fig:mag_curve_pbc"}](figure7.eps){width="0.9\columnwidth"} Magnetization curves from periodic boundary conditions for $S=1$ ---------------------------------------------------------------- First, we discuss some examples of magnetization curves obtained for PBC. We kept up to $m=1600$ states for the PBC DMRG computations presented here. In addition, we used exact diagonalization for those sectors which are sufficiently small. Therefore, in particular the data close to the saturation field are free of truncation errors. Figure \[fig:mag\_curve\_pbc\] shows the magnetization curves for $J=-3.5$ \[panel (a)\] and $J=-1$ \[panel (b)\] for $L=48$ and PBC. As in the results for OBC, we observe the presence of the metamagnetic jump (here in the case of $J=-3.5$) and the Haldane gap (see the $J=-1$ curve), which manifests itself as a plateau in the magnetization curve at $M=0$. The latter defines the critical field $h_c$ that separates the gapped Double-Haldane phase from the gapless finite-$M$ phase (compare Fig. \[fig:svn\]). Note that at intermediate $M$, the PBC data show spurious small steps with $\Delta S^z>1$. We have checked that these features disappear as one goes to larger system sizes. ![(Color online) Phase diagram of the frustrated ferromagnetic $S=1$ chain, comparing DMRG results from systems with OBC (open symbols) to results from systems with PBC (solid symbols). $L=64$ in all cases. 
Lines are guides to the eye.[]{data-label="fig:phase_diag_pbc"}](figure8.eps){width="0.9\columnwidth"} Comparison of OBC vs PBC for $S=1$ ---------------------------------- Figure \[fig:phase\_diag\_pbc\] contains the results for the Haldane gap that defines the critical field $h_c$ separating the gapped zero-field phase from the gapless phase at finite magnetizations, the saturation field $h_{\mathrm{sat}}$, and the jump height (inset), comparing data for OBC (open symbols) with data from PBC (solid symbols). Here we choose a chain length of $L=64$ for both PBC and OBC. For PBC we can determine the spin gap from $$h_c^{\text{PBC}} = E(S^z=1) - E(S^z=0) \, , \label{eq:HcPBC}$$ where, as in Eq. , $E(S^z)$ is the ground-state energy in a sector with a given total $S^z$. The good agreement between the OBC and PBC results for $h_c$ in Fig. \[fig:phase\_diag\_pbc\] confirms that it is indeed appropriate to use Eq.  for OBC. The PBC data for $\Delta M_{\mathrm{jump}}$ in Fig. \[fig:phase\_diag\_pbc\] suffer from the presence of several peaks, resulting in a non-monotonic dependence on $J$. This is due to the incommensurability in the finite-magnetization region, which is incompatible with the lattice vectors of a system with periodic boundary conditions that is translationally invariant. Therefore, in the main text we focus on the discussion of DMRG data from systems with OBC. The results from OBC and PBC data for the saturation field $h_{\mathrm{sat}}$, however, agree very well with each other. [10]{} T. Giamarchi, C. Rüegg, and O. Tchernyshyov, Nature Phys. [**4**]{}, 198 (2008). E. M. Chudnovsky and L. Gunther, Phys. Rev. Lett. [**60**]{}, 661 (1988). L. Balents, Nature [**464**]{}, 199 (2010). K. Liu and M. Fisher, J. Low Temp. Phys. [**10**]{}, 655 (1973). J. Struck, C. Ölschläger, R. Le Targat, P. Soltan-Panahi, A. Eckardt, M. Lewenstein, P. Windpassinger, K. Sengstock, Science [**333**]{}, 996 (2011). Gyu-Boong Jo, J. Guzman, C. K. Thomas, P. Hosur, A. 
Vishwanath, and D. M. Stamper-Kurn, arXiv:1109.1591v1 (unpublished). I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. [**80**]{}, 885 (2008); C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, , 1225 (2010). J. M. Gerton, D. Strekalov, I. Prodan, and R. G. Hulet, Nature [**408**]{}, 692 (2000). P. Nozi[è]{}res and D. Saint James, J. de Physique [**43**]{}, 1133 (1982). E. J. Mueller and G. Baym, Phys. Rev. A [**62**]{}, 053605 (2000). T. Vekua, A. Honecker, H.-J. Mikeska, and F. Heidrich-Meisner, Phys. Rev. B [**76**]{}, 174420 (2007). L. Kecke, T. Momoi, and A. Furusaki, Phys. Rev. B [**76**]{}, 060407(R) (2007). T. Hikihara, L. Kecke, T. Momoi, and A. Furusaki, Phys. Rev. B [**78**]{}, 144404 (2008). J. Sudan, A. Lüscher, and A. M. Läuchli, Phys. Rev. B [**80**]{}, 140402(R) (2009). F. Heidrich-Meisner, I. P. McCulloch, and A. K. Kolezhuk, Phys. Rev. B [**80**]{}, 144417 (2009). H. P. Bader and R. Schilling, Phys. Rev. B [**19**]{}, 3556 (1979). See, e.g., N. Shannon, T. Momoi, and P. Sindzingre, Phys. Rev. Lett. [**96**]{}, 027213 (2006); M. E. Zhitomirsky and H. Tsunetsugu, Europhys. Lett. [**92**]{}, 37001 (2010). A. V. Chubukov, Phys. Rev. B [**44**]{}, 4693 (1991). A. Kolezhuk and T. Vekua, Phys. Rev. B [**72**]{}, 094424 (2005). F. Heidrich-Meisner, A. Honecker, and T. Vekua, Phys. Rev. B [**74**]{}, 020403(R) (2006). M. Enderle, C. Mukherjee, B. F[å]{}k, R. K. Kremer, J.-M. Broto, H. Rosner, S.-L. Drechsler, J. Richter, J. Malek, A. Prokofiev, W. Assmus, S. Pujol, J.-L. Raggazzoni, H. Rakoto, M. Rheinstädter, and H. M. R[ø]{}nnow, Europhys. Lett. [**70**]{}, 237 (2005). M. Enderle, B. F[å]{}k, H.-J. Mikeska, R. K. Kremer, A. Prokofiev, and W. Assmus, Phys. Rev. Lett. [**104**]{}, 237207 (2010). S.-L. Drechsler, O. Volkova, A. N. Vasiliev, N. Tristan, J. Richter, M. Schmitt, H. Rosner, J. Malek, R. Klingeler, A. A. Zvyagin, and B. Büchner, Phys. Rev. Lett. [**98**]{}, 077202 (2007). F. Heidrich-Meisner, I. A. Sergienko, A. E. Feiguin, and E. R. 
Dagotto, Phys. Rev. B [**75**]{}, 064413 (2007); I. P. McCulloch, R. Kube, M. Kurz, A. Kleine, U. Schollwöck, and A. K. Kolezhuk, [*ibid.*]{} [**77**]{}, 094404 (2008). É. G. Batyev and L. S. Braginskiĭ, Zh. Eksp. Teor. Fiz. [**87**]{} 1361 (1984) \[Sov. Phys. JETP [**60**]{}, 781 (1984)\]. T. Nikuni and H. Shiba, J. Phys. Soc. Jpn. [**64**]{}, 3471 (1995). S. R. White, Phys. Rev. Lett. [**69**]{}, 2863 (1992). U. Schollwöck, Rev. Mod. Phys. [**77**]{}, 259 (2005). I. G. Gochev, Theor. Math. Phys. [**15**]{}, 402 (1974). E. H. Lieb and W. Liniger, Phys. Rev. [**130**]{}, 1605 (1963). K. Okunishi, Y. Hieida, and Y. Akutsu, Phys. Rev B. [**59**]{}, 6806 (1999). M. D. Lee, S. A. Morgan, M. J. Davis, and K. Burnett, Phys. Rev. A [**65**]{}, 043617 (2002); M. D. Lee, S. A. Morgan, and K. Burnett, arXiv:cond-mat/0305416. M. D. Johnson and M. Fowler, Phys. Rev. B [**34**]{}, 1728 (1986). F. J. Dyson, Phys. Rev. [**102**]{}, 1217 (1956); [*ibid.*]{} [**102**]{}, 1230 (1956); S. V. Maleev, Zh. Eksp. Teor. Fiz. [**33**]{}, 1010 (1957) \[Sov. Phys. JETP [**6**]{}, 776 (1958)\]. T. Holstein and H. Primakoff, Phys. Rev. [**58**]{}, 1098 (1940). E. B. Kolomeisky and J. P. Straley, Phys. Rev. B [**46**]{}, 11749 (1992). A. K. Kolezhuk, Low Temp. Phys. [**36**]{}, 752 (2010); Phys. Rev. A [**81**]{}, 013601 (2010). G. Jackeli and M. E. Zhitomirsky, Phys. Rev. Lett. [**93**]{}, 017201 (2004). D. V. Dmitriev, V. Ya. Krivnov, and A. A. Ovchinnikov, JETP [**92**]{}, 146 (2001). C. Gerhardt, K.-H. Mütter, and H. Kröger, Phys. Rev. B [**57**]{}, 11504 (1998). S. Hirata, arXiv:cond-mat/9912066 (unpublished). Part of the DMRG runs were done using ALPS 1.3, A. F. Albuquerque, F. Alet, P. Corboz, P. Dayal, A. Feiguin, S. Fuchs, L. Gamper, E. Gull, S. Gürtler, A. Honecker, R. Igarashi, M. Körner, A. Kozhevnikov, A. Läuchli, S. R. Manmana, M. Matsumoto, I. P. McCulloch, F. Michel, R. M. Noack, G. Paw[ł]{}owski, L. Pollet, T. Pruschke, U. Schollwöck, S. Todo, S. Trebst, M. Troyer, P. 
Werner, and S. Wessel, J. Mag. Mag. Mat. [**310**]{}, 1187 (2007). See, e.g., S. R. White and D. A. Huse, Phys. Rev. B [**48**]{}, 3844 (1993), and references therein. G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. [**90**]{}, 227902 (2003). P. Calabrese and J. Cardy, J. Stat. Mech. [**(2004)**]{} P06002. A. Kolezhuk, R. Roth, and U. Schollwöck, Phys. Rev. Lett. [**77**]{}, 5142 (1996). T. Kennedy, J. Phys.: Condens. Matter [**2**]{}, 5737 (1990).
--- abstract: 'We investigate the performance of the range-separated hybrid (RSH) scheme, which combines long-range Hartree-Fock (HF) and a short-range density-functional approximation (DFA), for calculating photoexcitation/photoionization spectra of the H and He atoms, using a B-spline basis set in order to correctly describe the continuum part of the spectra. The study of these simple systems allows us to quantify the influence on the spectra of the errors coming from the short-range exchange-correlation DFA and from the missing long-range correlation in the RSH scheme. We study the differences between using the long-range HF exchange (nonlocal) potential and the long-range exact exchange (local) potential. Contrary to the former, the latter supports a series of Rydberg states and gives reasonable photoexcitation/photoionization spectra, even without applying linear-response theory. The most accurate spectra are obtained with the linear-response time-dependent range-separated hybrid (TDRSH) scheme. In particular, for the He atom at the optimal value of the range-separation parameter, TDRSH gives slightly more accurate photoexcitation and photoionization spectra than standard linear-response time-dependent HF. More generally, the present work shows the potential of range-separated density-functional theory for calculating linear and nonlinear optical properties involving continuum states.' author: - Felipe Zapata - Eleonora Luppi - Julien Toulouse date: 'March 14, 2019' title: | Linear-response range-separated density-functional theory\ for atomic photoexcitation and photoionization spectra --- Introduction ============ Nowadays, time-dependent density-functional theory (TDDFT) [@RunGro-PRL-84], applied within the linear-response formalism [@GroKoh-PRL-85; @Cas-INC-95; @PetGosGro-PRL-96], is a widely used approach for calculating photoexcitation spectra (transitions from bound to bound states) of electronic systems. 
In spite of many successes, it is however well known that usual (semi-)local density-functional approximations (DFAs), i.e. the local-density approximation (LDA) and generalized-gradient approximations (GGAs), for the exchange-correlation potential and its associated exchange-correlation kernel do not correctly describe long-range electronic transitions, such as those to Rydberg [@CasJamCasSal-JCP-98] and charge-transfer [@DreWeiHea-JCP-03] states in atomic and molecular systems. A better description of Rydberg excitations can be obtained with exchange-correlation potential approximations having the correct $-1/r$ long-range asymptotic decay [@LeeBae-PRA-94; @TozHan-JCP-98; @CasSal-JCP-00; @SchGriGisBae-JCP-00], even though it has been shown that accurate Rydberg excitation energies and oscillator strengths can in fact be extracted from LDA calculations in small atoms [@WasMaiBur-PRL-03; @WasBur-PRL-05]. A more general solution for correcting both Rydberg and charge-transfer excitations is given by range-separated TDDFT approaches [@TawTsuYanYanHir-JCP-04; @YanTewHan-CPL-04; @PeaHelSalKeaLutTozHan-PCCP-06; @LivBae-PCCP-07; @BaeLivSal-ARPC-10; @FroKneJen-JCP-13; @RebSavTou-MP-13] which express the long-range part of the exchange potential and kernel at the Hartree-Fock (HF) level. These range-separated approaches also give reasonably accurate values for the ionization energy threshold [@YanTewHan-CPL-04; @GerAng-CPL-05a; @TsuSonSuzHir-JCP-10]. Linear-response TDDFT has also been used for calculating photoionization spectra (transitions from bound to continuum states) of atoms and molecules [@ZanSov-PRA-80; @LevSov-PRA-84; @SteDecLis-JPB-95; @SteAltFroDec-CP-97; @SteDec-JPB-97; @SteDec-JCP-00; @SteDecGor-JCP-01; @SteFroDec-JCP-05; @SteTofFroDec-JCP-06; @TofSteDec-PRA-06; @SteTofFroDec-TCA-07; @ZhoChu-PRA-09]. 
These calculations are less standard in quantum chemistry since they involve spatial grid methods or B-spline basis sets for a proper description of the continuum states. In this case as well, usual (semi-)local DFAs provide a limited accuracy and asymptotically corrected exchange-correlation potential approximations give more satisfactory results. More accurate still, but less common, are photoionization spectra calculated with the exact-exchange (EXX) potential [@SteDecGor-JCP-01] or the localized HF exchange potential and its associated kernel [@ZhoChu-PRA-09]. Recently, range-separated approximations have been successfully used for calculating photoexcitation and photoionization spectra of molecular systems using time-propagation TDDFT with Gaussian basis sets together with an effective lifetime model compensating for the missing continuum states [@LopGov-JCTC-13; @FerBalLop-JCTC-15; @SisAbaMauGaaSchLop-JCP-16]. However, to the best of our knowledge, range-separated approximations have not yet been used in frequency-domain linear-response TDDFT calculations of photoionization spectra. In this work, we explore the performance of the linear-response time-dependent range-separated hybrid (TDRSH) scheme [@RebSavTou-MP-13; @TouRebGouDobSeaAng-JCP-13] for calculating photoexcitation and photoionization spectra of the H and He atoms using a B-spline basis set to accurately describe the continuum part of the spectra. The TDRSH scheme allows us to treat long-range exchange effects at the HF level and short-range exchange-correlation effects within (semi-)local DFAs. First, the dependence of the range-separated hybrid (RSH) orbital energies on the range-separation parameter is investigated, as well as the effect of replacing the long-range HF exchange nonlocal potential by the long-range EXX local potential (resulting in a scheme that we refer to as RSH-EXX). 
Second, oscillator strengths directly computed with the RSH and the RSH-EXX orbitals are compared with oscillator strengths obtained with the linear-response TDRSH scheme. The study of the H atom allows us to quantify the residual self-interaction error coming from the short-range exchange-correlation DFA, and the study of the He atom makes it possible to quantify the effect of the missing long-range correlation in the RSH scheme. This work constitutes a first step for applying range-separated TDDFT to strong-field phenomena, such as high-harmonic generation or above-threshold ionization, where long-range effects and continuum states play an important role. The outline of the paper is as follows. In Sec. \[sec:theory\], firstly, we briefly review the RSH scheme and introduce the RSH-EXX variant, and, secondly, we review the linear-response TDRSH method. In Sec. \[sec:implementation\], the basis set of B-spline functions is defined, and we indicate how the range-separated two-electron integrals are computed using an exact spherical harmonic expansion for the range-separated interaction. In Sec. \[sec:results\], results are presented and discussed. Firstly, we show the performance of the B-spline basis set for describing the density of continuum states of the H atom within the different methods. Secondly, the dependence of the orbital energies of the H and He atoms on the range-separation parameter is analyzed. Thirdly, different calculated photoexcitation/photoionization spectra for the H and He atoms are discussed and compared with exact results. In Sec. \[sec:conclusions\], conclusions and perspectives are given. Unless otherwise indicated, Hartree atomic units are used throughout the paper. Range-separated density-functional theory {#sec:theory} ========================================= Range-separated hybrid scheme ----------------------------- Range-separated density-functional theory (see, e.g., Refs. 
) is based on the splitting of the Coulomb electron-electron interaction $w_\text{ee}(r)=1/r$ into long-range (lr) and short-range (sr) contributions $$w_{{\ensuremath{\text{ee}}}}(r)=w_{{\ensuremath{\text{ee}}}}^{\ensuremath{\text{lr}}}(r)+w_{{\ensuremath{\text{ee}}}}^{\ensuremath{\text{sr}}}(r),$$ and the most common forms for the long-range and short-range interactions are $$\label{erflr} w_{{\ensuremath{\text{ee}}}}^{\ensuremath{\text{lr}}}(r)=\frac{\operatorname{erf}(\mu r)}{r},$$ and $$\label{erfcsr} w_{{\ensuremath{\text{ee}}}}^{\ensuremath{\text{sr}}}(r)=\frac{\operatorname{erfc}(\mu r)}{r},$$ where $\operatorname{erf}$ and $\operatorname{erfc}$ are the error function and the complementary error function, respectively, and $\mu$ is a tunable range-separation parameter controlling the range of the separation. Using this decomposition, it is possible to rigorously combine a long-range wave-function approach with a complementary short-range DFA. The simplest approach in range-separated density-functional theory consists in using a single-determinant wave function for the long-range interaction. 
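As a quick numerical illustration (a minimal sketch, not part of the reference implementation), the decomposition above can be verified directly: the two pieces sum to the bare Coulomb interaction for any $\mu$, the long-range part vanishes for $\mu\to 0$, and it tends to $1/r$ for $\mu\to\infty$.

```python
from math import erf, erfc

def w_lr(r, mu):
    """Long-range interaction erf(mu*r)/r of Eq. (erflr)."""
    return erf(mu * r) / r

def w_sr(r, mu):
    """Short-range interaction erfc(mu*r)/r of Eq. (erfcsr)."""
    return erfc(mu * r) / r

# The split is exact: w_lr + w_sr = 1/r for any range-separation parameter mu.
r, mu = 1.7, 0.4
assert abs(w_lr(r, mu) + w_sr(r, mu) - 1.0 / r) < 1e-14

# Limits: mu -> 0 recovers a pure short-range DFA description (no long-range
# HF exchange), mu -> infinity recovers the full-range interaction (pure HF).
assert w_lr(r, 1e-9) < 1e-8
assert abs(w_lr(r, 1e8) - 1.0 / r) < 1e-12
```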
This leads to the RSH scheme [@AngGerSavTou-PRA-05] whose spin orbitals $\{\varphi_{p}({\ensuremath{\mathbf{x}}})\}$ (where ${\ensuremath{\mathbf{x}}}=({\ensuremath{\mathbf{r}}},\sigma)$ are space-spin coordinates) and orbital energies $\varepsilon_{p}$ can be determined for a given system by the following eigenvalue problem, $$\begin{aligned} \left( -\frac{1}{2} \bm{\nabla}^2 +v_{\text{ne}}({\ensuremath{\mathbf{r}}}) + v_\text{H}({\ensuremath{\mathbf{r}}}) + v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}}) \right) \varphi_{p}({\ensuremath{\mathbf{x}}}) \nonumber\\ + \int v_{\text{x}}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}},{\ensuremath{\mathbf{x}}}') \varphi_{p}({\ensuremath{\mathbf{x}}}') {\ensuremath{\text{d}}}{\ensuremath{\mathbf{x}}}' = \varepsilon_{p}\varphi_{p}({\ensuremath{\mathbf{x}}}), \label{RSH}\end{aligned}$$ where $v_{\text{ne}}({\ensuremath{\mathbf{r}}})$ is the nuclei-electron potential, $v_\text{H}({\ensuremath{\mathbf{r}}})$ is the Hartree potential for the Coulomb electron-electron interaction, $$\begin{aligned} v_\text{H}({\ensuremath{\mathbf{r}}}) = \int n({\ensuremath{\mathbf{x}}}') w_{\ensuremath{\text{ee}}}(|{\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{r}}}'|) {\ensuremath{\text{d}}}{\ensuremath{\mathbf{x}}}',\end{aligned}$$ where $n({\ensuremath{\mathbf{x}}})=\sum_i^\text{occ} |\varphi_{i}({\ensuremath{\mathbf{x}}})|^2$ is the spin density ($i$ refers to occupied spin orbitals), $v_{\text{x}}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}},{\ensuremath{\mathbf{x}}}')$ is the nonlocal HF exchange potential for the long-range electron-electron interaction, $$\begin{aligned} v_{\text{x}}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}},{\ensuremath{\mathbf{x}}}') = - \sum_{i}^\text{occ} \varphi_{i}^*({\ensuremath{\mathbf{x}}}') \varphi_{i}({\ensuremath{\mathbf{x}}}) 
w_{\ensuremath{\text{ee}}}^{\ensuremath{\text{lr}}}(|{\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{r}}}'|),\end{aligned}$$ and $v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}})$ is the short-range exchange-correlation potential $$\begin{aligned} v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}}) = \frac{\delta \bar{E}_\text{xc}^{\ensuremath{\text{sr}}}}{\delta n({\ensuremath{\mathbf{x}}})},\end{aligned}$$ where $\bar{E}_\text{xc}^{\ensuremath{\text{sr}}}$ is the complement short-range exchange-correlation density functional. In this work, we use the short-range spin-dependent LDA exchange-correlation functional of Ref.  for $\bar{E}_\text{xc}^{\ensuremath{\text{sr}}}$. The long-range and short-range potentials, $v_{\text{x}}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}},{\ensuremath{\mathbf{x}}}')$ and $v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}})$, explicitly depend on the range-separation parameter $\mu$, and consequently the spin orbitals, the orbital energies, and the density also implicitly depend on it. For $\mu=0$, $v_{\text{x}}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}},{\ensuremath{\mathbf{x}}}')$ vanishes and $v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}})$ becomes the usual full-range LDA exchange-correlation potential, and thus the RSH scheme reduces to standard Kohn-Sham LDA. For $\mu\to\infty$, $v_{\text{x}}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}},{\ensuremath{\mathbf{x}}}')$ becomes the usual full-range HF exchange potential and $v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}})$ vanishes, and thus the RSH scheme reduces to standard HF. 
In the present paper, we also consider the following variant of the RSH scheme, $$\begin{aligned} \left( -\frac{1}{2} \bm{\nabla}^2 +v_{\text{ne}}({\ensuremath{\mathbf{r}}}) + v_\text{H}({\ensuremath{\mathbf{r}}}) + v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}}) + v_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}) \right) \varphi_{p}({\ensuremath{\mathbf{x}}}) \nonumber\\ = \varepsilon_{p} \varphi_{p}({\ensuremath{\mathbf{x}}}), \;\;\;\;\;\; \label{RSHEXX}\end{aligned}$$ in which the long-range nonlocal HF exchange potential has been replaced by the long-range local EXX [@TalSha-PRA-76; @GorLev-PRA-94; @GorLev-IJQC-95] potential $$\begin{aligned} \label{vexx} v_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}) = \frac{\delta E_\text{x}^{\ensuremath{\text{lr}}}}{\delta n({\ensuremath{\mathbf{x}}})},\end{aligned}$$ where $E_\text{x}^{\ensuremath{\text{lr}}}$ is the long-range exchange density functional [@TouGorSav-IJQC-06; @TouSav-JMS-06]. We will refer to this scheme as RSH-EXX. The calculation of the EXX potential is involved [@FilUmrGon-PRA-96; @Gor-PRL-99; @IvaHirBar-PRL-99], with the exception of one- and two-electron systems. 
Indeed, for one-electron systems, the long-range EXX potential is simply $$\begin{aligned} v_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}) = - v^{{\ensuremath{\text{lr}}}}_\text{H}({\ensuremath{\mathbf{r}}}),\end{aligned}$$ and for systems of two electrons in a single spatial orbital, it is $$\begin{aligned} v_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}) = - \frac{1}{2} v^{{\ensuremath{\text{lr}}}}_\text{H}({\ensuremath{\mathbf{r}}}),\end{aligned}$$ where $v^{\ensuremath{\text{lr}}}_\text{H}({\ensuremath{\mathbf{r}}}) = \int n({\ensuremath{\mathbf{x}}}') w_{\ensuremath{\text{ee}}}^{\ensuremath{\text{lr}}}(|{\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{r}}}'|) {\ensuremath{\text{d}}}{\ensuremath{\mathbf{x}}}'$ is the long-range Hartree potential. For these one- and two-electron cases, it can be shown that Eqs. (\[RSH\]) and (\[RSHEXX\]) give identical occupied orbitals but different unoccupied orbitals. More generally, for systems with more than two electrons, the HF and EXX exchange potentials give similar occupied orbitals but very different unoccupied orbitals. Once orbitals and orbital energies are obtained from Eqs. (\[RSH\]) and (\[RSHEXX\]), the bare oscillator strengths can be calculated. They are defined as $$\begin{aligned} \label{oscillator0} f_{ia}^0 = \frac{2}{3} \omega_{ia}^0 \sum_{\nu=x,y,z} |d_{\nu,ia}|^2,\end{aligned}$$ where $i$ and $a$ refer to occupied and unoccupied spin orbitals, respectively, $\omega_{ia}^0 = \varepsilon_{a} - \varepsilon_{i}$ are the bare excitation energies and $d_{\nu,ia} = \int \varphi_{i}^*({\ensuremath{\mathbf{x}}}) r_\nu \varphi_{a}({\ensuremath{\mathbf{x}}}) {\ensuremath{\text{d}}}{\ensuremath{\mathbf{x}}}$ are the dipole-moment transition integrals. We will consider these bare excitation energies $\omega_{ia}^0$ and oscillator strengths $f_{ia}^0$ for a first approximation to photoexcitation/photoionization spectra. 
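For the H atom, the bare oscillator strength of Eq. (\[oscillator0\]) can be evaluated directly from the exact hydrogenic orbitals; the sketch below (an illustration with a simple trapezoidal radial quadrature, not the B-spline machinery of the paper) recovers the textbook value $f_{1\text{s}\to 2\text{p}} \approx 0.4162$.

```python
import numpy as np

# Exact hydrogenic radial orbitals (atomic units).
r = np.linspace(0.0, 50.0, 50001)
R10 = 2.0 * np.exp(-r)                             # 1s
R21 = r * np.exp(-r / 2.0) / (2.0 * np.sqrt(6.0))  # 2p

# Radial dipole integral <1s|r|2p> = int R10(r) r R21(r) r^2 dr,
# evaluated with a simple trapezoidal rule.
integrand = R10 * r * R21 * r**2
d_rad = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

# Bare excitation energy eps(2p) - eps(1s) = 3/8 Ha; summing |d|^2 over the
# three degenerate 2p orbitals gives d_rad^2 (the angular factors sum to 1).
omega = 0.5 - 0.125
f = (2.0 / 3.0) * omega * d_rad**2   # ~0.4162, the textbook 1s -> 2p value
```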
Linear-response time-dependent range-separated hybrid scheme ------------------------------------------------------------ In the time-dependent extension of the RSH scheme within linear response (referred to as TDRSH) [@RebSavTou-MP-13; @TouRebGouDobSeaAng-JCP-13; @FroKneJen-JCP-13], one has to solve the following pseudo-Hermitian eigenvalue equation $$\begin{aligned} \label{TDRSH} \begin{pmatrix} {\ensuremath{\mathbf{A}}} & {\ensuremath{\mathbf{B}}} \\ -{\ensuremath{\mathbf{B}}}^* & -{\ensuremath{\mathbf{A}}}^* \end{pmatrix} \begin{pmatrix} {\ensuremath{\mathbf{X}}}_n \\ {\ensuremath{\mathbf{Y}}}_n \end{pmatrix} = \omega_n \begin{pmatrix} {\ensuremath{\mathbf{X}}}_n \\ {\ensuremath{\mathbf{Y}}}_n \end{pmatrix},\end{aligned}$$ whose solutions come in pairs: excitation energies $\omega_n>0$ with eigenvectors $({\ensuremath{\mathbf{X}}}_n,{\ensuremath{\mathbf{Y}}}_n)$, and de-excitation energies $\omega_n<0$ with eigenvectors $({\ensuremath{\mathbf{Y}}}_n^*,{\ensuremath{\mathbf{X}}}_n^*)$. 
The elements of the matrices ${\ensuremath{\mathbf{A}}}$ and ${\ensuremath{\mathbf{B}}}$ are $$\begin{aligned} A_{ia,jb} = (\varepsilon_{a} -\varepsilon_{i}) \delta_{ij} \delta_{ab} + K_{ia,jb},\end{aligned}$$ $$\begin{aligned} B_{ia,jb} = K_{ia,bj},\end{aligned}$$ where $i,j$ and $a,b$ refer to occupied and unoccupied RSH spin orbitals, respectively, and the coupling matrix ${\ensuremath{\mathbf{K}}}$ contains the contributions from the Hartree kernel $f_{\ensuremath{\text{H}}}({\ensuremath{\mathbf{r}}}_1,{\ensuremath{\mathbf{r}}}_2)=w_{\ensuremath{\text{ee}}}(|{\ensuremath{\mathbf{r}}}_1-{\ensuremath{\mathbf{r}}}_2|)$, the long-range HF exchange kernel $f_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2;{\ensuremath{\mathbf{x}}}_1',{\ensuremath{\mathbf{x}}}_2')=-w_{\ensuremath{\text{ee}}}^{\ensuremath{\text{lr}}}(|{\ensuremath{\mathbf{r}}}_1-{\ensuremath{\mathbf{r}}}_2|) \delta({\ensuremath{\mathbf{x}}}_1-{\ensuremath{\mathbf{x}}}_2') \delta({\ensuremath{\mathbf{x}}}_1'-{\ensuremath{\mathbf{x}}}_2)$, and the adiabatic short-range exchange-correlation kernel $f_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2)=\delta v_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}}_1)/\delta n({\ensuremath{\mathbf{x}}}_2)$ $$\begin{aligned} \label{K} K_{ia,jb} &=& {\ensuremath{\langle aj \vert}} f_{\ensuremath{\text{H}}}{\ensuremath{\vert ib \rangle}}+ {\ensuremath{\langle aj \vert}} f_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}} {\ensuremath{\vert ib \rangle}} + {\ensuremath{\langle aj \vert}} f_\text{xc}^{{\ensuremath{\text{sr}}}} {\ensuremath{\vert ib \rangle}} \nonumber\\ &=& {\ensuremath{\langle aj \vert}} w_{\ensuremath{\text{ee}}}{\ensuremath{\vert ib \rangle}} - {\ensuremath{\langle aj \vert}} w_{\ensuremath{\text{ee}}}^{\ensuremath{\text{lr}}}{\ensuremath{\vert bi \rangle}} + {\ensuremath{\langle aj \vert}} 
f_\text{xc}^{{\ensuremath{\text{sr}}}} {\ensuremath{\vert ib \rangle}},\end{aligned}$$ where ${\ensuremath{\langle aj \vert}} w_{\ensuremath{\text{ee}}}{\ensuremath{\vert ib \rangle}}$ and ${\ensuremath{\langle aj \vert}} w_{\ensuremath{\text{ee}}}^{\ensuremath{\text{lr}}}{\ensuremath{\vert bi \rangle}}$ are the two-electron integrals associated with the Coulomb and long-range interactions, respectively, and ${\ensuremath{\langle aj \vert}} f_\text{xc}^{{\ensuremath{\text{sr}}}} {\ensuremath{\vert ib \rangle}} = \iint \varphi_a^*({\ensuremath{\mathbf{x}}}_1) \varphi_j^*({\ensuremath{\mathbf{x}}}_2) f_\text{xc}^{{\ensuremath{\text{sr}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2) \varphi_i({\ensuremath{\mathbf{x}}}_1) \varphi_b({\ensuremath{\mathbf{x}}}_2) {\ensuremath{\text{d}}}{\ensuremath{\mathbf{x}}}_1 {\ensuremath{\text{d}}}{\ensuremath{\mathbf{x}}}_2 $. Since we use the short-range LDA exchange-correlation density functional, for $\mu=0$ the TDRSH scheme reduces to the usual linear-response time-dependent local-density approximation (TDLDA). For $\mu\to\infty$, the TDRSH scheme reduces to standard linear-response time-dependent Hartree-Fock (TDHF). The time-dependent extension of the RSH-EXX variant within linear response (referred to as TDRSH-EXX) leads to identical equations with the exception that the long-range HF exchange kernel $f_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{HF}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2;{\ensuremath{\mathbf{x}}}_1',{\ensuremath{\mathbf{x}}}_2')$ is replaced by the long-range frequency-dependent EXX kernel [@Gor-PRA-98; @Gor-IJQC-98] $f_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2;\omega)=\delta v_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}_1,\omega)/\delta n({\ensuremath{\mathbf{x}}}_2,\omega)$. 
For one-electron systems, the long-range EXX kernel is simply $$\begin{aligned} f_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2;\omega)= -f_{\ensuremath{\text{H}}}^{\ensuremath{\text{lr}}}({\ensuremath{\mathbf{r}}}_1,{\ensuremath{\mathbf{r}}}_2),\end{aligned}$$ and, for systems with two electrons in a single spatial orbital, it is $$\begin{aligned} f_\text{x}^{{\ensuremath{\text{lr}}},{\ensuremath{\text{EXX}}}}({\ensuremath{\mathbf{x}}}_1,{\ensuremath{\mathbf{x}}}_2;\omega)= -\frac{1}{2} f_{\ensuremath{\text{H}}}^{\ensuremath{\text{lr}}}({\ensuremath{\mathbf{r}}}_1,{\ensuremath{\mathbf{r}}}_2),\end{aligned}$$ where $f_{\ensuremath{\text{H}}}^{\ensuremath{\text{lr}}}({\ensuremath{\mathbf{r}}}_1,{\ensuremath{\mathbf{r}}}_2)=w_{\ensuremath{\text{ee}}}^{\ensuremath{\text{lr}}}(|{\ensuremath{\mathbf{r}}}_1-{\ensuremath{\mathbf{r}}}_2|)$ is the long-range Hartree kernel. For these one- and two-electron cases, TDRSH and TDRSH-EXX give rise to identical excitation energies and oscillator strengths. Finally, we can calculate the corresponding TDRSH (or TDRSH-EXX) oscillator strengths as $$\begin{aligned} \label{oscillator} f_{n} = \frac{2}{3} \omega_{n} \sum_{\nu=x,y,z} \left| d_{\nu,ia} (X_{n,ia} + Y_{n,ia}) \right|^2.\end{aligned}$$ In the limit of a complete basis set, the linear-response oscillator strengths in Eq. (\[oscillator\]) always fulfill the Thomas-Reiche-Kuhn (TRK) sum rule, $\sum_n f_{n} = N$ where $N$ is the electron number. The bare oscillator strengths of Eq. (\[oscillator0\]) fulfill the TRK sum rule only in the case where the orbitals have been obtained from an effective local potential, i.e. for LDA and RSH-EXX but not for HF and RSH (see Ref. ). 
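The pairing structure of Eq. (\[TDRSH\]) can be illustrated on a toy problem. The matrices below are arbitrary numbers (not actual TDRSH matrix elements), chosen only so that $\mathbf{A}+\mathbf{B}$ and $\mathbf{A}-\mathbf{B}$ are positive definite; the sketch also checks the standard Hermitian folding $(\mathbf{A}-\mathbf{B})^{1/2}(\mathbf{A}+\mathbf{B})(\mathbf{A}-\mathbf{B})^{1/2}\mathbf{Z}_n=\omega_n^2\mathbf{Z}_n$, valid for real orbitals.

```python
import numpy as np

# Toy real matrices A (symmetric) and B for two excitations; illustrative only.
A = np.array([[0.50, 0.05],
              [0.05, 0.70]])
B = np.array([[0.02, 0.01],
              [0.01, 0.03]])

# Full pseudo-Hermitian problem of Eq. (TDRSH).
M = np.block([[A, B], [-B, -A]])
w = np.sort(np.linalg.eigvals(M).real)

# Solutions come in pairs: de-excitations -w2, -w1 and excitations +w1, +w2.
assert np.allclose(w[:2], -w[:1:-1])

# Hermitian folding for real orbitals: (A-B)^{1/2} (A+B) (A-B)^{1/2} Z = w^2 Z.
lam, U = np.linalg.eigh(A - B)
S = U @ np.diag(np.sqrt(lam)) @ U.T
w_fold = np.sort(np.sqrt(np.linalg.eigvalsh(S @ (A + B) @ S)))
assert np.allclose(w_fold, w[2:])
```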
Implementation in\ a B-spline basis set {#sec:implementation} ==================== In practice, each spin orbital is decomposed into a product of a spatial orbital and a spin function, $\varphi_{p}({\ensuremath{\mathbf{x}}})=\varphi_{p}({\ensuremath{\mathbf{r}}}) \delta_{\sigma_p,\sigma}$ where $\sigma_p$ is the spin of the spin orbital $p$, and we use spin-adapted equations. As we investigate atomic systems, the spatial orbitals are written in spherical coordinates, $$\varphi_{p}({\ensuremath{\mathbf{r}}})=R_{n_pl_p}(r)Y_{l_p}^{m_p}(\Omega),$$ where $Y_{l_p}^{m_p}(\Omega)$ are the spherical harmonics ($\Omega$ stands for the angles $\theta,\phi$) and the radial functions $R_{n_pl_p}(r)$ are expressed as linear combinations of B-spline functions of order $k_{\ensuremath{\text{s}}}$, $$\begin{aligned} R_{n_p l_p}(r)=\sum_{\alpha=1}^{N_{\ensuremath{\text{s}}}}c_\alpha^{n_p l_p}\frac{B^{k_{\ensuremath{\text{s}}}}_\alpha(r)}{r},\end{aligned}$$ where $N_{\ensuremath{\text{s}}}$ is the dimension of the basis. To completely define a basis of B-spline functions, a non-decreasing sequence of $N_{\ensuremath{\text{s}}}+k_{\ensuremath{\text{s}}}$ knot points (some knot points are possibly coincident) must be given [@Boor-78]. The B-spline function $B^{k_{\ensuremath{\text{s}}}}_\alpha(r)$ is non zero only on the supporting interval $[r_\alpha,r_{\alpha+k_{\ensuremath{\text{s}}}}]$ (containing $k_{\ensuremath{\text{s}}}+1$ consecutive knot points) and is a piecewise function composed of polynomials of degree $k_{\ensuremath{\text{s}}}-1$ with continuous first $k_{\ensuremath{\text{s}}}-m$ derivatives across each knot of multiplicity $m$. We have chosen the first and the last knots to be $k_{\ensuremath{\text{s}}}$-fold degenerate, i.e. 
$r_1 = r_2 = \cdots = r_{k_{\ensuremath{\text{s}}}} = R_{\text{min}}$ and $r_{{N_{\ensuremath{\text{s}}}+1}} = r_{{N_{\ensuremath{\text{s}}}+2}} = \cdots = r_{{N_{\ensuremath{\text{s}}}+k_{\ensuremath{\text{s}}}}}= R_{\text{max}}$, while the multiplicity of the other knots is unity. The spatial grid spacing between two consecutive non-coincident knot points was chosen to be constant over the whole radial range and is given by $\Delta r = R_{\text{max}}/(N_{\ensuremath{\text{s}}}-k_{\ensuremath{\text{s}}}+1)$. In the present work, the first and the last B-spline functions were removed from the calculation to ensure zero boundary conditions at $r=R_{\text{min}}$ and $r=R_{\text{max}}$. The results presented in this paper have been obtained using the following parameters: $k_{\ensuremath{\text{s}}}=8$, $N_{\ensuremath{\text{s}}}=200$, $R_{\text{min}}=0$, and $R_{\text{max}} = 100$ bohr. Moreover, we need to use only s and p$_z$ spherical harmonics. Working with such a B-spline representation, one must compute matrix elements involving integrals over B-spline functions. The principle of the calculation of one-electron and two-electron integrals over B-spline functions is well described by Bachau *et al.* in Ref. . We will now briefly review the computation of the standard Coulomb two-electron integrals over B-spline functions, and then we will present the calculation of the long-range or short-range two-electron integrals over B-spline functions, the latter being original to the present work. 
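The knot construction above can be reproduced with standard routines. The following sketch uses `scipy.interpolate.BSpline.basis_element` (our choice for illustration, with a coarser $N_{\rm s}=40$ than in the text to keep it light) and checks that the clamped basis with $k_{\rm s}$-fold end knots forms a partition of unity on $[R_{\text{min}},R_{\text{max}}]$.

```python
import numpy as np
from scipy.interpolate import BSpline

# Order ks (polynomial degree ks-1), Ns basis functions on [Rmin, Rmax];
# Ns is reduced here compared with the Ns = 200 used in the paper.
ks, Ns, Rmin, Rmax = 8, 40, 0.0, 100.0

# ks-fold degenerate end knots, simple (multiplicity-1) interior knots,
# constant spacing dr = Rmax / (Ns - ks + 1) between breakpoints.
breakpoints = np.linspace(Rmin, Rmax, Ns - ks + 2)
knots = np.concatenate([np.full(ks - 1, Rmin), breakpoints, np.full(ks - 1, Rmax)])
assert len(knots) == Ns + ks

# Evaluate all Ns basis functions; scipy infers the degree ks - 1 from the
# ks + 1 knots supporting each element. NaNs outside a support become zeros.
r = np.linspace(Rmin, Rmax - 1e-9, 2001)
Bmat = np.array([
    BSpline.basis_element(knots[a:a + ks + 1], extrapolate=False)(r)
    for a in range(Ns)
])
Bmat = np.nan_to_num(Bmat)

# Partition of unity inside the box. (The paper additionally drops the first
# and last elements to enforce zero boundary conditions.)
assert np.allclose(Bmat.sum(axis=0), 1.0)
```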
Coulomb two-electron integrals ------------------------------ The Coulomb electron-electron interaction is given by $$\label{coulomb} w_{\ensuremath{\text{ee}}}(|{\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{r}}}'|)=\frac{1}{\left(|{\ensuremath{\mathbf{r}}}|^2+|{\ensuremath{\mathbf{r}}}'|^2-2|{\ensuremath{\mathbf{r}}}||{\ensuremath{\mathbf{r}}}'|\cos\gamma \right)^{1/2}},$$ where ${\ensuremath{\mathbf{r}}}$ and ${\ensuremath{\mathbf{r}}}'$ are electron vector positions and $\gamma$ is the angle between them. The multipolar expansion for this interaction is $$\label{multipole} w_{\ensuremath{\text{ee}}}(|{\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{r}}}'|)=\sum_{k=0}^{\infty}\left[\frac{r_<^k}{r_>^{k+1}}\right]\sum_{m_k=-k}^{k}(-1)^{m_k} C_{-m_k}^k(\Omega)C_{m_k}^k(\Omega'),$$ where $r_<=\mathrm{min}(|{\ensuremath{\mathbf{r}}}|,|{\ensuremath{\mathbf{r}}}'|)$ and $r_>=\mathrm{max}(|{\ensuremath{\mathbf{r}}}|,|{\ensuremath{\mathbf{r}}}'|)$ and $C_{m_k}^k(\Omega)=\left(4\pi/(2k+1)\right)^{1/2}Y_k^{m_k}(\Omega)$ are the renormalized spherical harmonics. The Coulomb two-electron integrals, in the spatial orbital basis, can then be expressed as the sum of products of radial integrals and angular factors $$\begin{aligned} \label{coulomb_integral} \nonumber \langle pq|w_{\ensuremath{\text{ee}}}|tu\rangle&=&\sum_{k=0}^{\infty}R^k(p, q; t, u)\sum_{m_k=-k}^{k}\delta_{m_k,m_p-m_t}\delta_{m_k,m_q-m_u}\\ &\times&(-1)^{m_k} c^k(l_p, m_p, l_t, m_t) c^k(l_q, m_q, l_u, m_u),\end{aligned}$$ where $R^k(p, q; t, u)$ are the two-dimensional radial Slater integrals and the angular coefficients $c^k(l_p, m_p, l_t, m_t)$ and $c^k(l_q, m_q, l_u, m_u)$ are obtained from the Gaunt coefficients [@RCowan-81; @Cer-THESIS-12]. The coefficient $c^k(l, m, l', m')$ is non zero only if $|l-l'|\leq k \leq l+l'$ and if $l+l'+k$ is an even integer, which makes the sum over $k$ in Eq. (\[coulomb\_integral\]) exactly terminate. 
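By the spherical harmonic addition theorem, the sum over $m_k$ above reduces to a Legendre polynomial $P_k(\cos\gamma)$, so the radial convergence of the multipolar expansion is easy to check numerically (an illustrative sketch; convergence is geometric in $r_</r_>$).

```python
import numpy as np
from numpy.polynomial.legendre import legval

def coulomb_multipole(r1, r2, cosg, kmax):
    """Truncated multipole expansion of 1/|r - r'|, written with Legendre
    polynomials via the addition theorem: sum_k (r_<^k / r_>^{k+1}) P_k(cos g)."""
    rlt, rgt = min(r1, r2), max(r1, r2)
    radial = np.array([rlt**k / rgt**(k + 1) for k in range(kmax + 1)])
    return legval(cosg, radial)

r1, r2, cosg = 1.0, 2.5, 0.3
exact = 1.0 / np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * cosg)
approx = coulomb_multipole(r1, r2, cosg, kmax=40)
assert abs(approx - exact) < 1e-12
```

In the two-electron integrals themselves the sum over $k$ terminates exactly, as stated above, because of the angular selection rules on the Gaunt coefficients.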
The Slater integrals are defined as $$\begin{aligned} \nonumber R^k(p, q; t, u)&=&\sum_{\alpha=1}^{N_{\ensuremath{\text{s}}}}\sum_{\lambda=1}^{N_{\ensuremath{\text{s}}}}\sum_{\beta=1}^{N_{\ensuremath{\text{s}}}}\sum_{\nu=1}^{N_{\ensuremath{\text{s}}}}c_{\alpha}^{n_pl_p}c_{\lambda}^{n_ql_q}c_{\beta}^{n_tl_t}c_{\nu}^{n_ul_u}\\ & &\times R^k(\alpha, \lambda; \beta, \nu),\end{aligned}$$ where $R^k(\alpha, \lambda; \beta, \nu)$ are the Slater matrix elements given by $$\begin{aligned} \label{slaterelement} \nonumber R^k(\alpha, \lambda; \beta, \nu)&=&\int_{0}^{\infty}\int_{0}^{\infty}B_{\alpha}^{k_{\ensuremath{\text{s}}}}(r)B_{\lambda}^{k_{\ensuremath{\text{s}}}}(r')\left[\frac{r_<^k}{r_>^{k+1}}\right]\\ & &\times B_{\beta}^{k_{\ensuremath{\text{s}}}}(r)B_{\nu}^{k_{\ensuremath{\text{s}}}}(r'){\ensuremath{\text{d}}}r {\ensuremath{\text{d}}}r'.\end{aligned}$$ In order to compute the Slater matrix elements $R^k(\alpha, \lambda; \beta, \nu)$, we have implemented the integration-cell algorithm developed by Qiu and Froese Fischer [@CFFischer-99]. This algorithm exploits all possible symmetries and B-spline properties to evaluate efficiently the integrals in each two-dimensional radial region on which the integrals are defined. Gaussian quadrature is used to compute the integrals in each cell. Long-range and short-range two-electron integrals ------------------------------------------------- A closed form of the multipolar expansion of the short-range electron-electron interaction defined in Eq. (\[erfcsr\]) was determined by Ángyán *et al.* [@Janos-06], following a previous work of Marshall [@Marshall-02] who applied the Gegenbauer addition theorem to the Laplace transform of Eq. (\[erfcsr\]). 
This exact expansion is $$\begin{aligned} w_{{\ensuremath{\text{ee}}}}^{{\ensuremath{\text{sr}}}}(|{\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{r}}}'|)&=&\sum_{k=0}^{\infty}S^k(r_>,r_<;\mu) \nonumber\\ &&\times\sum_{m_k=-k}^{k}(-1)^{m_k} C_{-m_k}^k(\Omega)C_{m_k}^k(\Omega'), \;\;\;\end{aligned}$$ where the $\mu$-dependent radial function is written in terms of the scaled radial coordinates $\Xi=\mu \; r_>$ and $\xi=\mu\; r_<$ as $$\label{srkernel} S^k(r_>,r_<;\mu)=\mu \; \Phi^k(\Xi,\xi),$$ with $$\begin{aligned} \label{Phi} \nonumber \Phi^k(\Xi,\xi)&=&H^k(\Xi,\xi)+F^k(\Xi,\xi)\\ & &+\sum_{m=1}^k F^{k-m}(\Xi,\xi)\frac{\Xi^{2m}+\xi^{2m}}{(\xi\;\Xi)^m},\end{aligned}$$ and the introduced auxiliary functions $$\begin{aligned} \nonumber H^k(\Xi,\xi)&=&\frac{1}{2(\xi\;\Xi)^{k+1}}\left[\left(\Xi^{2k+1}+\xi^{2k+1}\right)\operatorname{erfc}(\Xi+\xi)\right.\\ & &\left.-\left(\Xi^{2k+1}-\xi^{2k+1}\right)\operatorname{erfc}(\Xi-\xi)\right], \end{aligned}$$ and $$\begin{aligned} \nonumber F^k(\Xi,\xi)&=&\frac{2}{\pi^{1/2}}\sum_{p=0}^k\left(-\frac{1}{4(\xi\;\Xi)}\right)^{p+1}\frac{(k+p)!}{p!(k-p)!}\\ & &\times\left[(-1)^{k-p} e^{-(\Xi+\xi)^2}-e^{-(\Xi-\xi)^2} \right].\end{aligned}$$ In order to arrive at a separable expression in $\Xi$ and $\xi$, Ángyán *et al.* [@Janos-06] also introduced a power series expansion of the radial function $\Phi^k(\Xi,\xi)$ in the smaller reduced variable $\xi$. However, the range of validity of this expansion truncated to the first few terms is limited to small values of $\xi$, i.e. $\xi \lesssim 1.5$, and higher-order expansions show spurious oscillations. After some tests, we decided to use the exact short-range radial function $\Phi^k(\Xi,\xi)$ without expansion in our work. The expression of the short-range two-electron integrals $\langle pq|w^{{\ensuremath{\text{sr}}}}_{\ensuremath{\text{ee}}}|tu\rangle$ is then identical to the one in Eq. 
(\[coulomb\_integral\]) with the simple difference that the radial term is not given by the standard Slater matrix elements. Now, the radial kernel in Eq. (\[slaterelement\]) is changed to that of Eq. (\[srkernel\]). Because the radial kernel is not multiplicatively separable in the variables $r_>$ and $r_<$, the integration-cell algorithm is modified in order to calculate all integrals as non-separable two-dimensional integrals. In a second step, the long-range two-electron integrals can be simply obtained by difference $$\langle pq|w^{{\ensuremath{\text{lr}}}}_{\ensuremath{\text{ee}}}|tu\rangle=\langle pq|w_{\ensuremath{\text{ee}}}|tu\rangle-\langle pq|w^{{\ensuremath{\text{sr}}}}_{\ensuremath{\text{ee}}}|tu\rangle.$$ Results and discussion {#sec:results} ====================== In this section, photoexcitation and photoionization spectra for the H and He atoms are presented. Photoexcitation and photoionization processes involve transitions from bound to bound and from bound to continuum states, respectively. For this reason, we first check the density of continuum states obtained with our B-spline basis set. After that, we show how orbital energies for the H and He atoms are influenced by the range-separation parameter $\mu$. Finally, with these aspects in mind, we discuss the different calculated spectra. All the studied transitions correspond to dipole-allowed spin-singlet transitions from the Lyman series, i.e. $1\text{s}\rightarrow n\text{p}$. Density of continuum states {#sec:DOS} --------------------------- In Fig. \[DOS\], the radial density of states (DOS) of a free particle in a spherical box is compared with the radial DOS of the continuum p orbitals of the H atom computed with the exact Hamiltonian or with the HF or LDA effective Hamiltonian using the B-spline basis set. 
The radial DOS of a free particle is given by [@BachCorDecHanMart-RepProgPhys-01] $\rho(\varepsilon)=R_\text{max}/(\pi\sqrt{2\varepsilon})$, where $R_\text{max}$ is the radial size of the box, while for the different Hamiltonians using the B-spline basis set (with the same $R_\text{max}$) the radial DOS is calculated by finite differences as $\rho(\varepsilon_p)=2/(\varepsilon_{p+1} -\varepsilon_{p-1})$, where $\varepsilon_{p}$ are positive orbital energies. As one can observe, the radial DOS computed with the LDA or the HF Hamiltonian is essentially identical to the DOS of the free particle. This can be explained by the fact that the unoccupied LDA and HF orbitals do not see a $-1/r$ attractive potential: they are all unbound and all contribute to the continuum, similarly to the free-particle case. By contrast, for the exact Hamiltonian with the same B-spline basis set, one obtains a slightly smaller DOS in the low-energy region. This is due to the presence of the $-1/r$ attractive Coulomb potential which supports a series of bound Rydberg states, necessarily implying fewer unoccupied orbitals in the continuum for a given basis. ![Radial density of states (DOS) for a free particle, $\rho(\varepsilon_p)=R_\text{max}/(\pi\sqrt{2\varepsilon_p})$, in a spherical box of size $R_\text{max} = 100$ bohr, and for the continuum p orbitals of the H atom computed with the exact Hamiltonian, or with the HF or LDA effective Hamiltonian using the B-spline basis set with the same $R_\text{max}$.[]{data-label="DOS"}](Fig1.pdf) We have checked that, by increasing the size of the simulation box, together with the number of B-spline functions in the basis so as to keep constant the density of B-spline functions, the DOS of the exact Hamiltonian converges, albeit slowly, to the free-particle DOS. 
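The finite-difference estimator above can be checked against the analytic free-particle DOS. For the $\ell=0$ particle in a hard sphere (our illustrative choice; the exact box eigenvalues are $\varepsilon_n = n^2\pi^2/(2R_\text{max}^2)$) the central difference turns out to reproduce the analytic DOS exactly, not just approximately.

```python
import numpy as np

Rmax = 100.0  # bohr, radial size of the box

# s-wave free-particle-in-a-sphere energies: eps_n = n^2 pi^2 / (2 Rmax^2).
n = np.arange(1, 201)
eps = n**2 * np.pi**2 / (2.0 * Rmax**2)

# Finite-difference DOS used in the text: rho(eps_p) = 2 / (eps_{p+1} - eps_{p-1}).
rho_fd = 2.0 / (eps[2:] - eps[:-2])

# Analytic radial DOS: rho(eps) = Rmax / (pi sqrt(2 eps)).
rho_exact = Rmax / (np.pi * np.sqrt(2.0 * eps[1:-1]))

# For this quadratic spectrum the central difference is exact.
assert np.allclose(rho_fd, rho_exact)
```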
This must be the case since, for potentials vanishing at infinity, the global density of unbound states is independent of the potential for an infinite simulation box (only the local DOS depends on the potential, see e.g. Ref. ). From a numerical point of view, the computation of the DOS can be seen as a convergence test. With the present basis set, a huge energy range of the continuum spectrum is described correctly, and the difference between the DOS of the exact Hamiltonian and the free-particle DOS at low energies ($0.0 - 0.2$ Ha) is only about $10^{-4}$ Ha$^{-1}$. This difference is small enough to fairly compare the different methods considered in this paper. The calculation of the DOS is also important in order to compute proper oscillator strengths involving continuum states. Because of the use of a finite simulation box, the calculated positive-energy orbitals form, of course, a discrete set and not strictly a continuum. These positive-energy orbitals are thus not energy normalized as the exact continuum states should be. To better approximate pointwise the exact continuum wave functions, the obtained positive-energy orbitals should be renormalized. Following Macías *et al.* [@Macias88], we renormalize the positive-energy orbitals by the square root of the DOS as $\tilde{\varphi}_p({\ensuremath{\mathbf{r}}})= \sqrt{\rho(\varepsilon_p)}\varphi_p({\ensuremath{\mathbf{r}}})$. Range-separated orbital energies {#sec:orbital} -------------------------------- ![image](Fig2a.pdf) ![image](Fig2b.pdf) ![image](Fig3a.pdf) ![image](Fig3b.pdf) In Fig. \[Horbital\] we show the 1s and the low-lying p orbital energies for the H atom calculated with both the RSH and RSH-EXX methods as a function of the range-separation parameter $\mu$. As one observes in Fig. \[Horbital\]a, with the RSH method only the 1s ground state is bound, and the energy of this state is strongly dependent on $\mu$. 
At $\mu=0$, the self-interaction error introduced by the LDA exchange-correlation potential is maximal. However, as $\mu$ increases, the long-range HF exchange potential progressively replaces the long-range part of the LDA exchange-correlation potential and the self-interaction error is gradually eliminated until reaching the HF limit for $\mu\to\infty$, where one obtains the exact 1s orbital energy. The p orbitals (and all the other unoccupied orbitals) are always unbound and their (positive) energies are insensitive to the value of $\mu$. One also observes that the approximate continuum of p orbitals has a DOS correctly decreasing as the energy increases, as previously seen in Fig. \[DOS\]. In Fig. \[Horbital\]b, one sees that the 1s orbital energy computed with the RSH-EXX method is identical to the 1s orbital energy obtained by the RSH scheme, as expected. However, a very different behavior is observed for the unoccupied p orbitals. Starting from the LDA limit at $\mu=0$ where all unoccupied orbitals are unbound, when the value of $\mu$ increases one sees the emergence of a series of bound Rydberg states coming down from the continuum. This is due to the introduction of an attractive $-1/r$ term in the long-range EXX potential, which supports a Rydberg series. For $\mu\to\infty$, we obtain the spectrum of the exact hydrogen Hamiltonian calculated with the B-spline basis set. Necessarily, with the finite basis used, the appearance of the discrete bound states is accompanied by a small reduction of the density of continuum states, as we already observed in Fig. \[DOS\] with the exact Hamiltonian. Another interesting aspect that can be observed in Fig. \[Horbital\]b is the fact that the different bound-state energies reach their exact $\mu\to\infty$ values at different values of $\mu$. Thus, for a fixed small value of $\mu$, each bound-state energy is affected differently by the self-interaction error.
For the compact 1s orbital, the self-interaction error is eliminated for $\mu \gtrsim 1$ bohr$^{-1}$. For the more diffuse 2p Rydberg state, the self-interaction error is essentially eliminated with $\mu \gtrsim 0.5$ bohr$^{-1}$. When we continue to climb in the Rydberg series, the orbitals become more and more diffuse and the self-interaction error is eliminated from smaller and smaller values of $\mu$. In Fig. \[Heorbital\], the 1s and low-lying p orbital energies for the He atom are shown. Again, for the RSH method, one sees in Fig. \[Heorbital\]a that only the occupied 1s orbital is bound and all the unoccupied p orbitals are in the continuum. Similarly to the case of the H atom, at $\mu=0$ the 1s orbital energy is too high, which can essentially be attributed to the self-interaction error in the LDA exchange-correlation potential. This error decreases when $\mu$ increases and the 1s orbital energy converges to its HF value for $\mu\to\infty$. However, contrary to the case of the H atom, for this two-electron system, the 1s HF orbital energy is not equal to the opposite of the exact ionization energy but is slightly too low due to missing correlation effects. In the spirit of the optimally tuned range-separated hybrids [@LivBae-PCCP-07; @SteKroBae-JACS-09; @SteKroBae-JCP-09], the range-separation parameter $\mu$ can be chosen so that the HOMO orbital energy is equal to the opposite of the exact ionization energy, which gives $\mu=1.115$ bohr$^{-1}$ for the He atom. As regards the RSH-EXX method, one sees again in Fig. \[Heorbital\]b that, for this two-electron system, the 1s RSH-EXX orbital energy is identical to the 1s RSH orbital energy. As in the case of the H atom, the introduction of the long-range EXX potential generates a series of bound Rydberg states, whose energies converge to the Kohn-Sham EXX orbital energies for $\mu\to\infty$. 
For the Rydberg states of the He atom, it turns out that the Kohn-Sham EXX orbital energies are practically identical to the exact Kohn-Sham orbital energies [@UmrSavGon-INC-98], implying that the Kohn-Sham correlation potential has essentially no effect on these Rydberg states. As we will see, contrary to the RSH case, the set of unoccupied RSH-EXX orbitals can be considered as a reasonably good first approximation for the computation of photoexcitation and photoionization spectra, even before applying linear-response theory. Photoexcitation and photoionization\ spectra for the hydrogen atom ------------------------------------ ![Photoexcitation/photoionization spectra calculated with different methods for the H atom. In [**(a)**]{} comparison of the HF, LDA, and TDLDA methods with respect to the calculation with the exact Hamiltonian. In [**(b)**]{} comparison of the RSH, RSH-EXX, and TDRSH methods (all of them with a range-separation parameter of $\mu=0.5$ bohr$^{-1}$) with respect to the calculation with the exact Hamiltonian.[]{data-label="Hspectra"}](Fig4a.pdf "fig:") ![Photoexcitation/photoionization spectra calculated with different methods for the H atom. In [**(a)**]{} comparison of the HF, LDA, and TDLDA methods with respect to the calculation with the exact Hamiltonian. In [**(b)**]{} comparison of the RSH, RSH-EXX, and TDRSH methods (all of them with a range-separation parameter of $\mu=0.5$ bohr$^{-1}$) with respect to the calculation with the exact Hamiltonian.[]{data-label="Hspectra"}](Fig4b.pdf "fig:") ![Comparison of the renormalized radial amplitude $\tilde{R}(r) = \sqrt{\rho(\varepsilon)} R(r)$ of the continuum p orbital involved in the transition energy $\omega_n=\varepsilon - \varepsilon_\text{1s}=0.8$ Ha calculated by HF, LDA, RSH, and RSH-EXX (with a range-separation parameter of $\mu=0.5$ bohr$^{-1}$) with respect to the exact calculation for the H atom.[]{data-label="p-orbital"}](Fig5.pdf) In Fig. 
\[Hspectra\], photoexcitation/photoionization spectra for the H atom calculated with different methods are shown. For the calculation using the exact Hamiltonian, the spectrum is correctly divided into a discrete and a continuum part, corresponding to the photoexcitation and photoionization processes, respectively. As already discussed in Sec. \[sec:DOS\], for all calculations, the continuum states have been renormalized, or equivalently the oscillator strengths of the continuum part of the spectrum have been renormalized as $\tilde{f}_{1\text{s}\to n\text{p}} = \rho(\varepsilon_{n\text{p}}) f_{1\text{s}\to n\text{p}}$ where $\rho(\varepsilon_{n\text{p}})$ is the DOS at the corresponding positive orbital energy $\varepsilon_{n\text{p}}$. Moreover, for better readability of the spectra, following Refs. , we have also renormalized the oscillator strengths of the discrete part of the spectrum as $\tilde{f}_{1\text{s}\to n\text{p}} = n^3 f_{1\text{s}\to n\text{p}}$ where $n$ is the principal quantum number of the excited p orbital. This makes the transition between the discrete and the continuum part of the spectrum smooth. Note also that, since we are working with a finite B-spline basis set principally targeting a good description of the continuum, we obtain only a limited number of Rydberg states, and the last Rydberg states near the ionization threshold are not accurately described. In particular, the corresponding oscillator strengths are overestimated (not shown). To fix this problem, we could for example use quantum defect theory in order to accurately extract the series of Rydberg states [@AlSharif98; @Friedrich98; @Faassen06; @Faassen09].
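For the H atom, the exact oscillator strengths of the Lyman series are known in closed form, which makes the smoothing effect of the $n^3$ renormalization easy to check. The following sketch uses the standard textbook formula for $f_{1\text{s}\to n\text{p}}$ (an outside fact, not derived in this paper):

```python
def f_lyman(n):
    """Closed-form oscillator strength for the 1s -> np transition of the
    hydrogen atom (standard textbook result)."""
    return 2**8 * n**5 * (n - 1)**(2 * n - 4) / (3.0 * (n + 1)**(2 * n + 4))

# Well-known reference values: f(1s->2p) = 0.4162, f(1s->3p) = 0.0791.
assert abs(f_lyman(2) - 0.4162) < 1e-3
assert abs(f_lyman(3) - 0.0791) < 1e-3

# The renormalization f~ = n^3 f of the discrete part turns the rapidly
# vanishing series into one decreasing smoothly toward the continuum threshold.
f_tilde = [n**3 * f_lyman(n) for n in range(2, 10)]
assert all(a > b > 0 for a, b in zip(f_tilde, f_tilde[1:]))
```

The renormalized strengths $n^3 f_n$ decrease monotonically toward a finite threshold value, which is precisely why the discrete and continuum parts of the spectrum join smoothly.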
However, for the purpose of the present work, we did not find it necessary to do that, and instead we have simply corrected the oscillator strengths of the last Rydberg states by interpolating between the oscillator strengths of the first five Rydberg states and the oscillator strength of the first continuum state using a second-order polynomial function of the type $\tilde{f}_n=c_0+c_1\;\omega_n+c_2\;\omega_n^2$. This procedure was applied to all spectra having a discrete part. Let us first discuss the spectra in Fig. \[Hspectra\]a. The LDA spectrum, calculated using the bare oscillator strengths of Eq. (\[oscillator0\]), does not possess a discrete photoexcitation part, which was of course expected since the LDA potential does not support bound Rydberg states, as seen in the $\mu=0$ limit of Fig. \[Horbital\]. The ionization threshold energy, giving the onset of the continuum spectrum, is much lower than the exact value (0.5 Ha) due to the self-interaction error in the ground-state orbital energy. At the ionization threshold, the LDA oscillator strengths are zero, in agreement with the Wigner-threshold law [@Wig-PR-48; @SadBohCavEsrFabMacRau-JPB-00] for potentials lacking a long-range attractive $-1/r$ Coulomb tail. Close above the ionization threshold, the LDA spectrum has an unphysically large peak, which corresponds to continuum states with an important local character. However, as noted in Ref. , at the exact Rydberg transition energies, the LDA continuum oscillator strengths are actually reasonably good approximations to the exact discrete oscillator strengths, which was explained by the fact that the LDA potential is approximately the exact Kohn-Sham potential shifted by a constant. Moreover, above the exact ionization energy, LDA reproduces relatively well the exact photoionization spectrum and becomes essentially asymptotically exact in the high-energy limit.
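The quadratic correction of the last Rydberg oscillator strengths described earlier can be sketched as follows. The numerical values are purely illustrative (loosely modeled on a hydrogen-like series, not taken from the paper's calculations), and the fit is an ordinary least-squares second-order polynomial:

```python
import numpy as np

# Illustrative (hypothetical) renormalized oscillator strengths f~_n at the
# transition energies omega_n: five Rydberg points plus the first continuum
# point, loosely modeled on a hydrogen-like series.
omega   = np.array([0.375, 0.4444, 0.4688, 0.48, 0.4861, 0.505])
f_tilde = np.array([3.33,  2.14,   1.86,   1.74, 1.69,   1.58])

# Least-squares fit of f~ = c0 + c1*omega + c2*omega^2 (np.polyfit returns
# the coefficients from highest to lowest degree).
c2, c1, c0 = np.polyfit(omega, f_tilde, 2)

def f_interp(w):
    return c0 + c1 * w + c2 * w**2

# The quadratic reproduces all six anchor points closely, so it can replace
# the inaccurate strengths of the last Rydberg states below threshold.
residuals = np.abs(f_interp(omega) - f_tilde)
assert residuals.max() < 0.1
```

In practice, only the few strengths just below the ionization threshold would be replaced by $\tilde{f}_n$ evaluated at their transition energies.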
This is consistent with the fact that, at a sufficiently high transition energy, the LDA continuum orbitals are very similar to the exact ones, at least in the spatial region relevant for the calculation of the oscillator strengths, as shown in Fig. \[p-orbital\]. The TDLDA spectrum differs notably from the LDA spectrum only in that the unphysical peak at around $0.3$ Ha, close above its ionization threshold, has an even larger intensity. This increased intensity comes from the contribution of the LDA exchange-correlation kernel (not shown). The LDA exchange-correlation kernel being local, its largest impact is on the low-lying LDA continuum orbitals having a local character. As the TRK sum rule must be satisfied, the higher peak in the TDLDA spectrum is followed by a faster decrease of the oscillator strengths than in the LDA spectrum, until they reach the same asymptotic behavior. The HF spectrum in Fig. \[Hspectra\]a not only has no discrete photoexcitation part, as expected since the unoccupied HF orbitals are unbound (see the $\mu\to\infty$ limit of Fig. \[Horbital\]a), but does not even look like a photoionization spectrum. The HF unoccupied orbitals actually represent approximations to the continuum states of the H$^-$ anion, and are thus much more diffuse than the exact continuum states of the H atom, as shown in Fig. \[p-orbital\]. Consequently, the HF spectrum has in fact the characteristic shape of the photodetachment spectrum of the H$^-$ anion [@BetSal-BOOK-57; @Rau-JAA-96] (with the caveat that the initial state is the 1s orbital of the H atom instead of the 1s orbital of the H$^-$ anion). Finally, note that, for the H atom, linear-response TDHF gives of course the exact photoexcitation/photoionization spectrum. Let us now discuss the spectra obtained with the range-separated methods in Fig. \[Hspectra\]b. The common value of the range-separation parameter $\mu=0.5$ bohr$^{-1}$ has been used [@GerAng-CPL-05a].
The RSH spectrum looks like the photodetachment spectrum of the H$^-$ anion. This is not surprising since the RSH effective Hamiltonian contains a long-range HF exchange potential. The RSH continuum orbitals are about as diffuse as the HF continuum orbitals, as shown in Fig. \[p-orbital\]. The RSH ionization threshold energy is slightly smaller than the exact value (0.5 Ha) due to the remaining self-interaction error in the 1s orbital energy stemming from the short-range LDA exchange-correlation potential at this value of $\mu$. The RSH-EXX ionization threshold is identical to the RSH one, but, contrary to the RSH spectrum, the RSH-EXX spectrum correctly shows a discrete photoexcitation part and a continuum photoionization part. Besides the small redshift of the spectrum, the self-interaction error at this value of $\mu$ manifests itself in slightly too small RSH-EXX oscillator strengths. The RSH-EXX continuum orbitals are very similar to the exact continuum orbitals, as shown in Fig. \[p-orbital\]. Finally, at this value of $\mu$, TDRSH gives a photoexcitation/photoionization spectrum essentially identical to the RSH-EXX spectrum. Photoexcitation and photoionization\ spectra for the helium atom ------------------------------------ ![Photoexcitation and photoionization spectra calculated with different methods for the He atom. In [**(a)**]{} comparison of HF, TDHF, LDA, and TDLDA methods. In [**(b)**]{} comparison of RSH, RSH-EXX, and TDRSH methods (all of them with a range-separation parameter of $\mu=1.115$ bohr$^{-1}$).[]{data-label="Hespectra"}](Fig6a.pdf "fig:") ![Photoexcitation and photoionization spectra calculated with different methods for the He atom. In [**(a)**]{} comparison of HF, TDHF, LDA, and TDLDA methods. In [**(b)**]{} comparison of RSH, RSH-EXX, and TDRSH methods (all of them with a range-separation parameter of $\mu=1.115$ bohr$^{-1}$).[]{data-label="Hespectra"}](Fig6b.pdf "fig:") ![Photoionization cross-section profile for the He atom.
Normalized cross sections are given (in Hartree atomic units) by $\sigma_{n}= (2\pi^2/c) \tilde{f}_{n}$ where $\tilde{f}_{n}$ are the renormalized oscillator strengths and $c$ is the speed of light. Conversion factors 1 Ha = 27.207696 eV and 1 bohr$^2=28.00283$ Mb are employed. The experimental data and the FCI results are from Ref. . []{data-label="exp"}](Fig7.pdf)

  Transition            $\omega_n$   $f_n$    $\omega_n$   $f_n$    $\omega_n$   $f_n$    $\omega_n$   $f_n$
  -------------------- ------------ -------- ------------ -------- ------------ -------- ------------ --------
  1$^1$S $\to$ 2$^1$P   0.7799       0.2762   0.7970       0.2518   0.7766       0.3303   0.7827       0.2547
  1$^1$S $\to$ 3$^1$P   0.8486       0.0734   0.8636       0.0704   0.8474       0.0857   0.8493       0.0708
  1$^1$S $\to$ 4$^1$P   0.8727       0.0299   0.8872       0.0291   0.8721       0.0344   0.8729       0.0292
  1$^1$S $\to$ 5$^1$P   0.8838       0.0150   0.8982       0.0148   0.8835       0.0172   0.8839       0.0148
  1$^1$S $\to$ 6$^1$P   0.8899       0.0086   0.9042       0.0087   0.8897       0.0100   0.8899       0.0087
  Ionization energy

  $^a$From Ref. .

\[tab:helium\]

In Fig. \[Hespectra\], different photoexcitation/photoionization spectra for the He atom are shown. As in the H atom case, the oscillator strengths of the discrete part of the TDHF, RSH-EXX, and TDRSH spectra have been interpolated (using again the oscillator strengths of the first five Rydberg states and of the first continuum state) to correct the overestimation of the oscillator strengths for the last Rydberg transitions. The excitation energies and the (non-interpolated) oscillator strengths of the first five discrete transitions are reported in Table \[tab:helium\] and compared with exact results. The photoionization parts of some of the calculated spectra are compared with full configuration-interaction (FCI) calculations and experimental results in Fig. \[exp\]. In Fig.
\[Hespectra\]a, one sees that the HF spectrum looks again like a photodetachment spectrum, corresponding in this case to the He$^-$ anion. By contrast, TDHF gives a reasonable photoexcitation/photoionization spectrum. In particular, for the first discrete transitions listed in Table \[tab:helium\], TDHF gives slightly too large excitation energies by at most about 0.02 Ha (or 0.5 eV) and slightly too small oscillator strengths by at most about 0.025. The ionization energy is also slightly too large by about 0.015 Ha, as already seen from the HF 1s orbital energy in the $\mu\to\infty$ limit of Fig. \[Heorbital\]. As regards the photoionization part of the spectrum, one sees in Fig. \[exp\] that TDHF gives slightly too large photoionization cross sections. The LDA spectrum in Fig. \[Hespectra\]a is also similar to the LDA spectrum for the H atom. The ionization threshold energy is much too low, and the spectrum lacks a discrete part and has an unphysical maximum close above the ionization threshold. Apart from that, taking as reference the TDHF spectrum (which is close to the exact spectrum), the LDA spectrum is a reasonable approximation to the photoionization spectrum and, again as noted in Ref. , a reasonable continuous approximation to the photoexcitation spectrum. In comparison to LDA, TDLDA [@ZapLupTou-JJJ-XX-note] gives smaller and less accurate oscillator strengths in the lower-energy part of the spectrum but, the TRK sum rule having to be preserved, larger oscillator strengths in the higher-energy part of the spectrum, resulting in an accurate high-energy asymptotic behavior as seen in Fig. \[exp\]. Fig. \[Hespectra\]b shows the spectra calculated with RSH, RSH-EXX, and TDRSH using for the range-separation parameter the value $\mu=1.115$ bohr$^{-1}$ which imposes the exact ionization energy, as explained in Sec. \[sec:orbital\]. The RSH spectrum is similar to the HF spectrum and does not represent a photoexcitation/photoionization spectrum.
By contrast, the RSH-EXX spectrum is qualitatively correct for a photoexcitation/photoionization spectrum. As shown in Table \[tab:helium\], in comparison with TDHF, RSH-EXX gives more accurate Rydberg excitation energies, with a largest error of about 0.003 Ha (or 0.08 eV), but less accurate oscillator strengths which are significantly overestimated. The TDRSH method also gives a correct photoexcitation/photoionization spectrum, with the advantage that it gives Rydberg excitation energies as accurate as the RSH-EXX ones and corresponding oscillator strengths as accurate as the TDHF ones. As shown in Fig. \[exp\], TDRSH also gives a slightly more accurate photoionization cross-section profile than TDHF. Conclusions {#sec:conclusions} =========== We have investigated the performance of the RSH scheme for calculating photoexcitation/photoionization spectra of the H and He atoms, using a B-spline basis set in order to correctly describe the continuum part of the spectra. The study of these simple systems allowed us to quantify the influence on the spectra of the errors coming from the short-range LDA exchange-correlation approximation and from the missing long-range correlation in the RSH scheme. For the He atom, it is possible to choose a value for the range-separation parameter $\mu$ for which these errors compensate each other so as to obtain the exact ionization energy. We have studied the differences between using the long-range HF exchange nonlocal potential and the long-range EXX local potential. Contrary to the former, the latter supports a series of Rydberg states and the corresponding RSH-EXX scheme, even without applying linear-response theory, gives reasonable photoexcitation/photoionization spectra. Nevertheless, the most accurate spectra are obtained with linear-response TDRSH (or TDRSH-EXX since they are equivalent for one- and two-electron systems).
In particular, for the He atom at the optimal value of $\mu$, TDRSH gives slightly more accurate photoexcitation and photoionization spectra than standard TDHF. The present work calls for further developments. First, the merits of TDRSH (and/or TDRSH-EXX) for calculating photoexcitation/photoionization spectra of larger atoms and molecules, where screening effects are important, should now be investigated. Second, it would be interesting to test the effects of going beyond the LDA for the short-range exchange-correlation functional [@TouColSav-JCP-05; @GolWerStoLeiGorSav-CP-06] and adding long-range wave-function correlation [@FroKneJen-JCP-13; @HedHeiKneFroJen-JCP-13; @RebTou-JCP-16]. Third, time-propagation TDRSH could be implemented to go beyond linear response and tackle strong-field phenomena, such as high-harmonic generation and above-threshold ionization [@LabZapCocVenTouCaiTaiLup-JCTC-18].
--- author: - Qiang Tu - Chuanxi Wu - Xueting Qiu - 'Faculty of Mathematics and Statistics, Hubei University, Wuhan 430062, China [^1]' title: 'The distributional hyper-Jacobian determinants in fractional Sobolev spaces' --- [**Abstract:**]{} In this paper we give a positive answer to a question raised by Baer-Jerison in connection with hyper-Jacobian determinants and associated minors in fractional Sobolev spaces. Inspired by recent works of Brezis-Nguyen and Baer-Jerison on the Jacobian and Hessian determinants, we show that the distributional $m$th-Jacobian minors of degree $r$ are weakly continuous in the fractional Sobolev space $W^{m-\frac{m}{r},r}$, and that this result is optimal, under some necessary conditions, in the framework of fractional Sobolev spaces. In particular, these conditions can be removed in the case $m=1,2$, i.e., the $m$th-Jacobian minors of degree $r$ are well defined in $W^{s,p}$ if and only if $W^{s,p} \subseteq W^{m-\frac{m}{r},r}$. [**Key words:**]{} Hyper-Jacobian, Higher dimensional determinants, Fractional Sobolev spaces, Distributions. [**2010 MR Subject Classification:**]{} 46E35, 46F10, 42B35. Introduction and main results ============================= Fix an integer $m\geq 1$ and consider the class of non-smooth functions $u$ from $\Omega$, a smooth bounded open subset of $\mathbb{R}^N$ ($N\geq 2$), into $\mathbb{R}^n$. The aim of this article is to identify when the hyper ($m$th-)Jacobian determinants and associated minors of $u$, which were introduced by Olver in [@O], make sense as distributions.
In the case $N=n$ and $m=1$, starting with the seminal work of Morrey [@MC], Reshetnyak [@RY] and Ball [@BJ] on variational problems of non-linear elasticity, it is well known that the distributional ($1$st-)Jacobian determinant $\mbox{Det}(Du)$ of a map $u\in W^{1,\frac{N^2}{N+1}}(\Omega,\mathbb{R}^N)$ (or $u\in L^{q}\cap W^{1,p}(\Omega,\mathbb{R}^N)$ with $\frac{N-1}{p}+\frac{1}{q}=1$ and $N-1< p\leq \infty$) is defined by $$\mbox{Det}(Du):=\sum_{j} \partial_j(u^i(\mbox{adj} Du)^i_j),$$ where $\mbox{adj}Du$ means the adjoint matrix of $Du$. Furthermore, Brezis-Nguyen [@BN] extended the range of the map $u\mapsto \mbox{Det} (Du)$ in the framework of fractional Sobolev spaces. They showed that the distributional Jacobian determinant $\mbox{Det}(Du)$ for any $u\in W^{1-\frac{1}{N},N}(\Omega,\mathbb{R}^N)$ can be defined as $$\langle\mbox{Det}(Du), \psi\rangle:=\lim_{k\rightarrow \infty}\int_{\Omega}\det(Du_k)\psi dx~~~\forall \psi \in C_{c}^{1}(\Omega, \mathbb{R}),$$ where $u_k\in C^1(\overline{\Omega}, \mathbb{R}^N)$ such that $u_k\rightarrow u$ in $ W^{1-\frac{1}{N},N}$. They pointed out that the result recovers all the definitions of distributional Jacobian determinants mentioned above, except $N=2$, and that the distributional Jacobian determinants are well defined in $W^{s,p}$ if and only if $W^{s,p}\subseteq W^{1-\frac{1}{N},N}$ for $1<p<\infty$ and $0<s<1$. In the case $n=1$ and $m=2$, similar to the results in [@BN], the distributional Hessian ($2$nd-Jacobian) determinants are well defined and continuous on $W^{2-\frac{2}{N},N}(\mathbb{R}^N)$ (see [@IT; @BJ]). Baer-Jerison [@BJ] pointed out that the continuity result for the Hessian determinant in $W^{2-\frac{2}{N},N}(\mathbb{R}^N)$ with $N\geq 3$ implies the known continuity results in the space $W^{1,p}(\mathbb{R}^N)\cap W^{2,q}(\mathbb{R}^N)$ with $1<p,q<\infty$, $\frac{2}{p}+\frac{N-2}{q}=1$, $N\geq 3$ (see [@DGG; @DM; @FM]).
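The divergence structure behind the definition of $\mbox{Det}(Du)$ can be checked numerically for a smooth map. The sketch below is a minimal finite-difference illustration for $N=2$ with a hypothetical polynomial map, verifying that $\sum_j \partial_j(u^1(\mbox{adj}\,Du)^1_j)$ coincides pointwise with $\det(Du)$:

```python
# Hypothetical smooth test map u = (u1, u2) on R^2, chosen for illustration.
def u1(x, y): return x * y
def u2(x, y): return x + y * y

h = 1e-5  # central finite-difference step

def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

def det_Du(x, y):
    """Pointwise Jacobian determinant det(Du)."""
    return dx(u1, x, y) * dy(u2, x, y) - dy(u1, x, y) * dx(u2, x, y)

def Det_Du(x, y):
    """Divergence form sum_j d_j( u^1 (adj Du)^1_j ) defining the
    distributional Jacobian, evaluated pointwise here."""
    g1 = lambda x, y: u1(x, y) * dy(u2, x, y)   # u^1 (adj Du)^1_1
    g2 = lambda x, y: -u1(x, y) * dx(u2, x, y)  # u^1 (adj Du)^1_2
    return dx(g1, x, y) + dy(g2, x, y)

# For this map, det(Du) = 2 y^2 - x; both forms agree at a sample point.
x0, y0 = 0.3, 0.7
assert abs(det_Du(x0, y0) - 0.68) < 1e-6
assert abs(Det_Du(x0, y0) - det_Du(x0, y0)) < 1e-4
```

The agreement rests on the Piola identity $\sum_j \partial_j(\mbox{adj}\,Du)^i_j=0$ for smooth maps, which is what allows the weak (distributional) reformulation for rougher $u$.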
Furthermore they showed that the distributional Hessian determinants are well defined in $W^{s,p}$ if and only if $W^{s,p}\subseteq W^{2-\frac{2}{N},N}$ for $1<p<\infty$ and $1<s<2$. For $m>2$, the $m$th-Jacobian, as a generalization of the ordinary Jacobian, was first introduced by Escherich [@EG] and Gegenbauer [@GL]. In fact, the general formula for the hyper-Jacobian can be expressed by using Cayley’s theory of higher dimensional determinants. All these earlier investigations were limited to polynomial functions until Olver [@O] turned his attention to some non-smooth functions. In particular, he showed that the $m$th-Jacobian determinants (minors) of degree $r$ can be defined as a distribution provided $$u\in W^{m-[\frac{m}{r}],\gamma}(\Omega, \mathbb{R}^n)\cap W^{m-[\frac{m}{r}]-1,\delta} (\Omega, \mathbb{R}^n) ~\mbox{with}~\frac{r-t}{\gamma}+\frac{t}{\delta}\leq 1, t:=m ~\mbox{mod}~r$$ or $$u\in W^{m-[\frac{m}{r}], \gamma}(\Omega, \mathbb{R}^n)~\mbox{with}~\gamma\geq \max\{\frac{ Nr}{N+t}\}.$$ Baer-Jerison [@BJ] raised an interesting question: do there exist fractional versions of this result? I.e., is the $m$th-Jacobian determinant of degree $r$ continuous from the space $W^{m-\frac{m}{r},r}$ into the space of distributions? Our first results give a positive answer to this question. We refer to Sec. 2 below for the following notation. \[hm-thm-1\] Let $q,n,N$ be integers with $2\leq q\leq \underline{n}:=\min\{n,N\}$, for any integer $1\leq r\leq q$, multi-indices $\beta\in I(r,n)$ and $\bm{\alpha}= (\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$ ($j=1,\cdots,m$), the $m$th-Jacobian $(\beta, \bm{\alpha})$-minor operator $u \longmapsto M_{\bm{\alpha}}^{\beta}(D^mu) (\mbox{see}~ (\ref{hm-pre-for-1})):C^m(\Omega,\mathbb{R}^n)\rightarrow \mathcal{D}'(\Omega)$ can be extended uniquely as a continuous mapping $u \longmapsto \mbox{Div}_{\bm{\alpha}}^{\beta}(D^mu):W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)\rightarrow \mathcal{D}'(\Omega)$.
Moreover for all $u,v\in W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)$, $\psi \in C^{\infty}_c(\Omega,\mathbb{R})$, we have $$\begin{split} &\left|\langle\mbox{Div}_{\bm{\alpha}}^{\beta}(D^mu)-\mbox{Div}_{\bm{\alpha}}^{\beta}(D^mv),\psi\rangle\right|\\ &\leq C_{r,q,n,N,\Omega}\|u-v\|_{W^{m-\frac{m}{q},q}}\left(\|u\|_{W^{m-\frac{m}{q},q}}^{r-1} +\|v\|_{W^{m-\frac{m}{q},q}}^{r-1}\right)\|D^m\psi\|_{L^{\infty}}. \end{split}$$ We recall that for $0<s<\infty$ and $1\leq p<\infty$, the fractional Sobolev space $W^{s,p}(\Omega)$ is defined as follows: when $s<1$ $$W^{s,p}(\Omega):=\left\{u\in L^p(\Omega)\mid \left(\int_{\Omega}\int_{\Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}dxdy\right)^{\frac{1}{p}}<\infty\right\},$$ and the norm $$\|u\|_{W^{s,p}}:=\|u\|_{L^p}+\left(\int_{\Omega}\int_{\Omega} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}dxdy\right)^{\frac{1}{p}}.$$ When $s>1$ with non-integer, $$W^{s,p}(\Omega):=\{u\in W^{[s],p}(\Omega)\mid D^{[s]} u\in W^{s-[s],p}(\Omega)\},$$ the norm $$\|u\|_{W^{s,p}}:=\|u\|_{W^{[s],p}}+\left(\int_{\Omega}\int_{\Omega} \frac{|D^{[s]}u(x)-D^{[s]}u(y)|^p}{|x-y|^{N+(s-[s])p}}dxdy\right)^{\frac{1}{p}}.$$ It is worth pointing out that we may use the same method to get a similar result, see Corollary \[hm-cor-3-1\], for $u\in W^{m-\frac{m}{q},q}(\Omega)$ with $m\geq 2$. Theorem \[hm-thm-1\] and Corollary \[hm-cor-3-1\] recover not only all the definitions of Jacobian and Hessian determinants mentioned above, but also the definitions of $m$-th Jacobian in [@O] since the following facts 1. $W^{m-[\frac{m}{r}],\gamma}(\Omega, \mathbb{R}^n)\cap W^{m-[\frac{m}{r}]-1,\delta} (\Omega, \mathbb{R}^n)\subset W^{m-\frac{m}{r},r}(\Omega, \mathbb{R}^n)$ with continuous embedding if $\frac{r-t}{\gamma}+\frac{t}{\delta}\leq 1$ $(1<\delta<\infty, 1<r\leq N)$, where $t:=m ~\mbox{mod}~r$. 2. $W^{m-[\frac{m}{r}], \gamma}(\Omega, \mathbb{R}^n)\subset W^{m-\frac{m}{r},r}(\Omega, \mathbb{R}^n)$ $(1<r\leq N)$ with continuous embedding if $\gamma\geq \max\{\frac{ Nr}{N+t}\}$. 
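The Gagliardo seminorm defining these spaces can be approximated on a grid; for $u(x)=x$ on $(0,1)$ with $p=2$ and $s=1/2$ the integrand $|u(x)-u(y)|^p/|x-y|^{1+sp}$ is identically $1$, so the double integral equals $1$ exactly, which gives a simple sanity check for the discretization (a minimal sketch in dimension $N=1$):

```python
# Grid approximation of the Gagliardo seminorm
# [u]_{s,p}^p = int int |u(x)-u(y)|^p / |x-y|^{1+s p} dx dy  on (0,1), N = 1.
def gagliardo_p(u, s, p, n=200):
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    total = 0.0
    for i, x in enumerate(xs):
        for j, y in enumerate(xs):
            if i != j:  # skip the (integrable) diagonal singularity
                total += abs(u(x) - u(y))**p / abs(x - y)**(1 + s * p) * h * h
    return total

# For u(x) = x, p = 2, s = 1/2 the integrand is identically 1, so the
# double integral equals 1; the grid sum misses only the diagonal cells.
val = gagliardo_p(lambda x: x, 0.5, 2.0, n=200)
assert abs(val - 1.0) < 0.02
```

The same discretization blows up for $s\geq 1$ (with $u(x)=x$, the exponent $p(1-s)-1$ drops to $-1$ or below), mirroring the well-known fact that the Gagliardo seminorm is only meaningful for non-integer smoothness.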
Similar to the optimality results for the ordinary distributional Jacobian and Hessian determinants in [@BN; @BJ], a natural question is whether the results in Theorem \[hm-thm-1\] are optimal in the framework of the spaces $W^{s,p}$. I.e., are the distributional $m$th-Jacobian minors of degree $r$ well defined in $W^{s,p}(\Omega, \mathbb{R}^n)$ if and only if $W^{s,p}(\Omega, \mathbb{R}^n)\subset W^{m-\frac{m}{r},r}(\Omega, \mathbb{R}^n)$? Such a question is connected with the construction of counter-examples in some special fractional Sobolev spaces. Indeed, the above conjecture is obviously correct in the case $r=1$. Our next results give a partial positive answer in the case $r>1$. \[hm-thm-2\] Let $m, r$ be integers with $1< r\leq \underline{n}$, and let $1<p<\infty$ and $0<s<\infty$ be such that $W^{s,p}(\Omega, \mathbb{R}^n) \nsubseteq W^{m-\frac{m}{r},r}(\Omega, \mathbb{R}^n)$. If the condition $$\label{hm-thm-2-for-1} 1<r<p, s=m-m/r ~\mbox{non-integer}$$ fails, then there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{m}(\overline{\Omega}, \mathbb{R}^n)$, multi-indices $\beta\in I(r,n)$, $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j\in I(r,N)$ and a function $\psi\in C_c^{\infty}(\Omega)$ such that $$\label{hm-thm-2-for-2} \lim_{k\rightarrow \infty} \|u_k\|_{s,p} =0, ~~~~\lim_{k\rightarrow \infty} \int_{\Omega} M^{\beta}_{\bm{\alpha}}(D^mu_k) \psi dx=\infty.$$ One still unanswered question is whether the above optimality results hold in case (\[hm-thm-2-for-1\]). We give some discussion in Sec. 4 and provide positive answers in the cases $m=1$ and $2$. Indeed \[hm-thm-3\] Let $m=1~\mbox{or}~2$ and $r,s,p$ be as in Theorem \[hm-thm-2\]. Then there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{m}(\overline{\Omega}, \mathbb{R}^n)$, multi-indices $\beta\in I(r,n)$, $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j\in I(r,N)$ and a function $\psi\in C_c^{\infty}(\Omega)$ such that (\[hm-thm-2-for-2\]) holds.
Furthermore, we give reinforced versions of the optimality results, see Theorem \[hm-thm-4-14\], for $u\in W^{2-\frac{2}{r},r}(\Omega)$ with $1<r\leq N$. We expect that there are reinforced versions of the optimality results for $W^{m-\frac{m}{r},r}(\Omega)$ ($m>2$); for instance, there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{m}(\overline{\Omega})$ and a function $\psi\in C_c^{\infty}(\Omega)$ such that $$\lim_{k\rightarrow \infty} \|u_k\|_{s,p} =0, ~~~~\lim_{k\rightarrow \infty} \int_{\Omega} M_{\bm{\alpha}}(D^mu_k) \psi dx=\infty$$ for any $s,p$ with $W^{s,p}(\Omega) \nsubseteq W^{m-\frac{m}{r},r}(\Omega)$. This paper is organized as follows. Some facts and notation about higher-dimensional determinants and hyper-Jacobians are given in Section 2. In Section 3 we establish the weak continuity results and the definitions of distributional hyper-Jacobian minors in fractional Sobolev spaces. We then turn to the question of optimality and obtain some positive results in Section 4. Higher dimensional determinants =============================== In this section we collect some notation and preliminary results for hyper-Jacobian determinants and minors. First we recall some notation and facts about ordinary determinants and minors; further details can be found in [@GMS]. Fix $0\leq k\leq n$; we shall use the standard notation for ordered multi-indices $$\label{subnotation01} I(k,n):=\{\alpha=(\alpha_1,\cdot\cdot\cdot,\alpha_k) \mid \alpha_i ~\mbox{integers}, 1\leq \alpha_1 <\cdot\cdot\cdot< \alpha_k\leq n\},$$ where $n \geq 2$. Set $I(0,n)=\{0\}$ and $|\alpha|=k$ if $\alpha \in I(k,n)$. For $\alpha\in I(k,n)$, 1. $\overline{\alpha}$ is the element in $I(n-k,n)$ which complements $\alpha$ in $\{1,2,\cdot\cdot\cdot,n\}$ in the natural increasing order. 2. $\alpha-i$ means the multi-index of length $k-1$ obtained by removing $i$ from $\alpha$ for any $i \in \alpha$. 3.
$\alpha+j$ means the multi-index of length $k+1$ obtained by adding $j$ to $\alpha$ for any $j\notin \alpha$. 4. $\sigma(\alpha,\beta)$ is the sign of the permutation which reorders $(\alpha,\beta)$ in the natural increasing order, for any multi-index $\beta$ with $\alpha\cap \beta=\emptyset$. In particular, set $\sigma(\overline{0},0):=1$. Let $n,N \geq 2$ and let $A=(a_{ij})_{n \times N}$ be an $n \times N$ matrix. Given two ordered multi-indices $\alpha\in I (k,N)$ and $\beta \in I(k,n)$, $A_{\alpha}^{\beta}$ denotes the $k \times k $-submatrix of $A$ obtained by selecting the rows and columns indexed by $\beta$ and $\alpha$, respectively. Its determinant will be denoted by $$M_{\alpha}^{\beta}(A):=\det A_{\alpha}^{\beta},$$ and we set $M_{0}^{0}(A):=1$. The adjoint of $A_{\alpha}^{\beta}$ is defined by the formula $$(\mbox{adj}~ A_{\alpha}^{\beta})_j^i:= \sigma(i,\beta-i) \sigma(j,\alpha-j) \det A_{\alpha-j}^{\beta-i},~~~~ i \in \beta, j\in \alpha.$$ Thus the Laplace expansion can be written as $$M_{\alpha}^{\beta}(A)= \sum_{j \in \alpha} a_{ij} (\mbox{adj}~ A_{\alpha}^{\beta})_j^i,~~~~ i\in\beta.$$ Next we turn to higher-dimensional matrices and determinants. An $m$-dimensional matrix $\bm{A}$ of order $N^m$ is a hypercubical array of $N^m$ entries, $$\bm{A}=(a_{l_1l_2\cdot\cdot\cdot l_m})_{N\times\cdot\cdot\cdot\times N},$$ where the index $l_i\in \{1,\cdot\cdot\cdot,N\}$ for any $1\leq i\leq m$. \[hm-def-2-1\] Let $\bm{A}$ be an $m$-dimensional matrix; then the (full signed) determinant of $\bm{A}$ is the number $$\det \bm{A}=\sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_N} \Pi_{s=2}^m\sigma(\tau_s) a_{1\tau_2(1)\cdot\cdot\cdot\tau_m(1)} a_{2 \tau_2(2)\cdot\cdot\cdot\tau_m(2)}\cdot\cdot\cdot a_{N\tau_2(N)\cdot\cdot\cdot\tau_m(N)},$$ where $S_N$ is the permutation group of $\{1,2,\cdot\cdot\cdot,N\}$ and $\sigma(\cdot)$ denotes the sign of a permutation.
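As a quick sanity check, Definition \[hm-def-2-1\] can be evaluated directly for small $m$ and $N$ by brute force over permutations. The following Python sketch (the function names `hyperdet`, `sign` and `entry` are ours, chosen purely for illustration) computes the full signed determinant, confirms that for $m=2$ it reduces to the ordinary determinant, and checks the sign behaviour under interchange of a pair of $i$-layers (cf. Lemma \[hm-lem-2-11\] below):

```python
from itertools import permutations, product

def sign(p):
    """Sign of a permutation p of {0, ..., N-1} (counted via inversions)."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def entry(A, idx):
    """Entry of a nested-list hypermatrix A at the multi-index idx."""
    for i in idx:
        A = A[i]
    return A

def hyperdet(A, m, N):
    """Full signed determinant of an m-dimensional matrix of order N^m:
    det A = sum over tau_2, ..., tau_m in S_N of
            prod_s sgn(tau_s) * prod_k a_{k, tau_2(k), ..., tau_m(k)}
    (0-based indices)."""
    total = 0
    for taus in product(permutations(range(N)), repeat=m - 1):
        s = 1
        for tau in taus:
            s *= sign(tau)
        term = 1
        for k in range(N):
            term *= entry(A, (k,) + tuple(tau[k] for tau in taus))
        total += s * term
    return total

# m = 2 recovers the ordinary determinant:
print(hyperdet([[1, 2], [3, 4]], 2, 2))                    # -2

# A 3-dimensional example (m = 3, N = 2):
A = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(hyperdet(A, 3, 2))                                   # -4
# Interchanging a pair of 2-layers (i = 2 >= 2) flips the sign:
print(hyperdet([[rows[1], rows[0]] for rows in A], 3, 2))  # 4
# Interchanging the two 1-layers multiplies by (-1)^(m-1) = +1 for m = 3:
print(hyperdet(A[::-1], 3, 2))                             # -4
```

The brute-force sum runs over $(N!)^{m-1}$ permutation tuples, so it is feasible only for tiny $m$ and $N$; it is meant solely as a check of the sign conventions, not as an efficient algorithm.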
For any $1\leq i\leq m$ and $1\leq j\leq N$, the $j$-th $i$-layer of $\bm{A}$, denoted by $\bm{A}|_{l_i=j}$, is the $(m-1)$-dimensional matrix, generalizing the notion of rows and columns of ordinary matrices, defined by $$\bm{A}|_{l_i=j}:=(a_{l_1l_2\cdot\cdot l_{i-1}jl_{i+1}\cdot\cdot\cdot l_m})_{N\times\cdot\cdot\cdot\times N}.$$ From Definition \[hm-def-2-1\] we easily obtain: \[hm-lem-2-11\] Let $\bm{A}$ be an $m$-dimensional matrix and $1\leq i\leq m$. If $\bm{A}'$ is the matrix obtained from $\bm{A}$ by interchanging a pair of $i$-layers, then $$\det \bm{A}'= \begin{cases} (-1)^{m-1}\det \bm{A}~~~~~~~~i=1,\\ -\det \bm{A}~~~~~~~~ i\geq 2. \end{cases}$$ For any $\bm{A}$ and $1\leq i<j\leq m$, the $(i,j)$-transposition of $\bm{A}$, denoted by $\bm{A}^{T(i,j)}:=(a'_{l_1l_2\cdot\cdot\cdot l_m})_{N\times\cdot\cdot\cdot\times N}$, is the $m$-dimensional matrix defined by $$a'_{l_1,\cdot\cdot\cdot,l_i,\cdot\cdot\cdot,l_j,\cdot\cdot\cdot,l_m}=a_{l_1,\cdot\cdot\cdot,l_j,\cdot\cdot\cdot,l_i,\cdot\cdot\cdot,l_m}$$ for any $l_1,\cdot\cdot\cdot,l_m=1,\cdot\cdot\cdot,N$. Then we have: Let $\bm{A}$ be an $m$-dimensional matrix and $1\leq i<j\leq m$. If $m$ is odd and $1<i<j\leq m$, or if $m$ is even, then $$\det \bm{A}^{T(i,j)}=\det \bm{A}.$$ By the definition of the $m$-dimensional determinant, it suffices to show the claim in the case where $m$ is even, $i=1$ and $j=2$.
$$\begin{split} \det \bm{A}&=\sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_N} \Pi_{s=2}^m\sigma(\tau_s) a_{1\tau_2(1)\cdot\cdot\cdot\tau_m(1)} a_{2 \tau_2(2)\cdot\cdot\cdot\tau_m(2)}\cdot\cdot\cdot a_{N\tau_2(N)\cdot\cdot\cdot\tau_m(N)}\\ &=\sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_N} \Pi_{s=2}^m\sigma(\tau_s) a_{\tau_2^{-1}(1)1\tau_3\circ \tau^{-1}_2(1)\cdot\cdot\cdot\tau_m\circ\tau_2^{-1}(1)} a_{\tau_2^{-1}(2) 2\tau_3\circ\tau_2^{-1}(2)\cdot\cdot\cdot\tau_m\circ\tau_2^{-1}(2)}\cdot\cdot\cdot a_{\tau_2^{-1}(N) N \tau_3\circ\tau_2^{-1}(N)\cdot\cdot\cdot\tau_m\circ \tau_2^{-1}(N)}\\ &=\sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_N} (\sigma(\tau_2))^{m-2} \sigma(\tau^{-1}_2)\sigma(\tau_3\circ\tau^{-1}_2)\cdot\cdot\cdot \sigma(\tau_m\circ\tau^{-1}_2)\\ &\cdot a'_{1\tau_2^{-1}(1)\tau_3\circ \tau^{-1}_2(1)\cdot\cdot\cdot\tau_m\circ\tau_2^{-1}(1)} a'_{2\tau_2^{-1}(2) \tau_3\circ\tau_2^{-1}(2)\cdot\cdot\cdot\tau_m\circ\tau_2^{-1}(2)}\cdot\cdot\cdot a'_{N\tau_2^{-1}(N) \tau_3\circ\tau_2^{-1}(N)\cdot\cdot\cdot\tau_m\circ \tau_2^{-1}(N)}\\ &=\sum_{\tau'_2,\cdot\cdot\cdot,\tau'_m\in S_N} \Pi_{s=2}^m\sigma(\tau'_s) a'_{1\tau'_2(1)\tau'_3(1)\cdot\cdot\cdot\tau'_m(1)} a'_{2\tau'_2(2) \tau'_3(2)\cdot\cdot\cdot\tau'_m(2)}\cdot\cdot\cdot a'_{N\tau'_2(N) \tau'_3(N)\cdot\cdot\cdot\tau'_m(N)}. \end{split}$$ More generally, let $\bm{A}$ be an $m$-dimensional matrix of order $N_1\times \cdot\cdot\cdot\times N_m$, let $1\leq r\leq \min\{N_1,\cdot\cdot\cdot,N_m\}$, and consider a multi-index $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ where $\alpha^j:=(\alpha^j_1,\cdot\cdot\cdot,\alpha^j_r)$, $\alpha^j_i \in \{1,2,\cdot\cdot\cdot,N_j\}$ and $\alpha^j_{i_1}\neq \alpha^j_{i_2}$ for $i_1\neq i_2$.
Define the $\bm{\alpha}$-minor of $\bm{A}$, denoted by $\bm{A}_{\bm{\alpha}}$, to be the $m$-dimensional matrix of order $r^m$ given by $$\bm{A}_{\bm{\alpha}}=(b_{l_1l_2\cdot\cdot\cdot l_m})_{r\times\cdot\cdot\cdot\times r},$$ where $b_{l_1l_2\cdot\cdot\cdot l_m}:=a_{\alpha^1_{l_1}\alpha^2_{l_2}\cdot\cdot\cdot\alpha^m_{l_m}}$. Its determinant will be denoted by $$M_{\bm{\alpha}}(\bm{A}):=\det \bm{A}_{\bm{\alpha}}.$$ If $\alpha^j$ is not increasing, let $\widetilde{\alpha^j}$ be the increasing multi-index generated by $\alpha^j$ and $\widetilde{\bm{\alpha}}:=(\widetilde{\alpha^1},\cdot\cdot\cdot,\widetilde{\alpha^m})$; then Lemma \[hm-lem-2-11\] implies that $M_{\bm{\alpha}}(\bm{A})$ and $M_{\widetilde{\bm{\alpha}}}(\bm{A})$ differ only by a sign. Without loss of generality, we can therefore assume $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j\in I(r,N_j)$. Moreover, we set $M_{\bm{0}}(\bm{A}):=1$. Next we turn to hyper-Jacobian determinants and minors of a map $u\in C^{m}(\Omega, \mathbb{R}^n)$. We denote by $D^mu$ the hyper-Jacobian matrix of $u$; more precisely, $D^m u$ is an $(m+1)$-dimensional matrix of order $n\times N\times\cdot\cdot\cdot\times N$ given by $$D^mu:=(a_{l_1l_2\cdot\cdot\cdot l_{m+1}})_{n\times N\times\cdot\cdot\cdot\times N}$$ where $$a_{l_1l_2\cdot\cdot\cdot l_{m+1}}=\partial_{l_2}\partial_{l_3}\cdot\cdot\cdot\partial_{l_{m+1}} u^{l_1}.$$ Then for any $\beta\in I(r,n)$, $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j\in I(r,N)$ and $1\leq r\leq \min\{n,N\}$, the $m$th-Jacobian $(\beta, \bm{\alpha})$-minor of $u$, denoted by $M^{\beta}_{\bm{\alpha}}(D^mu)$, is the determinant of the $(\beta, \bm{\alpha})$-minor of $D^mu$, i.e., $$\label{hm-pre-for-1} M^{\beta}_{\bm{\alpha}}(D^mu):=M_{(\beta,\bm{\alpha})}(D^m u).$$ In particular, if $N=n$ and $\beta=\alpha^1=\cdot\cdot\cdot=\alpha^m=\{1,2,\cdot\cdot\cdot,N\}$, then $\det (D^mu)$ is called the $m$-th Jacobian determinant of $u$.
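To make the definition concrete in the simplest nontrivial case $m=2$, $n=N=2$: here $D^2u$ is the $2\times 2\times 2$ array $a_{l_1 l_2 l_3}=\partial_{l_2}\partial_{l_3}u^{l_1}$, i.e. the pair of Hessians of the two components. The following Python sketch (function names are ours, for illustration only) evaluates the $2$nd Jacobian determinant for two quadratic maps whose Hessians are constant and were computed by hand:

```python
from itertools import permutations, product

def sign(p):
    """Sign of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def second_jacobian_det(A, N=2):
    """det(D^2 u) for m = 2, n = N: sum over tau_2, tau_3 in S_N of
    sgn(tau_2) sgn(tau_3) * prod_k a_{k, tau_2(k), tau_3(k)}."""
    total = 0
    for t2, t3 in product(permutations(range(N)), repeat=2):
        term = sign(t2) * sign(t3)
        for k in range(N):
            term *= A[k][t2[k]][t3[k]]
        total += term
    return total

# u(x) = (x1^2/2, x2^2/2): the Hessians of u^1, u^2 are diag(1,0), diag(0,1).
D2u = [[[1, 0], [0, 0]],
       [[0, 0], [0, 1]]]
print(second_jacobian_det(D2u))   # 1

# u(x) = (x1*x2, (x1^2 - x2^2)/2): Hessians [[0,1],[1,0]] and [[1,0],[0,-1]];
# here the 2nd Jacobian determinant vanishes identically.
D2v = [[[0, 1], [1, 0]],
       [[1, 0], [0, -1]]]
print(second_jacobian_det(D2v))   # 0
```

Note also that for a map with equal components, e.g. $u=(v,v)$ with $v=(x_1^2+x_2^2)/2$, the same routine returns $2$, i.e. $2!$ times the $2$-dimensional determinant of the (identity) Hessian of $v$.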
Similarly, the hyper-Jacobian matrix $D^m u$ of a scalar function $u\in C^{m}(\Omega)$ is an $m$-dimensional matrix of order $N\times\cdot\cdot\cdot\times N$, and the $m$th-Jacobian $\bm{\alpha}$-minor of $u$ is defined by $M_{\bm{\alpha}}(D^mu)$. In order to prove the main results, we introduce some lemmas which follow easily from the definition of hyper-Jacobian minors. \[hm-lem-2-2\] Let $u=(v,\cdots,v)\in C^{m}(\Omega, \mathbb{R}^n)$ with $v\in C^{m}(\Omega)$. For any $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j\in I(r,N)$, $1\leq r\leq \underline{n}$, $$M^{\beta}_{\bm{\alpha}}(D^mu)= \begin{cases} r! M_{\bm{\alpha}}(D^mv)~~~~m~\mbox{is even},\\ 0~~~~~~~~~~~~~~~~~m~\mbox{is odd}. \end{cases}$$ \[hm-lem-2-1\] Let $u\in C^{m}(\Omega, \mathbb{R}^n)$, $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j\in I(r,N)$, $1\leq r\leq \underline{n}$. Then for any $1\leq i\leq m$, $$M^{\beta}_{\bm{\alpha}}(D^mu)=\sum_{\tau_1,\cdot\cdot\cdot,\tau_{i-1},\tau_{i+1},\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s\in \overline{i}}\sigma(\tau_s) M^{\overline{0}}_{\alpha^i} (Dv(i)),$$ where $M^{\overline{0}}_{\alpha^i}(\cdot)$ denotes the ordinary minor and $v(i)\in C^1(\Omega, \mathbb{R}^r)$ can be written as $$v^j(i)=\partial_{\alpha^1_{\tau_1(j)}}\cdot\cdot\cdot \partial_{\alpha^{i-1}_{\tau_{i-1}(j)}} \partial_{\alpha^{i+1}_{\tau_{i+1}(j)}} \cdot\cdot\cdot \partial_{\alpha^{m}_{\tau_{m}(j)}}u^{\beta_j},~~~~~~j=1,\cdots,r.$$ Hyper-Jacobians in fractional Sobolev spaces ============================================ In this section we establish the weak continuity results for hyper-Jacobian minors in the fractional Sobolev spaces $W^{m-\frac{m}{q},q}(\Omega, \mathbb{R}^n)$.
For $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$, we set $$\bm{\widetilde{\alpha}}=(\alpha^1+(N+1),\cdot\cdot\cdot,\alpha^m+(N+m)), R(\bm{\widetilde{\alpha}}):=\{(i_1,\cdot\cdot\cdot,i_m)\mid i_j\in \alpha^j+(N+j)\}.$$ For any $I=(i_1,\cdot\cdot\cdot,i_m)\in R(\bm{\widetilde{\alpha}})$, $$\widetilde{\bm{\alpha}}-I:=(\alpha^1+(N+1)-i_1,\cdot\cdot\cdot,\alpha^m+(N+m)-i_m);$$ $$\sigma(\widetilde{\bm{\alpha}}-I,I):=\Pi_{s=1}^m \sigma(\alpha^s+(N+s)-i_s,i_s);$$ $$\partial_I:=\partial_{x_{i_1}}\cdot\cdot\cdot\partial_{x_{i_m}};~~~~ \widetilde{x}:=(x_1,\cdots,x_N,x_{N+1},\cdots,x_{N+m}).$$ We begin with the following simple lemma: \[hm-lem-3-1\] Let $u \in C^m(\Omega, \mathbb{R}^n)$, $\psi\in C_c^m(\Omega)$, $0\leq r\leq \underline{n}:=\min\{n,N\}$, $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$ ($1\leq j\leq m$). Then $$\label{hm-lem-3-1} \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^mu)\psi dx=\sum_{I\in R(\bm{\widetilde{\alpha}})}(-1)^m\sigma(\widetilde{\bm{\alpha}}-I,I)\int_{\Omega\times [0,1)^m} M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mU) \partial_{I} \Psi d\widetilde{x},$$ for any extensions $U\in C^m(\Omega\times[0,1)^m,\mathbb{R}^n)\cap C^{m+1}(\Omega\times (0,1)^m,\mathbb{R}^n)$ and $\Psi\in C^m_c(\Omega\times[0,1)^m,\mathbb{R})$ of $u$ and $\psi$, respectively. The results are easy to show in the cases $r=0,1$ or $\underline{n}=1$, so we give the proof only for the case $2\leq r\leq \underline{n}$. Denote $$U_i:=\begin{cases} U|_{x_{N+i+1}=\cdot\cdot\cdot=x_{N+m}=0},~~~~1\leq i\leq m-1,\\ U,~~~~i=m. \end{cases} \Psi_i:=\begin{cases} \Psi|_{x_{N+i+1}=\cdot\cdot\cdot=x_{N+m}=0},~~~~1\leq i\leq m-1,\\ \Psi,~~~~i=m.
\end{cases}$$ $$\Omega_i:=\Omega\times [0,1)_{x_{N+1}}\times \cdot\cdot\cdot\times [0,1)_{x_{N+i}};~~~~\widetilde{x_i}:=(x, x_{N+1},\cdot\cdot\cdot x_{N+i}).$$ Applying the fundamental theorem of calculus and the definition of $M_{\bm{\alpha}}^{\beta}(D^mu)$, we have $$\label{hm-lem-3-for-1} \begin{split} \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^mu)\psi dx&=-\int_{\Omega_1} \partial_{N+1} \left(M_{\bm{\alpha}}^{\beta}(D^mU_1)\Psi_1\right) d\widetilde{x_1}\\ &=-\int_{\Omega_1} \partial_{N+1}M_{\bm{\alpha}}^{\beta}(D^mU_1)\Psi_1 d\widetilde{x_1}-\int_{\Omega_1}M_{\bm{\alpha}}^{\beta}(D^mU_1)\partial_{N+1}\Psi_1 d\widetilde{x_1}. \end{split}$$ According to Lemma \[hm-lem-2-1\], $M_{\bm{\alpha}}^{\beta}(D^mU_1)$ can be written as $$M_{\bm{\alpha}}^{\beta}(D^mU_1)= \sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s=2}^m\sigma(\tau_s) M^{\overline{0}}_{\alpha^1} (DV_1),$$ where $\overline{0}:=\{1,2,\cdot\cdot\cdot,r\}$ and $$V_1(\widetilde{x_1}):=(V_1^1(\widetilde{x_1}),\cdot\cdot\cdot,V_1^r(\widetilde{x_1})),~~~~V^j_1=\partial_{\alpha^2_{\tau_2(j)}}\cdot\cdot\cdot\partial_{\alpha^{m}_{\tau_{m}(j)}}U_1^{\beta_j}.$$ Then $$\label{hm-lem-3-for-12} \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^mu)\psi dx=\sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s=2}^m\sigma(\tau_s) \left\{-\int_{\Omega_1} \partial_{N+1}M^{\overline{0}}_{\alpha^1} (DV_1)\Psi_1 d\widetilde{x_1}-\int_{\Omega_1}M^{\overline{0}}_{\alpha^1} (DV_1)\partial_{N+1}\Psi_1 d\widetilde{x_1} \right\}.$$ Denote the first integral on the right-hand side by $I$; the Laplace expansion of ordinary ($2$-dimensional) minors implies that $$\begin{split} I&=-\sum_{i\in \alpha^1} \sum_{j=1}^r \int_{\Omega_1} \sigma(i, \alpha^1-i) \sigma(j, \overline{0}-j) \partial_{N+1}\partial_i V_1^j M^{\overline{0}-j}_{\alpha^1-i} (DV_1)\Psi_1 d\widetilde{x_1}\\ &=\sum_{i\in \alpha^1} \sum_{j=1}^r \int_{\Omega_1} \sigma(i, \alpha^1-i) \sigma(j, \overline{0}-j) \partial_{N+1} V_1^j \left(\partial_i M^{\overline{0}-j}_{\alpha^1-i} (DV_1)\Psi_1+
M^{\overline{0}-j}_{\alpha^1-i} (DV_1)\partial_i\Psi_1\right) d\widetilde{x_1}.\\ \end{split}$$ Since $$\sum_{i\in \alpha^1} \sigma(i, \alpha^1-i) \sigma(j, \overline{0}-j)\partial_i M^{\overline{0}-j}_{\alpha^1-i} (DV_1)=0$$ for any $j$, it follows that $$\begin{split} I&=\sum_{i\in \alpha^1} \sum_{j=1}^r \int_{\Omega_1} \sigma(i, \alpha^1-i) \sigma(j, \overline{0}-j) \partial_{N+1} V_1^j M^{\overline{0}-j}_{\alpha^1-i} (DV_1)\partial_i\Psi_1 d\widetilde{x_1}\\ &=\sum_{i\in \alpha^1} \int_{\Omega_1} \sigma(i, \alpha^1-i) \sigma(N+1, \alpha^1-i) M^{\overline{0}}_{\alpha^1+(N+1)-i} (DV_1)\partial_i\Psi_1 d\widetilde{x_1}\\ &=-\sum_{i\in \alpha^1} \int_{\Omega_1} \sigma(\alpha^1+(N+1)-i,i) M^{\overline{0}}_{\alpha^1+(N+1)-i} (DV_1)\partial_i\Psi_1 d\widetilde{x_1}. \end{split}$$ Combining this with (\[hm-lem-3-for-12\]), we obtain $$\int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^mu)\psi dx= -\sum_{i_1\in \alpha^1+(N+1)} \sigma(\alpha^1+(N+1)-i_1,i_1) \sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s=2}^m\sigma(\tau_s) \int_{\Omega_1} M^{\overline{0}}_{\alpha^1+(N+1)-i_1} (DV_1)\partial_{i_1}\Psi_1 d\widetilde{x_1}.$$ For any $i_1\in \alpha^1+(N+1)$, we denote $\gamma:=\alpha^1+(N+1)-i_1$; then $$\label{hm-lem-3-for-2} \begin{split} &\sum_{\tau_2,\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s=2}^m\sigma(\tau_s) M^{\overline{0}}_{\alpha^1+(N+1)-i_1} (DV_1)=\sum_{\tau_1,\tau_2,\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s=1}^m\sigma(\tau_s) \partial_{\gamma_{\tau_1(1)}}V_1^1 \cdot\cdot\cdot \partial_{\gamma_{\tau_1(r)}}V_1^r\\ &=\sum_{\tau_1,\tau_2,\cdot\cdot\cdot,\tau_m\in S_r} \Pi_{s=1}^m\sigma(\tau_s) \left(\partial_{\gamma_{\tau_1(1)}}\partial_{\alpha^2_{\tau_2(1)}} \cdot\cdot\cdot \partial_{\alpha^m_{\tau_m(1)}} U_1^{\beta_1}\right)\cdot\cdot\cdot \left( \partial_{\gamma_{\tau_1(r)}} \partial_{\alpha^2_{\tau_2(r)}} \cdot\cdot\cdot \partial_{\alpha^m_{\tau_m(r)}} U_1^{\beta_r}\right)\\ &= M^{\beta}_{\bm{\alpha}(i_1)} (D^m U_1), \end{split}$$ where
$\bm{\alpha}(i_1):=(\alpha^1+(N+1)-i_1,\alpha^2,\cdot\cdot\cdot,\alpha^m )$. Hence $$\begin{split} \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^mu)\psi dx&= -\sum_{i_1\in \alpha^1+(N+1)} \sigma(\alpha^1+(N+1)-i_1,i_1) \int_{\Omega_1} M^{\beta}_{\bm{\alpha}(i_1)} (D^m U_1)\partial_{i_1}\Psi_1 d\widetilde{x_1}\\ &=\sum_{i_1\in \alpha^1+(N+1)} \sigma(\alpha^1+(N+1)-i_1,i_1) \int_{\Omega_2} \partial_{N+2} \left( M^{\beta}_{\bm{\alpha}(i_1)} (D^m U_2)\partial_{i_1}\Psi_2\right) d\widetilde{x_2}. \end{split}$$ An easy induction, together with an argument similar to the one used in (\[hm-lem-3-for-1\])-(\[hm-lem-3-for-2\]), shows that $$\begin{split} \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^mu)\psi dx=(-1)^j\sum_{s=1}^j\sum_{i_s\in \alpha^s+(N+s)} \Pi_{s=1}^j \sigma(\alpha^s+(N+s)-i_s,i_s) \int_{\Omega_j} M_{\bm{\alpha}(i_1i_2\cdot\cdot\cdot i_j)}^{\beta}(D^mU_j)\partial_{i_1i_2\cdot\cdot\cdot i_j}\Psi_jd\widetilde{x_j} \end{split}$$ for any $1\leq j\leq m$, where $$\bm{\alpha}(i_1i_2\cdot\cdot\cdot i_j):=(\alpha^{1}+(N+1)-i_1,\cdot\cdot\cdot,\alpha^{j}+(N+j)-{i_j},\alpha^{j+1},\cdot\cdot\cdot,\alpha^{m}).$$ \[hm-lem-3-2\] Let $u,v \in C^m(\Omega, \mathbb{R}^n)$, $\psi\in C_c^m(\Omega)$ and $2\leq q\leq \underline{n}$. Then for any $1\leq r\leq q$, $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$, $$\left|\int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^m u) \psi dx- \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^m v) \psi dx\right|\leq C\|u-v\|_{W^{m-\frac{m}{q},q}}(\|u\|^{r-1}_{W^{m-\frac{m}{q},q}}+ \|v\|^{r-1}_{W^{m-\frac{m}{q},q}}) \|D^m \psi\|_{L^{\infty}},$$ where the constant $C$ depends only on $q,r,m,n,N$ and $\Omega$.
Let $\widetilde{u}$ and $\widetilde{v}$ be extensions of $u$ and $v$ to $\mathbb{R}^N$ such that $$\|\widetilde{u}\|_{W^{m-\frac{m}{q},q}(\mathbb{R}^N,\mathbb{R}^n)}\leq C \|u\|_{W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)},~~~~\|\widetilde{v}\|_{W^{m-\frac{m}{q},q}(\mathbb{R}^N,\mathbb{R}^n)}\leq C\|v\|_{W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)}$$ and $$\|\widetilde{u}-\widetilde{v}\|_{W^{m-\frac{m}{q},q}(\mathbb{R}^N,\mathbb{R}^n)}\leq C \|u-v\|_{W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)},$$ where $C$ depends only on $q,m,n,N$ and $\Omega$. According to a well-known trace theorem of Stein [@STE1; @STE2], by which $W^{m-\frac{m}{q},q}(\mathbb{R}^N)$ is identified with the space of traces of $W^{m,q}(\mathbb{R}^N\times(0,+\infty)^m)$, there is a bounded linear extension operator $$E:W^{m-\frac{m}{q},q}(\mathbb{R}^N,\mathbb{R}^n)\rightarrow W^{m,q}(\mathbb{R}^N\times (0,+\infty)^m,\mathbb{R}^n).$$ Let $U$ and $V$ be the extensions of $\widetilde{u}$ and $\widetilde{v}$ to $\mathbb{R}^N\times (0,+\infty)^m$, respectively, i.e., $$U=E\widetilde{u},~~V=E\widetilde{v}.$$ We then have $$\|D^mU\|_{L^{q}(\Omega \times (0,1)^m)}\leq C\|u\|_{W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)},~~~~\|D^mV\|_{L^{q}(\Omega \times (0,1)^m)}\leq C \|v\|_{W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)}$$ and $$\|D^mU-D^mV\|_{L^{q}(\Omega \times (0,1)^m)}\leq C \|u-v\|_{W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)}.$$ Let $\Psi \in C^m_c(\Omega\times [0,1)^m)$ be an extension of $\psi$ such that $$\|D^m\Psi\|_{L^{\infty}(\Omega\times [0,1)^m)}\leq C\|D^m\psi\|_{L^{\infty}(\Omega)}.$$ According to Lemma \[hm-lem-3-1\], we have $$\label{hm-lem-for-3-31} \begin{split} &\left|\int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^m u) \psi dx- \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^m v) \psi dx\right|\leq \sum_{I\in R(\bm{\widetilde{\alpha}})}\int_{\Omega\times [0,1)^m} \left|M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mU)-M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mV)\right| |\partial_{I} \Psi| d\widetilde{x}\\ &\leq \| D^m
\Psi\|_{L^{\infty}(\Omega\times [0,1)^m)} \sum_{I\in R(\bm{\widetilde{\alpha}})}\int_{\Omega\times [0,1)^m} \left|M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mU)-M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mV)\right| d\widetilde{x}. \end{split}$$ Note that for any $I\in R(\bm{\widetilde{\alpha}})$, $$\begin{split} &\left|M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mU)-M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mV)\right|\\ &\leq \sum_{\tau_1,\cdot\cdot\cdot, \tau_m\in S_r} |\partial_{\tau_1(1)\cdot\cdot\cdot\tau_m(1)}U^{\beta_1} \cdot\cdot\cdot \partial_{\tau_1(r)\cdot\cdot\cdot\tau_m(r)}U^{\beta_r}- \partial_{\tau_1(1)\cdot\cdot\cdot\tau_m(1)}V^{\beta_1}\cdot\cdot\cdot \partial_{\tau_1(r)\cdot\cdot\cdot\tau_m(r)}V^{\beta_r}| \\ &\leq \sum_{\tau_1,\cdot\cdot\cdot, \tau_m\in S_r} \sum_{s=1}^{r} |D^mU|^{s-1}|D^mU-D^mV||D^mV|^{r-s}\\ &\leq C|D^mU-D^mV|(|D^mU|^{r-1}+|D^mV|^{r-1}). \end{split}$$ Combining this with (\[hm-lem-for-3-31\]), we easily obtain $$\begin{split} &\left|\int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^m u) \psi dx- \int_{\Omega} M_{\bm{\alpha}}^{\beta}(D^m v) \psi dx\right|\\ &\leq C \int_{\Omega\times [0,1)^m} |D^mU-D^mV|(|D^mU|^{r-1}+|D^mV|^{r-1}) d\widetilde{x} \|D^m\Psi\|_{L^{\infty}(\Omega\times [0,1)^m)}\\ &\leq C \|u-v\|_{W^{m-\frac{m}{q},q}}(\|u\|^{r-1}_{W^{m-\frac{m}{q},q}}+ \|v\|^{r-1}_{W^{m-\frac{m}{q},q}}) \|D^m \psi\|_{L^{\infty}}. \end{split}$$ Thanks to the above lemma, we can now define the distributional $m$th-Jacobian minors of $u$ of degree not exceeding $q$ for $u\in W^{m-\frac{m}{q},q}(\Omega, \mathbb{R}^n)$ ($2\leq q\leq \underline{n}$). \[hm-def-3-1\] Let $u\in W^{m-\frac{m}{q},q}(\Omega, \mathbb{R}^n)$ with $2\leq q \leq \underline{n}$.
For any $0\leq r\leq q$, $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$, the distributional $m$th-Jacobian $(\beta, \bm{\alpha})$-minor of $u$, denoted by $\mbox{Div}_{\bm{\alpha}}^{\beta}(D^mu)$, is defined by $$\langle \mbox{Div}_{\bm{\alpha}}^{\beta}(D^mu), \psi \rangle:= \begin{cases} \int_{\Omega} \psi(x)dx,~~~~~~~~~~~r=0; \\ \lim_{k\rightarrow \infty} \int_{\Omega} M^{\beta}_{\bm{\alpha}}(D^mu_k)\psi dx,~~~~ 1\leq r\leq q\\ \end{cases}$$ for any $\psi\in C^m_c(\Omega)$ and any sequence $\{u_k\}_{k=1}^{\infty}\subset C^m(\overline{\Omega},\mathbb{R}^n)$ such that $u_k\rightarrow u$ in $W^{m-\frac{m}{q},q}(\Omega,\mathbb{R}^n)$. This quantity is well-defined by Lemma \[hm-lem-3-2\] and the fact that $C^m(\overline{\Omega}, \mathbb{R}^n)$ is dense in $W^{m-\frac{m}{q},q}(\Omega, \mathbb{R}^n)$. It is clear that Theorem \[hm-thm-1\] is a consequence of Lemma \[hm-lem-3-2\] and Definition \[hm-def-3-1\]. Using the trace theory and the approximation argument above, we obtain a fundamental representation of the distributional $m$-th Jacobian minors in $W^{m-\frac{m}{q},q}$. \[hm-pro-3-1\] Let $u\in W^{m-\frac{m}{q},q}(\Omega, \mathbb{R}^n)$ with $2\leq q \leq \underline{n}$. For any $0\leq r\leq q$, $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$, $$\int_{\Omega} \mbox{Div}_{\bm{\alpha}}^{\beta}(D^mu)\psi dx=\sum_{I\in R(\bm{\widetilde{\alpha}})}(-1)^m\sigma(\widetilde{\bm{\alpha}}-I,I)\int_{\Omega\times [0,1)^m} M_{\widetilde{\bm{\alpha}}-I}^{\beta}(D^mU) \partial_{I} \Psi d\widetilde{x}$$ for any extensions $U\in W^{m,q}(\Omega\times[0,1)^m,\mathbb{R}^n)$ and $\Psi\in C^m_c(\Omega\times[0,1)^m)$ of $u$ and $\psi$, respectively. Note that the $m$-dimensional matrix $D^m u$ is symmetric if $u\in C^m(\Omega)$, i.e., $(D^mu)^{T(i,j)}=D^mu$ for any $1\leq i<j\leq m$.
An argument similar to the one used in Lemmas \[hm-lem-3-1\] and \[hm-lem-3-2\] shows that: \[hm-cor-3-1\] Let $u\in W^{m-\frac{m}{q},q}(\Omega)$ with $2\leq q \leq N$ and $m\geq 2$. For any $0\leq r\leq q$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdot\cdot\cdot,\alpha^m)$ with $\alpha^j \in I(r,N)$, the $m$-th Jacobian $\bm{\alpha}$-minor operator $u \longmapsto M_{\bm{\alpha}}(D^mu):C^m(\Omega)\rightarrow \mathcal{D}'(\Omega)$ can be extended uniquely to a continuous mapping $u \longmapsto \mbox{Div}_{\bm{\alpha}}(D^mu):W^{m-\frac{m}{q},q}(\Omega)\rightarrow \mathcal{D}'(\Omega)$. Moreover, for all $u,v\in W^{m-\frac{m}{q},q}(\Omega)$, $\psi \in C^{\infty}_c(\Omega,\mathbb{R})$ and $1\leq r\leq q$, we have $$\begin{split}\left|\langle\mbox{Div}_{\bm{\alpha}}(D^mu)-\mbox{Div}_{\bm{\alpha}}(D^mv),\psi\rangle\right|\leq C_{r,q,N,\Omega}\|u-v\|_{W^{m-\frac{m}{q},q}}\left(\|u\|_{W^{m-\frac{m}{q},q}}^{r-1} +\|v\|_{W^{m-\frac{m}{q},q}}^{r-1}\right)\|D^m\psi\|_{L^{\infty}}, \end{split}$$ where the constant depends only on $r, q, N$ and $ \Omega$. In particular, the distributional minor $\mbox{Div}_{\bm{\alpha}}(D^mu)$ can be expressed as $$\int_{\Omega} \mbox{Div}_{\bm{\alpha}}(D^mu)\psi dx=\sum_{I\in R(\bm{\widetilde{\alpha}})}(-1)^m\sigma(\widetilde{\bm{\alpha}}-I,I)\int_{\Omega\times [0,1)^m} M_{\widetilde{\bm{\alpha}}-I}(D^mU) \partial_{I} \Psi d\widetilde{x}$$ for any extensions $U\in W^{m,q}(\Omega\times[0,1)^m)$ and $\Psi\in C^m_c(\Omega\times[0,1)^m)$ of $u$ and $\psi$, respectively. The optimality results in fractional Sobolev spaces =================================================== In this section we establish the optimality of Theorem 1 in the framework of the spaces $W^{s,p}$.
Before proving the main results, we state some useful known results (see [@BM2 Theorem 1 and Proposition 5.3]): \[hm-lem-4\] For $0\leq s_1<s_2<\infty$, $1\leq p_1, p_2,p\leq \infty$, $s=\theta s_1+(1-\theta) s_2$, $\frac{1}{p}=\frac{\theta}{p_1}+\frac{1-\theta}{p_2}$ and $0<\theta<1$, the inequality $$\|f\|_{W^{s,p}(\Omega)}\leq C \|f\|_{W^{s_1,p_1}(\Omega)}^{\theta} \|f\|_{W^{s_2,p_2}(\Omega)}^{1-\theta}$$ holds if and only if the following condition fails: $$s_2\geq 1~\mbox{is an integer}, ~p_2=1~\mbox{and}~s_2-s_1\leq 1-\frac{1}{p_1}.$$ \[hm-pro-1\] The following equalities of spaces hold: 1. $W^{s,p}(\Omega)=F^s_{p,p}(\Omega)$ if $s>0$ is a non-integer and $1\leq p\leq \infty$. 2. $W^{s,p}(\Omega)=F^s_{p,2}(\Omega)$ if $s\geq 0$ is an integer and $1<p<\infty$. The definition of the Triebel-Lizorkin spaces $F^s_{p,q}$ can be found in [@BM2; @TH]. \[hm-rem-41\] If $1<r\leq N$, according to the embedding properties of the Triebel-Lizorkin spaces $F^s_{p,q}$, see e.g. [@TH page 196], and Proposition \[hm-pro-1\], we consider all possible cases: 1. $s-m+\frac{m}{r}>\max\{0,\frac{N}{p}-\frac{N}{r}\}$: the embedding $W^{s,p}(\Omega)\subset W^{m-\frac{m}{r},r}(\Omega)$ holds; 2. $s-m+\frac{m}{r}<\max\{0,\frac{N}{p}-\frac{N}{r}\}$: the embedding fails; 3. $s-m+\frac{m}{r}=\max\{0,\frac{N}{p}-\frac{N}{r}\}$: there are three sub-cases: 1. if $p\leq r$, then the embedding $W^{s,p}(\Omega)\subset W^{m-\frac{m}{r},r}(\Omega)$ holds; 2. if $p>r$ and $m-\frac{m}{r}$ is an integer, the embedding $W^{s,p}(\Omega)\subset W^{m-\frac{m}{r},r}(\Omega)$ holds; 3. if $p>r$ and $m-\frac{m}{r}$ is a non-integer, the embedding fails. To establish the optimality results, it therefore suffices to consider three cases: $$\begin{split} &(1)~ 1<p\leq r,~ s+\frac{m}{r}<m+\frac{N}{p}-\frac{N}{r};\\ &(2)~1< r<p,~ 0<s<m-\frac{m}{r};\\ &(3)~ 1<r<p,~ s=m-m/r ~\mbox{non-integer}.
\end{split}$$ Without loss of generality, one may assume that $n=N$, $(-8,8)^N\subset \Omega$, and $\bm{\alpha'}=(\alpha',\cdots,\alpha')$ with $\alpha'=(1,2,\cdot\cdot\cdot,r)$. First we establish the optimality results in the case $1< r<p$, $0<s<m-\frac{m}{r}$. \[hm-pro-41\] Let $m, r$ be integers with $1< r\leq \underline{n}$, $p>r$ and $0<s<m-\frac{m}{r}$. Then there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{m}(\overline{\Omega}, \mathbb{R}^N)$ and a function $\psi\in C_c^{\infty}(\Omega)$ such that $$\label{hm-thm-for-1} \lim_{k\rightarrow \infty} \|u_k\|_{s,p} =0, ~~~~\lim_{k\rightarrow \infty} \int_{\Omega} M^{\alpha'}_{\bm{\alpha'}}(D^mu_k) \psi dx=\infty.$$ For any integer $k$, we define $u_k: \Omega \rightarrow \mathbb{R}^N$ as $$u_k^i(x)= k^{-\rho} \sin (k x_i),~~1\leq i\leq r-1;~~~~ u_k^i(x)= 0,~~r< i\leq N$$ and $$u_k^r(x)= k^{-\rho} (x_r)^m \prod_{j=1}^{r-1} \sin (\frac{m\pi}{2}+k x_j),$$ where $\rho$ is a constant such that $s<\rho<m-\frac{m}{r}$. Since $\|D^{[s]+1}u_k\|_{L^{\infty}}\leq C k^{[s]+1-\rho}$ and $\|u_k\|_{L^{\infty}}\leq C k^{-\rho}$, it follows that $$\|u_k\|_{s,p} \leq C\|u_k\|^{1-\theta}_{L^p}\|u_k\|^{\theta}_{[s]+1,p}\leq C k^{s-\rho},$$ where $\theta=\frac{s}{[s]+1}$. Let $\psi\in C^{\infty}_c(\Omega)$ be such that $$\label{hm-th2-for-2} \psi(x)=\prod_{i=1}^N \psi'(x_i), ~\mbox{with}~\psi'\in C^1_c((0,\pi)), \psi'\geq 0 ~\mbox{and}~\psi'=1~ \mbox{in}~(\frac{1}{4}\pi,\frac{3}{4}\pi).$$ Then $$\int_{\Omega} M^{\alpha'}_{\bm{\alpha'}} (D^m u_k) \psi dx\geq m!\int_{(\frac{1}{4}\pi, \frac{3}{4}\pi)^N} k^{mr-\rho r-m} \prod_{j=1}^{r-1} \sin^2 (\frac{m\pi}{2}+kx_j) dx=Ck^{mr-\rho r-m}.$$ Since $\rho<m-\frac{m}{r}$, the exponent $mr-\rho r-m$ is positive, and hence the conclusion (\[hm-thm-for-1\]) holds. Next we establish the optimality results in the case $1<r<p$, $s=m-m/r$ non-integer, by constructing a lacunary sum of atoms, inspired by the work of Brezis and Nguyen [@BN]. \[hm-pro-42\] Let $m, r$ be integers with $1< r\leq \underline{n}$, $p>r$ and $s=m-m/r$ non-integer.
Then there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{m}(\overline{\Omega}, \mathbb{R}^N)$ and a function $\psi\in C_c^{\infty}(\Omega)$ satisfying the conditions (\[hm-thm-for-1\]). Fix $k\gg 1$. Define $v_k=(v_k^1,\cdots,v_k^N):\Omega\rightarrow \mathbb{R}^N$ as follows: $$v_k^i=\begin{cases} \sum_{l=1}^k \frac{1}{n_l^{s}(l+1)^{\frac{1}{r}}} \sin( n_l x_i), ~~~~1\leq i\leq r-1;\\ (x_r)^m\sum_{l=1}^k \frac{1}{n_l^{s}(l+1)^{\frac{1}{r}}} \prod_{j=1}^{r-1} \sin(\frac{m\pi}{2}+n_l x_j),~~~~i=r;\\ 0,~~~~~~~~r+1\leq i\leq N, \end{cases}$$ where $n_l=k^{\frac{r^2}{m}} 8^l$ for $1\leq l\leq k$. Let $\psi\in C^{\infty}_c(\Omega)$ be defined as in (\[hm-th2-for-2\]). We claim that $$\label{hm-thm-for-2} \|v_k\|_{s,p} \leq C,~~~~ \int_{\Omega} M^{\alpha'}_{\bm{\alpha'}}(D^mv_k) \psi dx \geq C \ln k,$$ where the constant $C$ is independent of $k$. Assuming the claim, the pair $u_k:= (\ln k)^{-\frac{1}{2r}} v_k$ and $\psi$ satisfies the conditions (\[hm-thm-for-1\]). Hence it remains to prove (\[hm-thm-for-2\]). On the one hand, $$\begin{split} M_{\bm{\alpha'}}^{\alpha'} (D^m v_k) &=\left\{ \prod_{i=1}^{r-1} \left( \sum_{l_i=1}^k \frac{n_{l_i}^{\frac{m}{r}}}{(l_i+1)^{\frac{1}{r}}} \sin(\frac{m\pi}{2}+n_{l_i} x_i)\right)\right\}\times \left(m! \sum_{l_r=1}^k \frac{1}{n_{l_r}^{s}(l_r+1)^{\frac{1}{r}}} \prod_{j=1}^{r-1} \sin(\frac{m\pi}{2}+n_{l_r} x_j) \right) \\ &=m! \sum_{(l_1,\cdot\cdot\cdot,l_r)\in G} \frac{1}{n_{l_r}^{s}(l_r+1)^{\frac{1}{r}}} \prod_{i=1}^{r-1} \left(\frac{n_{l_i}^{\frac{m}{r}}}{(l_i+1)^{\frac{1}{r}}} \sin(\frac{m\pi}{2}+n_{l_i} x_i) \sin(\frac{m\pi}{2}+n_{l_r} x_i) \right)\\ &+m!
\sum_{l=1}^k \frac{1}{l+1} \prod_{i=1}^{r-1} \sin^2(\frac{m\pi}{2}+n_l x_i), \end{split}$$ where $$G:=\{(l_1,\cdot\cdot\cdot,l_r)\mid (l_1,\cdot\cdot\cdot,l_r)\neq (l,\cdot\cdot\cdot,l) ~\mbox{for}~l,l_1,\cdot\cdot\cdot,l_r=1,\cdot\cdot\cdot,k\}.$$ Hence $$\label{hm-thm-for-6} \int_{\Omega} M^{\alpha'}_{\bm{\alpha'}}(D^mv_k) \psi dx\geq C \sum_{l=1}^k \frac{1}{l+1} \int_{(\frac{1}{4}\pi, \frac{3}{4}\pi)^N} \prod_{i=1}^{r-1} \sin^2(\frac{m\pi}{2}+n_l x_i) dx-CI,$$ where $$I:= \left| \int_{\Omega} \psi(x) \sum_{(l_1,\cdot\cdot\cdot,l_r)\in G} \frac{1}{n_{l_r}^{s}(l_r+1)^{\frac{1}{r}}} \prod_{i=1}^{r-1} \left(\frac{n_{l_i}^{\frac{m}{r}}}{(l_i+1)^{\frac{1}{r}}} \sin(\frac{m\pi}{2}+n_{l_i} x_i) \sin(\frac{m\pi}{2}+n_{l_r} x_i) \right) dx\right|.$$ Since $n_l=k^{\frac{r^2}{m}} 8^l$, it follows that $$\label{hm-thm-for-3} \frac{n_{l_i}}{n_{l_j}}\leq |n_{l_i}-n_{l_j}|~\mbox{for any}~l_i,l_j=1,\cdot\cdot\cdot,k~\mbox{with}~l_i\neq l_j,$$ $$\label{hm-thm-for-4} \min_{i\neq j} |n_{l_i}-n_{l_j}|\geq k^{\frac{r^2}{m(r-1)}}$$ and $$\label{hm-thm-for-5} \{n_l\mid l=1,\cdot\cdot\cdot,k\}\cap \{z\in \mathbb{R}\mid 2^{n-1}\leq |z|< 2^n\}~\mbox{has at most one element for any }~n\in \mathbb{N}.$$ For any $(l_1,\cdot\cdot\cdot,l_r)\in G$, there exists $1\leq i_0\leq r-1$ such that $l_{i_0}\neq l_r$; it follows from (\[hm-th2-for-2\]), (\[hm-thm-for-3\]) and (\[hm-thm-for-4\]) that $$\begin{split} &\left|\frac{1}{n_{l_r}^{s}(l_r+1)^{\frac{1}{r}}} \int_{\Omega} \psi(x) \prod_{i=1}^{r-1} \left(\frac{n_{l_i}^{\frac{m}{r}}}{(l_i+1)^{\frac{1}{r}}} \sin(\frac{m\pi}{2}+n_{l_i} x_i) \sin(\frac{m\pi}{2}+n_{l_r} x_i) \right) dx\right|\\ &\leq \frac{C}{n_{l_r}^{s}(l_r+1)^{\frac{1}{r}}} \prod_{i=1}^{r-1} \frac{n_{l_i}^{\frac{m}{r}}}{(l_i+1)^{\frac{1}{r}}} \left|\int_0^{\pi} \psi'(x_i) \sin(\frac{m\pi}{2}+n_{l_i} x_i) \sin(\frac{m\pi}{2}+n_{l_r} x_i) dx_i \right|\\ &\leq C \prod_{i=1}^{r-1} \left(\frac{n_{l_i}}{n_{l_r}}\right)^{\frac{m}{r}} \min \{\frac{1}{|n_{l_i}-n_{l_r}|^m},1\}
\|D^m\psi\|_{L^{\infty}}\\ &\leq \frac{C}{|n_{l_{i_0}}-n_{l_r}|^{m-\frac{m}{r}}}\\ &\leq C k^{-r}. \end{split}$$ Combining this with (\[hm-thm-for-6\]), we find $$\int_{\Omega} M^{\alpha'}_{\bm{\alpha'}}(D^mv_k) \psi dx\geq C \sum_{l=1}^k \frac{1}{l+1} -C,$$ which implies the second inequality of (\[hm-thm-for-2\]). On the other hand, in order to prove the first inequality of (\[hm-thm-for-2\]), it is enough to show that $$\label{hm-thm-for-10} \|v'_k\|_{s,p} \leq C,$$ where $v'_k:=(v^1_k,v^2_k,\cdots,v^{r-1}_k,\frac{v^r_k}{(x_r)^m})$. In fact, the Littlewood-Paley characterization of the Besov space $B^s_{p,p}([0,2\pi]^N)$ (e.g. [@TH]) implies that $$\label{hm-thm-for-11} \|v'_k\|_{s,p}\leq C \left(\|v'_k\|^p_{L^p([0,2\pi]^N)}+\sum_{j=1}^{\infty} 2^{sjp}\|T_j(v'_k)\|^p_{L^p([0,2\pi]^N)}\right)^{\frac{1}{p}}.$$ Here the bounded operators $T_j:L^p\rightarrow L^p$ are defined by $$T_j\left(\sum a_n e^{in\cdot x}\right)=\sum_{2^j\leq |n|< 2^{j+1}} \left( \rho(\frac{|n|}{2^{j+1}})- \rho(\frac{|n|}{2^{j}})\right)a_ne^{in\cdot x},$$ where $\rho\in C_c^{\infty}(\mathbb{R})$ is a suitably chosen bump function. Then we have $$\label{hm-thm-for-12} \|T_j(v'_k)\|^p_{L^p([0,2\pi]^N)}\leq C_p \sum_{l=1}^k \frac{1}{n_l^{sp}(l+1)^{\frac{p}{r}}} \|T_j(g_{l,k})\|^p_{L^p([0,2\pi]^N)},$$ where $g_{l,k}=(\sin(n_l x_1),\cdots,\sin(n_l x_{r-1}),\prod_{j=1}^{r-1} \sin(\frac{m\pi}{2}+n_l x_j))$. Indeed, since $\sin(n_l x_i)=\frac{1}{2i}(e^{in_lx_i}-e^{-in_lx_i})$, $g_{l,k}$ can be written as $$g_{l,k}(x)= \sum_{\varepsilon\in \{-1,0,1\}^{r-1}} a_{\varepsilon} e^{n_l i \varepsilon\cdot \widehat{x}},$$ where $\widehat{x}=(x_1,\cdots,x_{r-1})$ and $|a_{\varepsilon}|\leq 1$ for any $\varepsilon$.
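As a quick sanity check, the separation properties (\[hm-thm-for-3\])–(\[hm-thm-for-5\]) of the lacunary frequencies $n_l=k^{\frac{r^2}{m}}8^l$, and the fact that the frequencies $n_l|\varepsilon|$ appearing in $g_{l,k}$ meet each dyadic band $[2^{j-1},2^{j+2})$ only boundedly often, can be verified directly in exact integer arithmetic. A sketch for the sample choice $r=m=3$ (the uniform bounds $2$ and $4$ below are specific to these parameters, not the general constants of the proof):

```python
from itertools import combinations

for k in (3, 5, 9):
    r, m = 3, 3                          # sample parameters with r^2 divisible by m
    n = [k ** (r * r // m) * 8 ** l for l in range(1, k + 1)]

    # separation: ratio bounded by gap, and a minimal gap k^{r^2/(m(r-1))}
    for a, b in combinations(n, 2):
        assert max(a, b) / min(a, b) <= abs(a - b)
    assert min(b - a for a, b in zip(n, n[1:])) >= k ** (r * r / (m * (r - 1)))

    # each dyadic block [2^(j-1), 2^j) contains at most one n_l
    assert len({x.bit_length() for x in n}) == len(n)

    # chi(j, l) = 1 iff n_l |eps| lands in [2^(j-1), 2^(j+2)) for some
    # eps in {-1,0,1}^(r-1) with |eps|^2 = t in {1, ..., r-1}; compare
    # squares so the test stays exact
    def chi(j, l):
        return any(4 ** (j - 1) <= n[l] ** 2 * t < 4 ** (j + 2) for t in range(1, r))

    J = range(1, max(n).bit_length() + 4)
    assert all(sum(chi(j, l) for l in range(k)) <= 2 for j in J)   # per dyadic scale
    assert all(sum(chi(j, l) for j in J) <= 4 for l in range(k))   # per frequency
```

The point of the last two assertions is uniformity in $k$: the number of overlaps does not grow as $k$ increases, which is exactly what makes the Littlewood–Paley sums above summable.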
Set $$S(j,l)=\{\varepsilon\in \{-1,0,1\}^{r-1}\mid 2^{j-1}\leq n_l |\varepsilon|<2^{j+2}\}$$ and $$\chi(j,l)=\begin{cases} 1,~~~~~S(j,l)\neq \emptyset,\\ 0,~~~~~S(j,l)= \emptyset. \end{cases}$$ Hence $$\label{hm-thm-for-13} \|T_j(g_{l,k})\|^p_{L^p([0,2\pi]^N)}\leq C_{r,N} \chi(j,l).$$ For any $j$, if $S(j,l)\neq \emptyset$, then $\frac{2^{j-1}}{\sqrt{r-1}}\leq n_l< 2^{j+2}$, which implies that $\sum_{l=1}^{k}\chi(j,l)<[\frac{\log_2(r-1)}{6}]+1$. Thus, applying (\[hm-thm-for-11\]), (\[hm-thm-for-12\]) and (\[hm-thm-for-13\]), we have $$\begin{split} \|v'_k\|_{s,p}^p&\leq C_{p,s,N,r}\left( \|v'_k\|^p_{L^p([0,2\pi]^N)}+\sum_{j=1}^{\infty} \sum_{l=1}^k\frac{2^{sjp}}{n_l^{sp}(l+1)^{\frac{p}{r}}}\chi(j,l)\right)\\ &\leq C_{p,s,N,r}\left( \|v'_k\|^p_{L^p([0,2\pi]^N)}+ \sum_{l=1}^k\frac{1}{(l+1)^{\frac{p}{r}}} \left(\sum_{j=1}^{\infty}\chi(j,l)\right)\right), \end{split}$$ which implies (\[hm-thm-for-10\]) since $\sum_{j=1}^{\infty}\chi(j,l)\leq [\frac{\log_2(r-1)}{2}]+4$ for any $l$. Clearly Theorem \[hm-thm-2\] is a consequence of Propositions \[hm-pro-41\] and \[hm-pro-42\], as explained in Remark \[hm-rem-41\]. Next we turn to the optimality results in the case $1<p\leq r$, $s+\frac{m}{r}<m+\frac{N}{p}-\frac{N}{r}$. \[hm-pro-43\] Let $m, r$ be integers with $1<p\leq r\leq \underline{n}$ and $s+\frac{m}{r}<m+\frac{N}{p}-\frac{N}{r}$. Suppose that there exist a function $g\in C_c^{\infty}(B(0,1), \mathbb{R}^n)$, $\beta\in I(r,n)$ and $\bm{\alpha}=(\alpha^1,\alpha^2,\cdots,\alpha^m)$ with $\alpha^j \in I(r,N)$ such that $$\label{hm-thm-for-15} \int_{B(0,1)} M_{\bm{\alpha}}^{\beta} (D^m g(x)) |x|^m dx \neq 0.$$ Then there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{m}(\overline{\Omega}, \mathbb{R}^N)$ and a function $\psi\in C_c^{\infty}(\Omega)$ satisfying the conclusions of (\[hm-thm-2\]). For any $0<\varepsilon\ll 1$ we set $$u_{\varepsilon}=\varepsilon^\rho g(\frac{x}{\varepsilon}),$$ where $\rho$ is a constant such that $s-\frac{N}{p}<\rho<m-\frac{N}{r}-\frac{m}{r}$.
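The norm estimates for $u_\varepsilon$ rest on the scaling of Lebesgue norms under dilation, $\|\varepsilon^\rho g(\cdot/\varepsilon)\|_{L^p}=\varepsilon^{\rho+\frac{N}{p}}\|g\|_{L^p}$. A small numerical check of this identity (profile, exponents and grid below are illustrative assumptions, with $N=2$):

```python
import numpy as np

# for u_eps(x) = eps^rho g(x/eps) on R^N:
#     ||u_eps||_{L^p} = eps^(rho + N/p) ||g||_{L^p}
N_dim, p_exp, rho = 2, 3.0, 0.7
L, npts = 1.5, 1201                      # grid on the square [-L, L]^2
x = np.linspace(-L, L, npts)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2                  # area element of the grid

def g(X, Y):
    # smooth profile, negligibly small at the boundary of the box
    return np.exp(-8.0 * (X ** 2 + Y ** 2))

def lp_norm(f):
    return ((np.abs(f) ** p_exp).sum() * dA) ** (1.0 / p_exp)

base = lp_norm(g(X, Y))
for eps in (0.5, 0.25):
    u = eps ** rho * g(X / eps, Y / eps)
    assert abs(lp_norm(u) / base - eps ** (rho + N_dim / p_exp)) < 1e-6
```

The same change of variables produces the factor $\varepsilon^{\rho r-rm+N}$ in the minor integrals below, since each of the $r$ entries of the $m$-th order minor scales like $\varepsilon^{\rho-m}$.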
On the one hand, Lemma \[hm-lem-4\] implies that $$\|u_{\varepsilon}\|_{s,p}\leq C\|u_{\varepsilon}\|^{\theta}_{L^p}\|u_{\varepsilon}\|^{1-\theta}_{[s]+1,p}\leq C\varepsilon^{\rho+\frac{N}{p}-s}\|g\|^{\theta}_{L^p}\|D^{[s]+1}g\|_{L^p}^{1-\theta},$$ where $\theta=\frac{[s]+1-s}{[s]+1}$. On the other hand, let $\psi\in C^{\infty}_c(\Omega)$ be such that $\psi(x)= |x|^m+ O(|x|^{m+1})$ as $x\rightarrow 0$. Then $$\begin{split} &\int_{\Omega} M_{\bm{\alpha}}^{\beta} (D^m u_{\varepsilon}) \psi dx=\varepsilon^{\rho r-rm+N} \int_{B(0,1)} M_{\bm{\alpha}}^{\beta}(D^m g(x)) \psi(\varepsilon x) dx\\ &=\varepsilon^{\rho r -rm+N+m} \int_{B(0,1)} M_{\bm{\alpha}}^{\beta}(D^mg(x)) |x|^m dx +O( \varepsilon^{\rho r-rm+N+m+1}). \end{split}$$ Taking $\varepsilon = \frac{1}{k}$, the conclusion follows. In order to establish the optimality results in the case $1<p\leq r$, $s+\frac{m}{r}<m+\frac{N}{p}-\frac{N}{r}$, a natural question is whether there exists $g\in C_c^{\infty}(B(0,1), \mathbb{R}^N)$ such that the conclusion (\[hm-thm-for-15\]) holds. We have positive answers in the cases $m=1$ and $m=2$ (see Theorem \[hm-thm-3\]), based on the following lemma: \[hm-lem-3\] Let $g\in C_c^{\infty}(B(0,1))$ be given by $$\label{hm-lem-for-1} g(x)=\int_0^{|x|} h(\rho) d\rho$$ for any $x\in \mathbb{R}^N$, where $h\in C_c^{\infty}((0,1))$ satisfies $$\int_0^1 h(\rho)d\rho=0,~~~~\int_0^1h^{r}(\rho)\rho^{-r+N+s-1} d\rho\neq 0.$$ Here $r\geq 2$, $s\geq 1$ are integers.
Then for any $\alpha \in I(r,N)$, we have $$\label{hm-lem-for-2} \int_{B(0,1)} M_{\alpha}^{\alpha}( D^2 g(x)) |x|^s dx\neq 0.$$ It is easy to see that $$D^2 g=\frac{1}{|x|^3}(A+B),$$ where $A=(a_{ij})_{N\times N}$ and $B=(b_{ij})_{N\times N}$ are $N\times N$ matrices such that $$a_{ij}=h(|x|)|x|^2\delta_{i}^{j},~~b_{ij}=\left(h'(|x|)|x|-h(|x|)\right)x_ix_j,~~~~i,j=1,\ldots,N.$$ Using the Binet formula and the fact that $\mbox{rank}(B)=1$, one has $$\begin{aligned} M_{\alpha}^{\alpha}(A+B)&=M_{\alpha}^{\alpha}(A)+\sum_{i\in \alpha}\sum_{j\in\alpha}\sigma(i,\alpha-i)\sigma(j,\alpha-j)b_{ij}M_{\alpha-i}^{\alpha-j}(A)\\ &=h^r(|x|)|x|^{2r} - h^r(|x|)|x|^{2r-2}\sum_{i\in\alpha} x_i^2+h^{r-1}(|x|)h'(|x|)|x|^{2r-1} \sum_{i\in\alpha} x_i^2.\end{aligned}$$ Hence $$\begin{aligned} \int_{B(0,1)}M_{\alpha}^{\alpha}( D^2 g)|x|^s dx=\int_{B(0,1)} |x|^{-3r+s}M_{\alpha}^{\alpha}(A+B) dx={\setcounter{RomanNumber}{1}\Roman{RomanNumber}}-{\setcounter{RomanNumber}{2}\Roman{RomanNumber}}+{\setcounter{RomanNumber}{3}\Roman{RomanNumber}},\end{aligned}$$ where $${\setcounter{RomanNumber}{1}\Roman{RomanNumber}}:=\int_{B(0,1)} h^r(|x|) |x|^{-r+s} dx,$$ $${\setcounter{RomanNumber}{2}\Roman{RomanNumber}}:=\int_{B(0,1)} h^r(|x|) |x|^{-r-2+s} \sum_{i\in \alpha} x_i^2dx,$$ and $${\setcounter{RomanNumber}{3}\Roman{RomanNumber}}:=\int_{B(0,1)} h^{r-1}(|x|)h'(|x|)|x|^{-r-1+s} \sum_{i\in \alpha} x_i^2dx.$$ Then integration in polar coordinates gives $${\setcounter{RomanNumber}{3}\Roman{RomanNumber}}=\frac{r-N-s}{N} 2\pi\prod_{i=1}^{N-2}I(i) \int_0^1 h^r(\rho) \rho^{-r+N+s-1} d\rho,$$ where $I(i)=\int_0^{\pi} \sin^i \theta d\theta$. Similarly, $${\setcounter{RomanNumber}{2}\Roman{RomanNumber}}=\frac{r}{N} 2\pi\prod_{i=1}^{N-2}I(i) \int_0^1 h^r(\rho) \rho^{-r+N+s-1} d\rho,$$ and $${\setcounter{RomanNumber}{1}\Roman{RomanNumber}}= 2\pi\prod_{i=1}^{N-2}I(i) \int_0^1 h^r(\rho) \rho^{-r+N+s-1} d\rho,$$ which implies (\[hm-lem-for-2\]), completing the proof.
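The radial computations above can be cross-checked numerically: the integration by parts behind the evaluation of III, and the fact that the combination $\mathrm{I}-\mathrm{II}+\mathrm{III}$ collapses to $-\frac{s}{N}\,|S^{N-1}|\int_0^1 h^r(\rho)\rho^{-r+N+s-1}d\rho\neq 0$. A sketch with an assumed profile $h$ and assumed sample parameters:

```python
import numpy as np
from math import pi, gamma

# assumed sample parameters: dimension N, minor size r, weight s (r >= 2, s >= 1)
N, r, s = 4, 3, 2
a = N + s - r

# radial profile h in C_c^infty((0,1)) with \int_0^1 h = 0: a smooth window
# times a sine that is antisymmetric about t = 1/2
t = np.linspace(0.0, 1.0, 400_001)
win = np.where((t > 0.1) & (t < 0.9),
               np.exp(-0.01 / np.maximum((t - 0.1) * (0.9 - t), 1e-12)), 0.0)
h = win * np.sin(2.0 * pi * (t - 0.1) / 0.8)
dh = np.gradient(h, t)

def integ(f):
    return f.sum() * (t[1] - t[0])       # h vanishes near both endpoints

assert abs(integ(h)) < 1e-8                          # \int_0^1 h = 0
base = integ(h ** r * t ** (a - 1))                  # \int_0^1 h^r rho^{N+s-r-1} drho
assert abs(base) > 1e-4                              # hypothesis of the lemma holds

# integration by parts behind III:
#     \int_0^1 h^{r-1} h' rho^a drho = -(a/r) \int_0^1 h^r rho^{a-1} drho
lhs = integ(h ** (r - 1) * dh * t ** a)
assert abs(lhs + (a / r) * base) < 1e-6

# I - II + III = (1 - r/N + (r-N-s)/N) c * base = -(s/N) c * base, c = |S^{N-1}|
c = 2.0 * pi ** (N / 2) / gamma(N / 2)
total = (1.0 - r / N + (r - N - s) / N) * c * base
assert abs(total + (s / N) * c * base) < 1e-12 and abs(total) > 1e-3
```

Since $s\geq 1$, the prefactor $-s/N$ never vanishes, so the nonvanishing of (\[hm-lem-for-2\]) reduces entirely to the second hypothesis on $h$.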
Note that if $m=2$ and $g= (g',\cdots,g')$ with $g'\in C^{2}(\Omega)$, then Lemma \[hm-lem-2-2\] implies $$M_{\bm{\alpha}}^{\alpha}(D^2g)=r! M_{\alpha^1}^{\alpha^2}(D^2g')$$ for any $\bm{\alpha}=(\alpha^1,\alpha^2)$, $\alpha\in I(r,N)$. Hence Theorem \[hm-thm-3\] is a consequence of Propositions \[hm-pro-41\], \[hm-pro-42\], \[hm-pro-43\] and Lemma \[hm-lem-3\]. In particular, we can give a reinforced version of the optimality results in the case $m=2$. \[hm-thm-4-14\] Let $1< r\leq N$, $1<p<\infty$ and $0<s<\infty$ be such that $W^{s,p}(\Omega) \nsubseteq W^{2-\frac{2}{r},r}(\Omega)$. Then there exist a sequence $\{u_k\}_{k=1}^{\infty} \subset C^{2}(\overline{\Omega})$ and a function $\psi\in C_c^{\infty}(\Omega)$ such that $$\lim_{k\rightarrow \infty} \|u_k\|_{s,p} =0, ~~~~\lim_{k\rightarrow \infty} \int_{\Omega} M^{\alpha'}_{\alpha'}(D^2u_k) \psi dx=\infty.$$ We divide the proof into three cases: **Case 1:** $1<p\leq r$ and $s+\frac{2}{r}<2+\frac{N}{p}-\frac{N}{r}$. Apply Lemma \[hm-lem-3\] and an argument similar to the one used in Proposition \[hm-pro-43\]. **Case 2:** $r<p$ and $0<s<2-\frac{2}{r}$. For $k\gg 1$, we set $$u_k:=k^{-\rho} x_{r}\prod_{i=1}^{r-1} \sin^2(kx_i),$$ where $\rho$ is a constant with $s<\rho<2-\frac{2}{r}$. From the facts that $\|u_k\|_{L^{\infty}}\leq C k^{-\rho}$ and $\|D^2u_k\|_{L^{\infty}}\leq C k^{2-\rho}$, it follows that $$\|u_k\|_{s,p}\leq C \|u_k\|_{L^p}^{1-\frac{s}{2}} \|u_k\|_{2,p}^{\frac{s}{2}}\leq C k^{s-\rho}.$$ On the other hand, let $\psi\in C^{\infty}_c(\Omega)$ be defined as in (\[hm-th2-for-2\]); then inequality (4.1) in [@BJ Proposition 4.1] implies that $$\begin{split} &\left |\int_{\Omega} M_{\alpha'}^{\alpha'} (D^2 u_k) \psi dx\right|\geq \left| \int_{(\frac{1}{4}\pi, \frac{3}{4}\pi)^N} M_{\alpha'}^{\alpha'}(D^2u_{k})dx\right|\\ &\geq k^{2r-2-r\rho} 2^r \int_{(\frac{1}{4}\pi, \frac{3}{4}\pi)^N} x_r^{r-2} \left(\prod_{i=1}^{r-1} \sin(kx_i)\right)^{2r-2} \left(\sum_{j=1}^{r-1} \cos^2(kx_j)\right)dx\\ &=Ck^{2r-2-r\rho}.
\end{split}$$ **Case 3:** $2<r<p$ and $s=2-\frac{2}{r}$. For any $k\in \mathbb{N}$ with $k\geq 2$, define $u_k$ by $$u_k(x)=\frac{1}{(\ln k)^{\frac{1}{2r}}} x_r \sum_{l=1}^k \frac{1}{n_l^{2-\frac{2}{r}} l^{\frac{1}{r}}} \prod_{i=1}^{r-1} \sin^2 (n_l x_i),~~~~x\in \mathbb{R}^N,$$ where $n_l=k^{r^{3l}}$. Let $\psi\in C^{\infty}_c(\Omega)$ be defined as in (\[hm-th2-for-2\]). An argument similar to the one used in [@BJ Proposition 5.1] shows that $$\|u_k\|_{W^{s,p}(\Omega)}\leq C \|u_k\|_{W^{s,p}((0,2\pi)^N)}\leq C \frac{1}{(\ln k)^{\frac{1}{2r}}}$$ and $$\left| \int_{\Omega} M_{\alpha'}^{\alpha'}(D^2u_{k}) \psi dx\right|=C \left| \int_{(0,2\pi)^{r}} M_{\alpha'}^{\alpha'}(D^2u_{k}) \prod_{i=1}^r \psi'(x_i) dx_1\cdots dx_r\right| \geq C (\ln k)^{\frac{1}{2}}.$$ Acknowledgments {#acknowledgments .unnumbered} =============== This work is supported by NSFC grants (No. 11131005 and No. 11301400) and the Hubei Key Laboratory of Applied Mathematics (Hubei University). [1]{} E. Baer and D. Jerison, *Optimal function spaces for continuity of the Hessian determinant as a distribution*, J. Funct. Anal., **269** (2015), 1482-1514. J. Ball, *Convexity conditions and existence theorems in nonlinear elasticity*, Arch. Ration. Mech. Anal., **63** (1977), 337-403. H. Brezis and P. Mironescu, *Gagliardo-Nirenberg, composition and products in fractional Sobolev spaces,* J. Evol. Equ., **4** (2001), 387-404. H. Brezis and P. Mironescu, *Gagliardo-Nirenberg inequalities and non-inequalities: The full story,* Ann. Inst. H. Poincaré Anal. Non Linéaire, **35** (2018), 1355-1376. H. Brezis and H. Nguyen, *The Jacobian determinant revisited,* Invent. Math., **185** (2011), 17-54. L. D’Onofrio, F. Giannetti and L. Greco, *On weak Hessian determinants*, Rend. Mat. Acc. Lincei (9), **16** (2005), 159-169. B. Dacorogna and F. Murat, *On the optimality of certain Sobolev exponents for the weak continuity of determinants*, J. Funct. Anal., **105** (1992), 42-62. G.
Escherich, *Die Determinanten höheren Ranges und ihre Verwendung zur Bildung von Invarianten*, Denkschr. Kais. Akad. Wiss., **43** (1882), 1-12. I. Fonseca and J. Malý, *From Jacobian to Hessian: distributional form and relaxation*, Riv. Mat. Univ. Parma, **7** (2005), 45-74. E. Gagliardo, *Caratterizzazione delle tracce sulla frontiera relative ad alcune classi di funzioni in n variabili*, Rend. Semin. Mat. Univ. Padova, **27** (1957), 284-305. L. Gegenbauer, *Über Determinanten höheren Ranges*, Denkschr. Kais. Akad. Wiss., **43** (1882), 17-32. M. Giaquinta, G. Modica and J. Souček, *Cartesian currents in the calculus of variations, I, II,* Springer-Verlag, Berlin, 1998. T. Iwaniec, *On the concept of the weak Jacobian and Hessian*, Papers on analysis, Rep. Univ. Jyväskylä Dep. Math. Stat., **83** (2001), 181-205, Univ. Jyväskylä, Jyväskylä. C. Morrey, *Multiple Integrals in the Calculus of Variations. Die Grundlehren der mathematischen Wissenschaften*, vol. 130, Springer, New York (1966). Y. Reshetnyak, *The weak convergence of completely additive vector-valued set functions*, Sib. Mat. Zh., **9** (1968), 1386-1394. P. Olver, *Hyper-Jacobians, determinantal ideals and weak solutions to variational problems*, Proc. Roy. Soc. Edinburgh Sect. A, **3-4** (1983), 317-340. E. Stein, *The characterization of functions arising as potentials I*, Bull. Amer. Math. Soc., **67** (1961), 102-104. E. Stein, *The characterization of functions arising as potentials II*, Bull. Amer. Math. Soc., **68** (1962), 577-582. H. Triebel, *Theory of Function Spaces*, Monogr. Math., **78**, Birkhäuser Verlag, Basel, 1983. [^1]: *Email addresses*: qiangtu@whu.edu.cn (Qiang Tu), cxwu@hubu.edu.cn (Chuanxi Wu), qiuxueting1996@163.com (Xueting Qiu).
--- bibliography: - 'mips.bib' title: 'Supplement to “A Bandit Approach to Maximum Inner Product Search"' ---
CPHT-RR-112.1203\ DESY-03-200\ hep-ph/0312125 **Probing the partonic structure of pentaquarks\ in hard electroproduction**\ [M. Diehl$^{a}$, B. Pire$^b$, L. Szymanowski$^c$]{}\ ${}^a$Deutsches Elektronen-Synchrotron DESY, 22603 Hamburg, Germany\ ${}^b$CPHT, [É]{}cole Polytechnique, 91128 Palaiseau, France\ ${}^c$Soltan Institute for Nuclear Studies, Hoża 69, 00-681 Warsaw, Poland\ **Abstract**\ Introduction ============ There is increasing experimental evidence [@Nakano:2003qx; @Barth:2003es] for the existence of a narrow baryon resonance $\Theta^+$ with strangeness $S=+1$, whose minimal quark content is $uudd\bar{s}$. Triggered by the prediction of its mass and width in [@Diakonov:1997mm], the observation of this hadron promises to shed new light on our picture of baryons in QCD, with theoretical approaches as different as the soliton picture [@Diakonov:1997mm; @Praszalowicz:2003ik], quark models [@Karliner:2003sy], and lattice calculations [@Csikor:2003ng], to cite only a fraction of the literature. A fundamental question is how the structure of baryons manifests itself in terms of the basic degrees of freedom in QCD, at the level of partons. This structure at short distances can be probed in hard exclusive scattering processes, where it is encoded in generalized parton distributions [@Muller:1994fv] (see [@Goeke:2001tz; @Diehl:2003ny] for recent reviews). In this letter we introduce the transition GPDs from the nucleon to the $\Theta^+$ and investigate electroproduction processes where they could be measured, hopefully already in existing experiments at DESY and Jefferson Lab. In the next section we give some basics of the processes we propose to study. We then define the generalized parton distributions for the $N\to \Theta$ transition and discuss their physics content (throughout this paper we write $N$ for the nucleon and $\Theta$ for the $\Theta^+$).
The scattering amplitudes and cross sections for different production channels are given in Section \[sec:scatter\]. In Section \[sec:pole\] we evaluate the contribution from kaon exchange in the $t$-channel to the processes under study. Concluding remarks are given in Section \[sec:concl\]. Processes {#sec:channels} ========= We consider the electroproduction processes $$\label{proc-p} e p\to e \bar{K}^0 \, \Theta , \qquad \qquad e p\to e \bar{K}^{*0} \, \Theta ,$$ where the $\Theta$ subsequently decays into $K^0 p$ or $K^+ n$. Note that the decay $\bar{K}^{*0} \to K^- \pi^+$ of the ${K}^{*}(892)$ tags the strangeness of the produced baryon. In contrast, the observation of a $\bar{K}^0$ as $K_S$ or $K_L$ includes a background from final states with a $K^0$ and an excited $\Sigma^+$ state in the mass region of the $\Theta$, unless the strangeness of the baryon is tagged by the kaon in the decay mode $\Theta\to K^+ n$. Apart from their different experimental aspects the channels with $\bar{K}$ or $\bar{K}^*$ production are quite distinct in their dynamics, as we will see in Section \[sec:scatter\]. We will also investigate the channels $$\label{proc-n} e n\to e {K}^- \, \Theta , \qquad \qquad e n\to e {K}^{*-} \, \Theta$$ accessible in scattering on nuclear targets. The reconstruction of the final state and of its kinematics is more involved in this case because of the spectator nucleons in the target, but we will see in Section \[sec:scatter\] that comparison of the processes (\[proc-p\]) and (\[proc-n\]) may give valuable clues on the dynamics. We remark that the crossed process $K^+ n\to e^+e^-\, \Theta$ could be analyzed along the lines of [@Berger:2001zn] at an intense kaon beam facility. The kinematics of the $\gamma^* p$ or $\gamma^* n$ subprocess is specified by the invariants $$\label{invariants} Q^2 = - q^2 , \qquad W^2 = (p+q)^2 , \qquad t = (p-p')^2 ,$$ with four-momenta as given in Fig. \[fig:meson\]. 
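The invariants in (\[invariants\]) and the Bjorken variable $x_B = Q^2/(2pq)$ are plain contractions of four-momenta; a minimal numerical sketch with metric $(+,-,-,-)$ (the sample kinematics are illustrative):

```python
import numpy as np

def minkowski(v, w):
    # metric (+, -, -, -), four-vectors stored as (E, px, py, pz)
    return v[0] * w[0] - np.dot(v[1:], w[1:])

def invariants(p, q, p_prime):
    # Q^2 = -q^2,  W^2 = (p + q)^2,  t = (p - p')^2,  x_B = Q^2 / (2 p.q)
    Q2 = -minkowski(q, q)
    W2 = minkowski(p + q, p + q)
    tval = minkowski(p - p_prime, p - p_prime)
    xB = Q2 / (2.0 * minkowski(p, q))
    return Q2, W2, tval, xB

# sample kinematics (GeV units): target at rest, virtual photon along z
mN, Q2_in, nu = 0.938, 4.0, 6.0
p = np.array([mN, 0.0, 0.0, 0.0])
q = np.array([nu, 0.0, 0.0, np.sqrt(nu ** 2 + Q2_in)])

Q2, W2, tval, xB = invariants(p, q, p)   # p' = p is the (degenerate) t = 0 point
assert abs(Q2 - Q2_in) < 1e-12 and abs(tval) < 1e-12
assert abs(W2 - (mN ** 2 + 2.0 * mN * nu - Q2_in)) < 1e-12
assert abs(xB - Q2_in / (2.0 * mN * nu)) < 1e-12
```

In the target rest frame this reproduces the familiar relations $W^2 = m_N^2 + 2m_N\nu - Q^2$ and $x_B = Q^2/(2m_N\nu)$.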
We are interested in the Bjorken limit of large $Q^2$ at fixed $t$ and fixed scaling variable $x_B = Q^2 /(2 pq)$. According to the factorization theorem for meson production [@Collins:1997fb], the Bjorken limit implies factorization of the $\gamma^* p$ amplitude into a perturbatively calculable subprocess at quark level, the distribution amplitude (DA) of the produced meson, and a generalized parton distribution (GPD) describing the transition from $p$ to $\Theta$ (see Fig. \[fig:meson\]). The dominant polarization of the photon and (if applicable) the produced meson is then longitudinal, and the corresponding $\gamma^* p$ cross section scales like $d\sigma_L /(dt) \sim Q^{-6}$ at fixed $x_B$ and $t$, up to logarithmic corrections in $Q^2$ due to perturbative evolution. We remark that pentaquarks with strangeness $S=-2$, like the $\Xi^{--}$ recently reported in [@Alt:2003vb], cannot be produced from the nucleon by this leading-twist mechanism. We also note that if the $\Theta$ had isospin $I=2$ as proposed in [@Capstick:2003iq] (but not favored by the experimental analyses in [@Barth:2003es]), leading-twist electroproduction would be isospin violating and hence tiny. The Bjorken limit implies a large invariant mass $W$ of the hadronic final state, so that the produced baryon and meson are well separated in phase space. This provides a clean environment to study the $\Theta$ resonance, with a low background obtained of course at the price of a lower cross section than for inclusive production. Large enough $W$ in particular drives one away from kinematic reflections which could fake a $\Theta$ resonance signal, discussed in [@Dzierba:2003cm] for the process at hand. To illustrate this we show in Fig. \[fig:mass\] the smallest kinematically possible invariant masses of the $\bar{K}^0 K^0$ and of the $\bar{K}^0 p$ system in $e p\to \bar{K}^0 K^0 p$ with the $K^0 p$ invariant mass fixed at ${m_\Theta}$. 
Here and in the following we take ${m_\Theta}= 1540 {\,{\rm MeV}}$ in numerical evaluations (our results do not change significantly if we take ${m_\Theta}= 1525 {\,{\rm MeV}}$ or ${m_\Theta}= 1555 {\,{\rm MeV}}$ instead). We also remark that strong interaction effects (in particular resonance effects) between the $\bar{K}^0$ and the $K^0 p$ system will have a faster power falloff than $Q^{-6}$ in the $\gamma^* p$ cross section at fixed $x_B$, provided $Q^2$ is large enough for the analysis of the factorization theorem to apply. The transition GPDs and their physics {#sec:gpds} ===================================== Let us take a closer look at the transition GPDs that occur in the processes we are interested in. For their definition we introduce light-cone coordinates $v^\pm = (v^0 \pm v^3) /\sqrt{2}$ and transverse components $v_T = (v^1, v^2)$ for any four-vector $v$. The skewness variable $\xi = (p-p')^+ /(p+p')^+$ describes the loss of plus-momentum of the incident nucleon and is connected with $x_B$ by $$\label{xi-vs-xB} \xi \approx \frac{x_B}{2-x_B}$$ in the Bjorken limit. In the following we assume that the $\Theta$ has spin $J={\textstyle \frac{1}{2}}$ and isospin $I=0$. Different theoretical approaches predict either ${\eta_\Theta}= 1$ or ${\eta_\Theta}= -1$ for the intrinsic parity of the $\Theta$, and we will give our discussion for the two cases in parallel.
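The light-cone decomposition and the skewness relation (\[xi-vs-xB\]) can be checked in a few lines; note that $\xi = x_B/(2-x_B)$ is algebraically equivalent to the outgoing baryon carrying the plus-momentum fraction $1-x_B$ of the proton (a sketch; the numbers are illustrative):

```python
import numpy as np

def lc_plus(v):
    # v^+ = (v^0 + v^3) / sqrt(2)
    return (v[0] + v[3]) / np.sqrt(2.0)

def lc_minus(v):
    # v^- = (v^0 - v^3) / sqrt(2)
    return (v[0] - v[3]) / np.sqrt(2.0)

# scalar products decompose as v.w = v^+ w^- + v^- w^+ - v_T . w_T
rng = np.random.default_rng(1)
v, w = rng.normal(size=4), rng.normal(size=4)
lhs = v[0] * w[0] - np.dot(v[1:], w[1:])
rhs = lc_plus(v) * lc_minus(w) + lc_minus(v) * lc_plus(w) - np.dot(v[1:3], w[1:3])
assert abs(lhs - rhs) < 1e-12

# skewness: if the outgoing baryon has plus-momentum fraction 1 - x_B, then
# xi = (p - p')^+ / (p + p')^+ equals x_B / (2 - x_B) exactly
xB, p_plus = 0.3, 10.0
pp_plus = (1.0 - xB) * p_plus
xi = (p_plus - pp_plus) / (p_plus + pp_plus)
assert abs(xi - xB / (2.0 - xB)) < 1e-12
```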
The hadronic matrix elements that occur in the electroproduction processes (\[proc-p\]) at leading-twist accuracy are $$\begin{aligned} \label{matrix-elements} F_V &=& \frac{1}{2} \int \frac{d z^-}{2\pi}\, e^{ix P^+ z^-} \langle \Theta|\, \bar{d}(-{\textstyle \frac{1}{2}}z)\, \gamma^+ s({\textstyle \frac{1}{2}}z) \,|p \rangle \Big|_{z^+=0,\, {z}_T=0} \; , \nonumber \\ F_A &=& \frac{1}{2} \int \frac{d z^-}{2\pi}\, e^{ix P^+ z^-} \langle \Theta|\, \bar{d}(-{\textstyle \frac{1}{2}}z)\, \gamma^+ \gamma_5\, s({\textstyle \frac{1}{2}}z) \,|p \rangle \Big|_{z^+=0,\, {z}_T=0}\end{aligned}$$ with $P = {\textstyle \frac{1}{2}}(p+p')$, where here and in the following we do not explicitly label the hadron spin degrees of freedom. We define the corresponding $p\to \Theta$ transition GPDs by $$\begin{aligned} \label{gpd-pos} F_V &=& \frac{1}{2P^+} \left[ H(x,\xi,t)\, \bar{u}(p') \gamma^+ u(p) + E(x,\xi,t)\, \bar{u}(p') \frac{i \sigma^{+\alpha} (p'-p)_\alpha}{{m_\Theta}+m_N} u(p) \, \right] , \nonumber \\ F_A &=& \frac{1}{2P^+} \left[ \tilde{H}(x,\xi,t)\, \bar{u}(p') \gamma^+ \gamma_5 u(p) + \tilde{E}(x,\xi,t)\, \bar{u}(p') \frac{\gamma_5\, (p'-p)^+}{{m_\Theta}+m_N} u(p) \, \right]\end{aligned}$$ for ${\eta_\Theta}= 1$ and by $$\begin{aligned} \label{gpd-neg} F_V &=& \frac{1}{2P^+} \left[ \tilde{H}(x,\xi,t)\, \bar{u}(p') \gamma^+ \gamma_5 u(p) + \tilde{E}(x,\xi,t)\, \bar{u}(p') \frac{\gamma_5\, (p'-p)^+}{{m_\Theta}+m_N} u(p) \, \right] , \nonumber \\ F_A &=& \frac{1}{2P^+} \left[ H(x,\xi,t)\, \bar{u}(p') \gamma^+ u(p) + E(x,\xi,t)\, \bar{u}(p') \frac{i \sigma^{+\alpha} (p'-p)_\alpha}{{m_\Theta}+m_N} u(p) \, \right]\end{aligned}$$ for ${\eta_\Theta}= -1$. Notice that the tilde in our notation indicates the dependence on the spin of the hadrons, not on the spin of the quarks. 
The scale dependence of the matrix elements is governed by the nonsinglet evolution equations for GPDs [@Muller:1994fv; @Blumlein:1997pi], with the unpolarized evolution kernels for $F_V$ and the polarized ones for $F_A$. Isospin invariance gives $\langle \Theta | \bar{d}_\alpha s_\beta | p\rangle = - \langle \Theta | \bar{u}_\alpha s_\beta | n\rangle$, so that the transition GPDs for $n \to \Theta$ and those for $p \to \Theta$ are equal up to a global sign. For simplicity we write $F_V$, $F_A$ and $H$, $E$, $\tilde{H}$, $\tilde{E}$ without labels for the transition $p\to \Theta$. The value of $x$ determines the partonic interpretation of the GPDs. For $\xi<x<1$ the proton emits an $s$ quark and the $\Theta$ absorbs a $d$ quark, whereas for $-1<x<-\xi$ the proton emits a $\bar{d}$ and the $\Theta$ absorbs an $\bar{s}$. The region $-\xi<x<\xi$ describes emission of an $s\bar{d}$ pair by the proton. In all three cases sea quark degrees of freedom in the proton are involved. The interpretation of GPDs becomes yet more explicit when the GPDs are expressed as the overlap of light-cone wave functions for the proton and the $\Theta$. As shown in Fig. \[fig:partons\], the proton must be in *at least* a five-quark configuration for $\xi<|x|<1$ and *at least* a seven-quark configuration for $-\xi<x<\xi$. We emphasize however that all possible spectator configurations have to be summed over in the wave function overlap, including Fock states with additional partons in the nucleon and in the pentaquark. As shown in [@Burkardt:2000za], GPDs contain information about the spatial structure of hadrons. A Fourier transform converts their dependence on $t$ into the distribution of quarks or antiquarks in the plane transverse to their direction of motion in the infinite momentum frame. This tells us about the transverse size of the hadrons in question. 
The wave function overlap can also be formulated in this impact parameter representation, with wave functions specifying transverse position and plus-momentum fraction of each parton. This has in fact been done in Fig. \[fig:partons\], and we refer to [@Diehl:2002he] for a full discussion. We see in particular that for $\xi<|x|<1$ the transverse positions of all partons must match in the proton and the $\Theta$, including the quark or antiquark taking part in the hard scattering. For $-\xi<x<\xi$ the transverse positions of the spectator partons in the proton must match those in the $\Theta$, whereas the $s$ and $\bar{d}$ are extracted from the proton at the same transverse position (within an accuracy of order $1/Q$ set by the factorization scale of the hard scattering process). Note that small-size quark-antiquark pairs with net strangeness are not necessarily rare in the proton, as is shown by the rather large kaon pole contribution in the $p\to \Lambda$ transition (see the discussion after (\[pole-factor\]) below). In summary, the $p\to \Theta$ transition GPDs probe the partonic structure of the $\Theta$, requiring the plus-momenta and transverse positions of its partons to match with appropriate configurations in the nucleon. The helicity and color structure of the parton configurations must match as well. We recall that for elastic transitions like $p\to p$ the analogs of the matrix elements (\[matrix-elements\]) reduce to the usual parton densities in the forward limit of $\xi=0$ and $t=0$. One then has $H(x) = q(x)$, $H(-x) = -\bar{q}(x)$ and $\tilde{H}(x) = \Delta q(x)$, $\tilde{H}(-x) = \Delta\bar{q}(x)$ for $x>0$, and the positivity of parton densities results in inequalities like $| H(x) + H(-x) | \le |H(x) - H(-x)|$ and $| \tilde{H}(x) | \le |H(x)|$. One may expect that this hierarchy persists at least in a limited region of nonzero $\xi$ and $t$. For the $p\to\Theta$ transition the situation is different. 
At given $\xi$ and $t$ the combinations $F_V(x) - F_V(-x)$ and $F_A(x) + F_A(-x)$ still give the sum of the configurations in Fig. \[fig:partons\] with emission of a quark ($\xi<x<1$) and of an antiquark ($\xi<-x<1$), whereas $F_V(x) + F_V(-x)$ and $F_A(x) - F_A(-x)$ give their difference. In the same $x$ regions $F_V$ still gives the sum and $F_A$ the difference of configurations with positive and negative helicity of the emitted and the absorbed parton. There are however no positivity constraints now, since the $p\to \Theta$ transition GPDs do not become densities in any limit. They rather describe the correlation between wave functions of $\Theta$ and nucleon, which may be quite different. Knowledge of the relative size of the GPD combinations just discussed would in turn translate into characteristic information about the wave functions of the $\Theta$ relative to those of the proton. In the transition GPDs we have defined, the $\Theta$ is treated as a stable hadron. The amplitude of a full process, say $e p\to e \bar{K}^0 K^0 p$ for definiteness, contains in addition a factor for the decay $\Theta \to K^0 p$ and a term for the nonresonant $K^0 p$ continuum. An alternative description is to use matrix elements analogous to (\[matrix-elements\]) directly for the hadronic state $|K^0 p \rangle$ of given invariant mass, including both resonance and continuum. The leading-twist expression of the amplitude then contains $p\to K^0 p$ transition GPDs, which have complex phases describing the strong interactions in the $K^0 p$ system. In the partial wave relevant for the $\Theta$ resonance, these phases will show a strong variation in the invariant $K^0 p$ mass around ${m_\Theta}$. 
Scattering amplitude and cross section {#sec:scatter} ====================================== The scattering amplitude for longitudinal polarization of photon and meson at leading order in $1/Q$ and in $\alpha_s$ readily follows from the general expressions for meson production given in [@Diehl:2003ny]. One has $$\begin{aligned} \label{amp-p} \mathcal{A}_{\gamma^* p\to \bar{K}^0\, \Theta} &=& i e\, \frac{8\pi\alpha_s}{27}\, \frac{f_K}{Q}\, \Bigg[ I_K \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big( F_A(x,\xi,t) - F_A(-x,\xi,t) \Big) \nonumber \\ && \hspace{4.2em} {}+ J_K \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big( F_A(x,\xi,t) + F_A(-x,\xi,t) \Big) \, \Bigg] , \nonumber \\ \mathcal{A}_{\gamma^* p\to \bar{K}^{*0}\, \Theta} &=& i e\, \frac{8\pi\alpha_s}{27}\, \frac{f_{K^*}}{Q}\, \Bigg[ I_{K^*} \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big( F_V(x,\xi,t) - F_V(-x,\xi,t) \Big) \nonumber \\ && \hspace{3.8em} {}+ J_{K^*} \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big( F_V(x,\xi,t) + F_V(-x,\xi,t) \Big) \, \Bigg] ,\end{aligned}$$ independently of the parity of the $\Theta$. Our phase conventions for meson states are fixed by $$\begin{aligned} \label{kaon-decay} \langle \bar{K}^0(q') | \bar{s}(0) \gamma^\mu\gamma_5\, d(0) | 0\rangle &=& \langle K^-(q') | \bar{s}(0) \gamma^\mu\gamma_5\, u(0) | 0\rangle \;=\; -i q'^\mu f_K , \nonumber \\ \langle \bar{K}^{*0}(q',\epsilon') | \bar{s}(0) \gamma^\mu d(0) | 0\rangle &=& \langle K^{*-}(q',\epsilon') | \bar{s}(0) \gamma^\mu u(0) | 0\rangle \;=\; -i \epsilon'^\mu m_{K^*} f_{K^*} ,\end{aligned}$$ where $f_{K} = 160 {\,{\rm MeV}}$, $f_{K^*} = (218 \pm 4) {\,{\rm MeV}}$ [@Beneke:2003zv], and $\epsilon'$ is the polarization vector of the $K^*$. This differs from the convention in [@Diehl:2003ny] by the factors of $-i$ on the r.h.s. 
In (\[amp-p\]) we have integrals $$\begin{aligned} \label{DA-integrals} I &=& \int_0^1 dz\, \frac{1}{z(1-z)}\, \phi(z) \;=\; 6 \sum_{n=0}^{\infty} a_{2n} , \nonumber \\ J &=& \int_0^1 dz\, \frac{2z-1}{z(1-z)}\, \phi(z) \;=\; 6 \sum_{n=0}^{\infty} a_{2n+1} ,\end{aligned}$$ over the twist-two distribution amplitudes of either $\bar{K}^{0}$ or $\bar{K}^{*0}$. Our DAs are normalized to $\int_0^1 dz\, \phi(z) = 1$, and $z$ denotes the momentum fraction of the $s$-quark in the kaon. Because of isospin invariance $\bar{K}^0$ and $K^-$ have the same DA, as have $\bar{K}^{*0}$ and $K^{*-}$. In (\[DA-integrals\]) we have used the expansion of DAs on Gegenbauer polynomials, $$\label{Gegenbauer} \phi(z) = 6 z(1-z) \sum_{n=0}^\infty a_n \, C^{3/2}_n(2z-1)$$ with $a_0=1$ due to our normalization condition. Note that odd Gegenbauer coefficients $a_{2n+1}$ are nonzero due to the breaking of flavor SU(3) symmetry. A recent estimate from QCD sum rules by Ball and Boglione [@Ball:2003sc] obtained $a_1^{K^-} = -0.18 \pm 0.09$, $a_2^{K^-} = 0.16 \pm 0.10$ and $a_1^{K^{*-}} = -0.4 \pm 0.2$, $a_2^{K^{*-}} = 0.09 \pm 0.05$ at a factorization scale $\mu = 1 {\,{\rm GeV}}$. Note that the sign of $a_1$ in both cases is such that the $s$ quark tends to carry less momentum than the light antiquark, see the discussion in [@Ball:2003sc]. In contrast, Bolz et al. [@Bolz:1997ez] estimated $a_1^{K^-}$ to be of order $+0.1$ for the kaon, using results of a calculation in the Nambu-Jona-Lasinio model. Note that the combination of GPDs going with $I_K$ corresponds to the difference of quark and antiquark configurations in the sense of our discussion at the end of Section \[sec:gpds\]. In contrast, the combination going with $I_{K^*}$ corresponds to the sum of quark and antiquark contributions. Given our ignorance about the relative sign of the transition GPDs at $x$ and $-x$ we cannot readily say whether the terms with $I$ or with $J$ tend to dominate in the amplitudes (\[amp-p\]). 
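The projections in (\[DA-integrals\]) follow from the moments $\int_{-1}^{1}C_{n}^{3/2}(u)\,du = 2$ for even $n$ (zero for odd $n$) and $\int_{-1}^{1}u\,C_{n}^{3/2}(u)\,du = 2$ for odd $n$ (zero for even $n$), which can be derived from the Gegenbauer generating function. A numerical sketch (the higher coefficients $a_n$ below are illustrative placeholders, not fitted values):

```python
import numpy as np

def gegenbauer_32(n, u):
    # C_n^{3/2}(u) via the standard three-term recurrence for alpha = 3/2:
    # k C_k = 2u (k + 1/2) C_{k-1} - (k + 1) C_{k-2}
    c0, c1 = np.ones_like(u), 3.0 * u
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0 * u * (k + 0.5) * c1 - (k + 1.0) * c0) / k
    return c1

# sample coefficients; a_0 = 1 by the normalization of the DA
a = [1.0, -0.18, 0.16, 0.05, -0.02]
z = np.linspace(0.0, 1.0, 200_001)
u = 2.0 * z - 1.0
dz = z[1] - z[0]
# phi(z)/(z(1-z)) = 6 sum_n a_n C_n^{3/2}(2z - 1): the z(1-z) factor cancels,
# so the integrands below are smooth polynomials
series = 6.0 * sum(an * gegenbauer_32(n, u) for n, an in enumerate(a))

def trap(f):
    return (f[1:] + f[:-1]).sum() * dz / 2.0

I_num = trap(series)                     # I = int_0^1 dz phi / (z(1-z))
J_num = trap((2.0 * z - 1.0) * series)   # J = int_0^1 dz (2z-1) phi / (z(1-z))
assert abs(I_num - 6.0 * (a[0] + a[2] + a[4])) < 1e-6
assert abs(J_num - 6.0 * (a[1] + a[3])) < 1e-6
```

With $a_0=1$ this makes explicit why $I$ is close to $6$ for any plausible DA, while $J$ is controlled entirely by the SU(3)-breaking odd coefficients.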
For a neutron target the scattering amplitudes read $$\begin{aligned} \label{amp-n} \mathcal{A}_{\gamma^* n\to {K}^-\, \Theta} &=& -i e\, \frac{8\pi\alpha_s}{27}\, \frac{f_K}{Q}\, \Bigg[ I_K \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big(F_A(x,\xi,t) + 2 F_A(-x,\xi,t) \Big) \nonumber \\ && \hspace{5.0em} {}+ J_K \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big(F_A(x,\xi,t) - 2 F_A(-x,\xi,t) \Big) \, \Bigg], \nonumber \\ \mathcal{A}_{\gamma^* n\to {K}^{*-}\, \Theta} &=& -i e\, \frac{8\pi\alpha_s}{27}\, \frac{f_{K^*}}{Q}\, \Bigg[ I_{K^*} \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big(F_V(x,\xi,t) + 2 F_V(-x,\xi,t) \Big) \nonumber \\ && \hspace{4.6em} {}+ J_{K^*} \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big(F_V(x,\xi,t) - 2 F_V(-x,\xi,t) \Big) \, \Bigg],\end{aligned}$$ where we have used the isospin relations between the GPDs for $p\to \Theta$ and $n\to \Theta$ and between the DAs for neutral and charged kaons. Due to the different factors for a photon coupling to $d$ and $u$ quarks, the proton and neutron amplitudes involve different combinations of GPDs at $x$ and $-x$. Information on the relative size of these combinations can thus be obtained by comparing data for proton and neutron targets, given our at least qualitative knowledge about the relative size of the integrals $I$ and $J$ over meson DAs. If for example one had $F_A(x,\xi,t) \approx F_A(-x,\xi,t)$, the amplitude for $\gamma^* p\to \bar{K}^0 \Theta$ would be dominated by the SU(3) breaking integral $J_K$ and hence be suppressed, whereas no such suppression would occur in the amplitude for $\gamma^* n \to K^- \Theta$. Comparison of $K$ and $K^*$ production on a given target can in turn reveal the relative size between the matrix elements $F_A$ and $F_V$. 
To leading accuracy in $1/Q^2$ and in $\alpha_s$, the $\gamma^* p$ cross section for a longitudinal photon on a transversely polarized target is $$\begin{aligned} \label{X-section} \frac{d\sigma_L}{dt} &=& \frac{64\pi^2 \alpha_{\mathit{em}}^{\phantom{2}} \alpha_s^2}{729}\, \frac{f_{K^{(*)}}^2}{Q^6}\, \frac{\xi^2}{1-\xi^2}\, ( S_U + S_T\, \sin\beta ) ,\end{aligned}$$ where we use Hand’s convention [@Hand:1963bb] for the virtual photon flux. $\beta$ is the azimuthal angle between the hadronic plane and the transverse target spin as defined in Fig. \[fig:angle\].[^1] The cross section for an unpolarized target is simply obtained by omitting the $\beta$-dependent term. To have concise expressions for $S_U$ and $S_T$ we define $$\begin{aligned} \label{gpd-integrals} \mathcal{H}(\xi,t) &=& I_{K^{(*)}} \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big( H(x,\xi,t) - H(-x,\xi,t) \Big) \nonumber \\ &+& J_{K^{(*)}} \int_{-1}^1 \frac{dx}{\xi-x-i\epsilon}\, \Big( H(x,\xi,t) + H(-x,\xi,t) \Big)\end{aligned}$$ and analogous expressions $\mathcal{E}$, $\tilde\mathcal{H}$, $\tilde\mathcal{E}$ for the other GPDs. 
For ${\eta_\Theta}= 1$ we have $$\begin{aligned} \label{H-tilde-combinations} S_U &=& (1-\xi^2) |\tilde\mathcal{H}|^2 + \frac{({m_\Theta}-m_N)^2 - t}{({m_\Theta}+m_N)^2}\, \xi^2 |\tilde\mathcal{E}|^2 - \Bigg( \xi + \frac{{m_\Theta}-m_N}{{m_\Theta}+m_N} \Bigg) 2\xi\, {\mathrm{Re}\,}( \tilde\mathcal{E}^* \tilde\mathcal{H} ) \, , \nonumber \\ S_T &=& -\sqrt{1-\xi^2}\, \frac{\sqrt{t_0-t}}{{m_\Theta}+m_N} \, 2\xi\, {\mathrm{Im}\,}( \tilde\mathcal{E}^* \tilde\mathcal{H} )\end{aligned}$$ for $K$ production and $$\begin{aligned} \label{H-combinations} S_U &=& (1-\xi^2) |\mathcal{H}|^2 - \Bigg( \frac{2\xi ({m_\Theta}^2 - m_N^2) + t}{({m_\Theta}+m_N)^2} + \xi^2 \Bigg) |\mathcal{E}|^2 - \Bigg( \xi + \frac{{m_\Theta}-m_N}{{m_\Theta}+m_N} \Bigg) 2\xi\, {\mathrm{Re}\,}( \mathcal{E}^* \mathcal{H} ) \, , \nonumber \\ S_T &=& \sqrt{1-\xi^2}\, \frac{\sqrt{t_0-t}}{{m_\Theta}+m_N} \, 2{\mathrm{Im}\,}( \mathcal{E}^* \mathcal{H} )\end{aligned}$$ for $K^*$ production. If ${\eta_\Theta}= -1$ then (\[H-tilde-combinations\]) describes $K^*$ production and (\[H-combinations\]) describes $K$ production. We see that one cannot determine the parity of the $\Theta$ from the leading twist cross section (\[X-section\]) without knowledge about the dependence of $\mathcal{H}$, $\mathcal{E}$, $\tilde\mathcal{H}$, $\tilde\mathcal{E}$ on $t$ or $\xi$. The same holds for scattering on a neutron target, where one has to replace $H(-x,\xi,t)$ with $-2 H(-x,\xi,t)$ in (\[gpd-integrals\]) and likewise change the expressions for the other GPDs, as follows from (\[amp-p\]) and (\[amp-n\]). There is theoretical and phenomenological evidence that higher-order corrections in $\alpha_s$ and in $1/Q$ can be substantial in meson electroproduction at moderate values of $Q^2$, see [@Goeke:2001tz; @Diehl:2003ny] for a discussion and references. 
For $K^*$ production one can in particular expect an important contribution from transverse polarization of the photon and the meson, in analogy to what has been measured for exclusive electroproduction of a $\rho^0$. A minimum requirement for the applicability of a leading-twist description is that $Q^2$ should be large compared to $-t$ and $m^2_K$ or $m_{K^*}^2$, which directly enter in the kinematics of the hard scattering process and should be negligible there. In kinematic relations, the squared baryon masses $m^2_N$ and ${m_\Theta}^2$ typically occur as corrections to terms of size $W^2$, although a complete analysis of target mass corrections in exclusive processes has not been performed yet. There are arguments [@Frankfurt:1999fp; @Goeke:2001tz; @Diehl:2003ny] that theoretical uncertainties from some of the corrections just discussed cancel at least partially in suitable ratios of cross sections. At the level of the leading order formulae (\[amp-p\]) and (\[amp-n\]) we see for instance that the scale uncertainty in $\alpha_s$ cancels in the ratio of cross sections on a proton and a neutron target, and that the dependence on the meson structure comes only via the ratio $J/I$. Other processes to compare with are given by $ep \to e K^0 \Sigma^+$, $ep \to e K^+ \Sigma^0$, $ep \to e K^+ \Lambda$ or their analogs for vector kaons or a neutron target, with the production of either ground state or excited hyperons. Such channels may also be useful for cross checks of experimental resolution and energy calibration. Their amplitudes are given as in (\[amp-p\]) with an appropriate replacement of matrix elements $F_V$ or $F_A$ listed in Table \[tab:channels\]. We have used isospin invariance to replace the transition GPDs from the neutron with those from the proton. Isospin invariance further gives $F_{p\to \Sigma^+} = \sqrt{2}\, F_{p\to \Sigma^0}$. 
$$\renewcommand{\arraystretch}{1.2} \begin{array}{lll} \hline\hline & ~~~~~~~~~~~~~~~I & ~~~~~~~~~~~~~~~J \\ \hline \gamma^* p \to \bar{K}^0 \Theta & \phantom{-} F_{p\to \Theta}(x) - F_{p\to \Theta}(-x) & \phantom{-} F_{p\to \Theta}(x) + F_{p\to \Theta}(-x) \\ \gamma^* p \to K^0 \Sigma^+ & \phantom{-} F_{p\to \Sigma^+}(x) - F_{p\to \Sigma^+}(-x) & - [ F_{p\to \Sigma^+}(x) + F_{p\to \Sigma^+}(-x) ] \\ \gamma^* p \to K^+ \Sigma^0 & - [ 2F_{p\to \Sigma^0}(x) + F_{p\to \Sigma^0}(-x) ] & \phantom{-} 2F_{p\to \Sigma^0}(x) - F_{p\to \Sigma^0}(-x) \\ \gamma^* p \to K^+ \Lambda & - [ 2F_{p\to \Lambda}(x) + F_{p\to \Lambda}(-x) ] & \phantom{-} 2F_{p\to \Lambda}(x) - F_{p\to \Lambda}(-x) \\ \hline \gamma^* n \to {K}^- \Theta & - [ F_{p\to \Theta}(x) + 2F_{p\to \Theta}(-x) ] & - [ F_{p\to \Theta}(x) - 2F_{p\to \Theta}(-x) ] \\ \gamma^* n \to K^+ \Sigma^- & \phantom{-} 2F_{p\to \Sigma^+}(x) + F_{p\to \Sigma^+}(-x) & - [ 2F_{p\to \Sigma^+}(x) - F_{p\to \Sigma^+}(-x) ] \\ \gamma^* n \to K^0 \Sigma^0 & - [ F_{p\to \Sigma^0}(x) - F_{p\to \Sigma^0}(-x) ] & \phantom{-} F_{p\to \Sigma^0}(x) + F_{p\to \Sigma^0}(-x) \\ \gamma^* n \to K^0 \Lambda & \phantom{-} F_{p\to \Lambda}(x) - F_{p\to \Lambda}(-x) & - [ F_{p\to \Lambda}(x) + F_{p\to \Lambda}(-x) ] \\ \hline\hline \end{array} \renewcommand{\arraystretch}{1}$$ For transitions within the ground state baryon octet, SU(3) flavor symmetry relates the transition GPDs to the flavor diagonal ones for $u$, $d$ and $s$ quarks in the proton [@Frankfurt:1999fp], $$\begin{aligned} F_{p\to \Lambda} &=& \frac{1}{\sqrt{6}}\, \Big( F^s_{p\to p} + F^d_{p\to p} - 2F^u_{p\to p} \Big) , \nonumber \\ F_{p\to \Sigma^0} &=& \frac{1}{\sqrt{2}}\, \Big( F^s_{p\to p} - F^d_{p\to p} \Big) .\end{aligned}$$ One may expect these relations to hold reasonably well, except for the distributions $\tilde{E}$, where SU(3) symmetry is strongly broken by the difference between pion and kaon mass in the respective pole contributions (see the following section). 
In the approximation of SU(3) symmetry, comparison of $\Theta$ production with the corresponding hyperon channels would thus compare the $N\to \Theta$ transition GPDs with the GPDs of the nucleon itself. Kaon pole contributions {#sec:pole} ======================= In analogy to the well-known pion exchange contribution to the elastic nucleon GPDs, the axial vector matrix elements $F_A$ for the transition between nonstrange and strange baryons receive a contribution from kaon exchange in the $t$-channel, as shown in Fig. \[fig:kaon-pole\]. It can be expressed in terms of the kaon distribution amplitude and the appropriate baryon-kaon coupling if $t=m_K^2$. This is of course outside the physical region for our electroproduction processes, where the contribution from the kaon pole is expected to be less and less dominant for increasing $-t$. With this caveat in mind we will now discuss the kaon pole contribution to the $N\to \Theta$ GPDs, as this can be done without a particular dynamical model for the $\Theta$. We recall at this point that the minimal kinematically allowed value of $-t$ at given $\xi$, $$\label{tmin} -t_0 = \frac{2 \xi^2 ({m_\Theta}^2 + m_N^2) + 2 \xi ({m_\Theta}^2 - m_N^2)}{1-\xi^2} ,$$ is not so small in typical kinematics of fixed target experiments. This is shown in Fig. \[fig:tmin\], where we have replaced $\xi$ with $x_B$ using the relation (\[xi-vs-xB\]) valid in Bjorken kinematics. We also show the corresponding values of $-t_0$ for the transition from the nucleon to a ground state $\Sigma$ or $\Lambda$. We define the $\Theta NK$ coupling through $$\mathcal{L} = ig_{\Theta NK} K_d (\bar{\Theta} \gamma_5 p) - ig_{\Theta NK} K_u (\bar{\Theta} \gamma_5 n) + \textrm{c.c.}$$ if ${\eta_\Theta}= 1$, and through $$\label{g-def-negative} \mathcal{L} = ig_{\Theta NK} K_d (\bar{\Theta} p) - ig_{\Theta NK} K_u (\bar{\Theta} n) + \textrm{c.c.}$$ if ${\eta_\Theta}= -1$. 
Here $K_d$ denotes the field that creates a $\bar{K}^0$ and $K_u$ the one creating a $K^-$. The factor of $i$ in (\[g-def-negative\]) is dictated by time reversal invariance, since we choose the phase of the $\Theta$ field such that it has the same transformation under time reversal as the nucleon field. Then the GPDs defined in (\[gpd-neg\]) are real valued. The above definitions can be rewritten in terms of the vector or axial vector current using the free Dirac equation for the $\Theta$ and the nucleon fields. Using the method of [@Mankiewicz:1998kg] we obtain kaon pole contributions $$\begin{aligned} \label{gpd-pole-pos} \xi \tilde{E}_{\mathrm{pole}} &=& \frac{g_{\Theta NK} f_K ({m_\Theta}+m_N)}{m_K^2 - t}\, \frac{1}{2} \phi\Big(\frac{x+\xi}{2\xi}\Big) \nonumber \\ \tilde{H}_{\mathrm{pole}} &=& H_{\mathrm{pole}} \;=\; E_{\mathrm{pole}} \;=\; 0\end{aligned}$$ for ${\eta_\Theta}= 1$ and $$\begin{aligned} \label{gpd-pole-neg} E_{\mathrm{pole}} &=& -H_{\mathrm{pole}} = \frac{g_{\Theta NK} f_K ({m_\Theta}+m_N)}{m_K^2 - t}\, \frac{1}{2} \phi\Big(\frac{x+\xi}{2\xi}\Big) \nonumber \\ \tilde{H}_{\mathrm{pole}} &=& \tilde{E}_{\mathrm{pole}} \;=\; 0\end{aligned}$$ for ${\eta_\Theta}= -1$, where it is understood that $x$ is limited to the region between $-\xi$ and $\xi$, and where $\phi$ is the same kaon distribution amplitude we have encountered earlier. At the level of the amplitudes (\[amp-p\]) and (\[amp-n\]) for $K$ production one finds $$\begin{aligned} \label{amplitude-pole} \mathcal{A}_{\gamma^* p\to \bar{K}^0\, \Theta}^{\mathrm{pole}} &=& ie\, \bar{u}(p') \gamma_5\, {u}(p)\, \frac{g_{\Theta NK}}{m_K^2 - t}\, Q F_{\bar{K}^0}(Q^2) , \nonumber \\ \mathcal{A}_{\gamma^* n\to {K}^-\, \Theta}^{\mathrm{pole}} &=& -ie\, \bar{u}(p') \gamma_5\, {u}(p)\, \frac{g_{\Theta NK}}{m_K^2 - t}\, Q F_{K^-}(Q^2)\end{aligned}$$ for ${\eta_\Theta}= 1$, whereas for ${\eta_\Theta}= -1$ one simply has to replace $\bar{u}(p') \gamma_5\, {u}(p)$ with $\bar{u}(p') {u}(p)$ in both relations. 
Here $$\begin{aligned} \label{kaon-ff} F_{\bar{K}^0}(Q^2) &=& - \frac{2\pi \alpha_s}{9}\, \frac{f_K^2}{Q^2}\, \frac{4}{3} I_K J_K \nonumber \\ F_{K^-}(Q^2) &=& - \frac{2\pi \alpha_s}{9}\, \frac{f_K^2}{Q^2}\, \Big( I_K^2 - \frac{2}{3} I_K J_K + J_K^2 \Big)\end{aligned}$$ are the elastic kaon form factors at leading accuracy in $1/Q^2$ and $\alpha_s$. We note that the relations (\[amplitude-pole\]) remain valid beyond this approximation, which in analogy to the pion form factor we expect to receive important corrections at moderate $Q^2$, see [@Diehl:2003ny] for references. The form factors are normalized as $F_{K^-}(0) = -1$ and $F_{\bar{K}^0}(0) = 0$, and at nonzero $t$ the neutral kaon form factor is only nonzero thanks to flavor SU(3) breaking. The contribution of the squared kaon pole amplitude to the $\gamma^* p \to \bar{K}^0 \Theta$ or $\gamma^* n \to K^- \Theta$ cross section finally reads $$\label{X-section-pole} \frac{d\sigma_L}{dt} \Bigg|_{\mathrm{pole}} = \alpha_{\mathit{em}}^{\phantom{2}} \frac{F_K^2(Q^2)}{Q^2}\, \frac{x_B^2}{4 (1-x_B)}\; g_{\Theta NK}^2 \, \frac{({m_\Theta}- {\eta_\Theta}m_N)^2 - t}{(m_K^2-t)^2} ,$$ where $F_K$ is the appropriate form factor for the $\bar{K}^0$ or the $K^-$. Of course, the pole contribution (\[amplitude-pole\]) also appears in the cross section via its interference with the non-pole parts of the amplitude, which we cannot estimate at this point. For kinematical reasons $\Theta\to K^0 p$ and $\Theta \to K^+ n$ are the only strong decays of the $\Theta$, so that its total width $\Gamma_\Theta$ translates to a good accuracy into a value of $g^2_{\Theta NK}$, $$\begin{aligned} \label{theta-decay} \Gamma_\Theta = \frac{g^2_{\Theta NK}}{4\pi}\, k\; \frac{({m_\Theta}- {\eta_\Theta}m_N)^2 - m_K^2}{{m_\Theta}^2} ,\end{aligned}$$ where $k \approx 268 {\,{\rm MeV}}$ is the momentum of the decay nucleon in the $\Theta$ rest frame. 
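Since the common prefactor $2\pi\alpha_s f_K^2/(9Q^2)$ cancels in the ratio, the suppression factor $(F_{\bar{K}^0}/F_{K^-})^2$ follows from $I_K$ and $J_K$ alone. A short Python sketch (ours, with the truncated Gegenbauer values used earlier):

```python
# Leading-order kaon form factors of eq. (kaon-ff), with the common positive
# prefactor 2 pi alpha_s f_K^2 / (9 Q^2) dropped (it cancels in the ratio).
a1, a2 = -0.18, 0.16
I_K = 6.0 * (1.0 + a2)
J_K = 6.0 * a1

F_K0bar = -(4.0 / 3.0) * I_K * J_K                        # vanishes if a1 = 0
F_Kminus = -(I_K**2 - (2.0 / 3.0) * I_K * J_K + J_K**2)

suppression = (F_K0bar / F_Kminus) ** 2
print(round(suppression, 3))   # about 0.034: neutral-kaon channels are suppressed
```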
Taking an indicative value of $\Gamma_\Theta = 10 {\,{\rm MeV}}$ we obtain $g_{\Theta NK}^2 /(4\pi) = 0.77$ for ${\eta_\Theta}= 1$ and $g_{\Theta NK}^2 /(4\pi) = 0.015$ for ${\eta_\Theta}= -1$. The squared couplings corresponding to different values of $\Gamma_\Theta$ are readily obtained by simple rescaling. To be insensitive to the theoretical uncertainties in evaluating the kaon form factors, we compare in the following the kaon pole contributions to different baryon transitions. In Fig. \[fig:pole\] we show the factor $$\label{pole-factor} G(t) = g_{\Theta NK}^2 \, \frac{({m_\Theta}- {\eta_\Theta}m_N)^2 - t}{(m_K^2-t)^2}$$ appearing in the pole contribution (\[X-section-pole\]) to the $\gamma^* n\to K^- \Theta$ cross section, as well as its analogs for the pole contributions to $\gamma^* p \to K^+ \Sigma^0$ and to $\gamma^* p \to K^+ \Lambda$. Due to isospin invariance the corresponding factor for $\gamma^* n \to K^+ \Sigma^-$ is twice as large as for $\gamma^* p \to K^+ \Sigma^0$. Following [@Frankfurt:1999xe] we take $g_{\Sigma NK}^2 /(4\pi) = 1.2$ and $g_{\Lambda NK}^2 /(4\pi) = 14$ for the couplings between the proton and the neutral hyperons. As an indication of their uncertainties one may compare these values with those given in [@Guidal:2003qs], namely $g_{\Sigma NK}^2 /(4\pi) = 1.6$ and $g_{\Lambda NK}^2 /(4\pi) = 10.6$. We remark that according to the estimates of [@Frankfurt:1999xe], the overall cross section for $\gamma^* p\to K^+ \Lambda$ is comparable in size to the one for $\gamma^* p \to \pi^+ n$ in kinematics where both processes receive substantial contributions from the kaon or pion pole. Note that the much smaller coupling for a negative-parity $\Theta$ is partially compensated in the kaon pole contribution to the cross section by a larger kinematic factor in the numerator of (\[X-section-pole\]). The ratio of the factors $G(t)$ for ${\eta_\Theta}=-1$ and for ${\eta_\Theta}=1$ is shown in Fig. \[fig:pole-ratio\]. 
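The numbers quoted above can be reproduced to good approximation from eqs. (\[theta-decay\]) and (\[pole-factor\]). In this sketch (ours) the masses in GeV are indicative values, and small shifts in them move the $\eta_\Theta = 1$ coupling by a few percent around the quoted 0.77:

```python
import math

m_T, m_N, m_K = 1.54, 0.939, 0.494   # indicative masses in GeV
# Decay momentum of the nucleon in the Theta rest frame (about 0.27 GeV)
k = math.sqrt((m_T**2 - (m_N + m_K)**2) * (m_T**2 - (m_N - m_K)**2)) / (2.0 * m_T)

def g2_over_4pi(width, eta):
    """Invert eq. (theta-decay) for g^2/(4 pi); width in GeV."""
    return width * m_T**2 / (k * ((m_T - eta * m_N)**2 - m_K**2))

def G(t, width, eta):
    """Pole factor of eq. (pole-factor); t in GeV^2."""
    g2 = 4.0 * math.pi * g2_over_4pi(width, eta)
    return g2 * ((m_T - eta * m_N)**2 - t) / (m_K**2 - t)**2

width = 0.010                          # Gamma_Theta = 10 MeV
gp, gm = g2_over_4pi(width, +1), g2_over_4pi(width, -1)
print(round(gp, 2), round(gm, 3))      # close to the quoted 0.77 and 0.015
print(round(G(-0.5, width, -1) / G(-0.5, width, +1), 2))
```

The last line evaluates the ratio of pole factors for the two parities at a sample $t = -0.5{\,{\rm GeV}}^2$, the quantity shown in Fig. \[fig:pole-ratio\].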
Given the presence of contributions not due to the kaon pole, it is not clear whether one could use the measured size and $t$ dependence of the cross section to infer the parity of the $\Theta$. The factors $G(t)$ shown in Fig. \[fig:pole\] also describe the neutral kaon pole contributions in $\gamma^* p \to \bar{K}^0 \Theta$, $\gamma^* n \to K^0 \Sigma^0$ and $\gamma^* n \to K^0 \Lambda$. Compared with the respective charged kaon pole contributions in $\gamma^* n \to {K}^- \Theta$, $\gamma^* p \to K^+ \Sigma^0$ and $\gamma^* p \to K^+ \Lambda$, they are significantly suppressed by a factor $(F_{\bar{K}^0} /F_{K^-})^2$ at cross section level. This factor is about 0.03 if we take the leading-order expressions (\[kaon-ff\]) of the form factors together with the estimates of [@Ball:2003sc] for the Gegenbauer coefficients $a_1$ and $a_2$ in the kaon DA, given below (\[Gegenbauer\]). Conclusions {#sec:concl} =========== We have investigated exclusive electroproduction of a $\Theta^+$ pentaquark on the nucleon at large $Q^2$, large $W^2$ and small $t$. Such a process provides a rather clean environment to study the structure of the pentaquark at parton level, in the form of well defined hadronic matrix elements of quark vector or axial vector currents. In parton language, these matrix elements describe how well parton configurations in the $\Theta$ match with appropriate configurations in the nucleon (see Fig. \[fig:partons\]). Their dependence on $t$ gives information about the size of the pentaquark. Channels with production of pseudoscalar or vector kaons and with a proton or neutron target carry complementary information. The transition to the $\Theta$ requires sea quark degrees of freedom in the nucleon, and we hope that theoretical approaches including such degrees of freedom will be able to evaluate the matrix elements given in (\[matrix-elements\]). 
Candidates for this may for instance be the chiral quark-soliton model or lattice QCD, both of which have been used to calculate the corresponding matrix elements for elastic nucleon transitions, see [@Petrov:1998kf] and [@Gockeler:2003jf]. In order to obtain observably large cross sections one may be required to go to rather modest values of $Q^2$, where the leading approximation in powers of $1/Q^2$ and of $\alpha_s$ on which we based our analysis receives considerable corrections. The associated theoretical uncertainties should be alleviated by comparing $\Theta$ production to the production of $\Sigma$ or $\Lambda$ hyperons as reference channels. In any case, even a qualitative picture of the overall magnitude and relative size of the different hadronic matrix elements accessible in the processes we propose would give information about the structure of pentaquarks well beyond the little we presently know about these intriguing members of the QCD spectrum. Acknowledgments {#acknowledgments .unnumbered} =============== We thank E. C. Aschenauer, M. Garçon, D. Hasch, and G. van der Steenhoven for helpful discussions. The work of B. P. and L. Sz. is partially supported by the French-Polish scientific agreement Polonium. CPHT is Unit[é]{} mixte C7644 du CNRS. [99]{} T. Nakano [*et al.*]{} \[LEPS Collaboration\], Phys. Rev. Lett.  [**91**]{}, 012002 (2003) \[hep-ex/0301020\];\ S. Stepanyan [*et al.*]{} \[CLAS Collaboration\], hep-ex/0307018;\ V. V. Barmin [*et al.*]{} \[DIANA Collaboration\], Phys. Atom. Nucl.  [**66**]{}, 1715 (2003) \[hep-ex/0304040\];\ A. E. Asratyan, A. G. Dolgolenko and M. A. Kubantsev, hep-ex/0309042. J. Barth [*et al.*]{} \[SAPHIR Collaboration\], hep-ex/0307083;\ V. Kubarovsky [*et al.*]{} \[CLAS Collaboration\], hep-ex/0311046;\ A. Airapetian [*et al.*]{} \[HERMES Collaboration\], hep-ex/0312044. D. Diakonov, V. Petrov and M. V. Polyakov, Z. Phys. A [**359**]{}, 305 (1997) \[hep-ph/9703373\]. M. Praszalowicz, in: M. Jezabek and M. 
Praszalowicz (Eds.), *Skyrmions and Anomalies*, World Scientific, Singapore, 1987, p. 112; Phys. Lett. B [**575**]{}, 234 (2003) \[hep-ph/0308114\];\ H. Weigel, Eur. Phys. J. A [**2**]{}, 391 (1998) \[hep-ph/9804260\]. M. Karliner and H. J. Lipkin, hep-ph/0307243;\ R. L. Jaffe and F. Wilczek, hep-ph/0307341;\ C. E. Carlson, C. D. Carone, H. J. Kwee and V. Nazaryan, Phys. Lett. B [**573**]{}, 101 (2003) \[hep-ph/0307396\]. F. Csikor, Z. Fodor, S. D. Katz and T. G. Kovacs, hep-lat/0309090;\ S. Sasaki, hep-lat/0310014. D. M[ü]{}ller, D. Robaschik, B. Geyer, F. M. Dittes and J. Hořejši, Fortschr. Phys. [**42**]{}, 101 (1994) \[hep-ph/9812448\];\ X. D. Ji, Phys. Rev. Lett.  [**78**]{}, 610 (1997) \[hep-ph/9603249\];\ A. V. Radyushkin, Phys. Rev. [**D56**]{}, 5524 (1997) \[hep-ph/9704207\]. K. Goeke, M. V. Polyakov and M. Vanderhaeghen, Prog. Part. Nucl. Phys. [**47**]{}, 401 (2001) \[hep-ph/0106012\]. M. Diehl, Phys. Rept.  [**388**]{}, 41 (2003) \[hep-ph/0307382\]. E. R. Berger, M. Diehl and B. Pire, Phys. Lett. B [**523**]{}, 265 (2001) \[hep-ph/0110080\]. J. C. Collins, L. Frankfurt and M. Strikman, Phys. Rev. D [**56**]{}, 2982 (1997) \[hep-ph/9611433\]. C. Alt [*et al.*]{} \[NA49 Collaboration\], hep-ex/0310014. S. Capstick, P. R. Page and W. Roberts, Phys. Lett. B [**570**]{}, 185 (2003) \[hep-ph/0307019\]. A. R. Dzierba, D. Krop, M. Swat, S. Teige and A. P. Szczepaniak, hep-ph/0311125. J. Bl[ü]{}mlein, B. Geyer and D. Robaschik, Phys. Lett. B [**406**]{}, 161 (1997) \[hep-ph/9705264\];\ A. V. Belitsky, A. Freund and D. M[ü]{}ller, Nucl. Phys. B [**574**]{}, 347 (2000) \[hep-ph/9912379\]. M. Burkardt, Phys. Rev. D [**62**]{}, 071503 (2000), Erratum-ibid. D [**66**]{}, 119903 (2002) \[hep-ph/0005108\];\ J. P. Ralston and B. Pire, Phys. Rev. D [**66**]{}, 111501 (2002). M. Diehl, Eur. Phys. J. C [**25**]{}, 223 (2002), Erratum-ibid. C [**31**]{}, 277 (2003) \[hep-ph/0205208\]. M. Beneke and M. Neubert, Nucl. Phys. B [**675**]{}, 333 (2003) \[hep-ph/0308039\]. P. 
Ball and M. Boglione, Phys. Rev. D [**68**]{}, 094006 (2003) \[hep-ph/0307337\]. J. Bolz, P. Kroll and G. A. Schuler, Eur. Phys. J. C [**2**]{}, 705 (1998) \[hep-ph/9704378\]. L. N. Hand, Phys. Rev.  [**129**]{}, 1834 (1963). L. L. Frankfurt, P. V. Pobylitsa, M. V. Polyakov and M. Strikman, Phys. Rev. D [**60**]{}, 014010 (1999) \[hep-ph/9901429\]. L. Mankiewicz, G. Piller and A. Radyushkin, Eur. Phys. J. C [**10**]{}, 307 (1999) \[hep-ph/9812467\]. L. L. Frankfurt, M. V. Polyakov, M. Strikman and M. Vanderhaeghen, Phys. Rev. Lett.  [**84**]{}, 2589 (2000) \[hep-ph/9911381\]. M. Guidal, J.-M. Laget and M. Vanderhaeghen, Phys. Rev. C [**68**]{}, 058201 (2003) \[hep-ph/0308131\]. V. Y. Petrov, P. V. Pobylitsa, M. V. Polyakov, I. B[ö]{}rnig, K. Goeke and C. Weiss, Phys. Rev. D [**57**]{}, 4325 (1998) \[hep-ph/9710270\];\ M. Penttinen, M. V. Polyakov and K. Goeke, Phys. Rev. D [**62**]{}, 014024 (2000) \[hep-ph/9909489\]. QCDSF Collaboration, M. G[ö]{}ckeler [*et al.*]{}, hep-ph/0304249;\ LHPC Collaboration, P. H[ä]{}gler [*et al.*]{}, Phys. Rev. D [**68**]{}, 034505 (2003) \[hep-lat/0304018\]. [^1]: Our convention for $\beta$ differs from the one in [@Goeke:2001tz; @Frankfurt:1999fp], with $(\sin\beta)_{\mathrm{here}} = - (\sin\beta)_{[8],[22]}$.
--- abstract: 'We use Langevin dynamics simulations to study dense 2d systems of particles with both size and energy polydispersity. We compare two types of bidisperse systems which differ in the correlation between particle size and interaction parameters: in one system big particles have high interaction parameters and small particles have low interaction parameters, while in the other system the situation is reversed. We study the different phases of the two systems and compare them to those of a system with size but not energy bidispersity. We show that, depending on the strength of interaction between big and small particles, cooling to low temperatures yields either homogeneous glasses or mosaic crystals.' author: - Itay Azizi - Yitzhak Rabin bibliography: - 'references.bib' title: 'Systems with size and energy polydispersity: from glasses to mosaic crystals' --- Introduction ============ Multicomponent systems of particles in which at least one of the parameters (e.g. size, interaction, etc.) varies from particle to particle exhibit rich phenomenology compared to systems in which all particles are identical. In particular, their thermodynamic phases are quite different from one-component systems [@Frenkel2013; @Evans2001; @Sollich2002]: if polydispersity is sufficiently high, there is phase separation into phases whose compositions are different from that of the parent phase, the phenomenon of fractionation [@Evans1999]. Size polydisperse systems of particles with sizes which are randomly selected from various distributions (e.g. Schultz, Gaussian, uniform) were studied using molecular dynamics simulations [@Tildesley1990; @Kofke1996; @Frenkel1998; @Tanaka2007; @Tanaka2013; @Tanaka2015; @Ingebrigtsen2015]. For example, it was shown that in a size polydisperse system with Lennard-Jones interactions on the liquid-gas coexistence line, the average particle size in the liquid phase is greater than in the gas phase [@Tildesley1990]. 
Another study of a size polydisperse system with a uniform size distribution has shown that in the liquid phase at constant density and temperature, increasing the polydispersity leads to slowing down of the dynamics and diminishing of the structural order [@Ingebrigtsen2015]. Binary size mixtures with a single interaction parameter at high density were studied in Refs. [@Onuki2006; @Onuki2007] where both the size ratio and composition (fraction of big particles) were varied; as the size ratio is increased at low temperature, hexatic order decreases and the system undergoes a transition from a mosaic crystal to a glass (the details of the transition depend on the composition). Previous studies by our group [@Lenin2015; @Lenin2016; @Dino2016; @Azizi2018; @Azizi2019; @Singh2019] have focused on energy polydisperse systems in which the parameters that characterize the strength of interactions between particles are randomly chosen from a geometric mean [@Lenin2015; @Lenin2016; @Azizi2018], uniform [@Lenin2015; @Dino2016; @Singh2019] or exponential [@Azizi2019] distributions. Using computer simulations we have shown that upon cooling, there is ordering not only of the centers of mass of the particles, but also of the identities of neighboring particles: as temperature is decreased, the system lowers its energy by arranging neighboring particles in a non-random fashion that depends on the distribution of interaction strengths and on the temperature (neighborhood identity ordering). We have also demonstrated the existence of fractionation in dilute energy polydisperse systems [@Azizi2018]: cooling from a gas phase results in liquid-gas coexistence (and at yet lower temperatures in solid-gas coexistence), where droplets of the condensed phase are enriched in highly interacting particles whereas the gas phase is enriched in weakly interacting ones. As described above, size polydisperse and energy polydisperse systems exhibit some similar phenomena such as fractionation. 
Other properties of these types of systems are quite different; for example, while crystallization is suppressed in size polydisperse systems, energy polydisperse systems crystallize into periodic structures similarly to systems of identical particles. Also, neighborhood identity ordering that was shown to exist in energy polydisperse systems has no counterpart in size polydisperse ones. In view of the above, it is interesting to explore the possibility of observing new effects in a system which combines both energy and size polydispersity. In the following we report the results of a study of a simple model of a system in which particles can have two sizes (big and small) and three interaction parameters (big-big, small-small and big-small). While models of such binary mixtures are commonly used to simulate low temperature glasses and amorphous solids (see e.g., the Kob-Andersen model [@Kob1994; @Kob1995; @Kob2008]), other aspects of their behavior such as liquid-liquid phase separation and formation of mosaic crystals, have not been explored so far. Elucidating the qualitative features of this behavior is the objective of the present study. The organization of this paper is as follows. In Sec. 2, we present the computational model and discuss the simulation algorithm. In Sec. 3, we present the results of our computer simulations and compare the behaviors of binary systems of large and small particles, with different choices of interaction parameters. In Sec. 4, we discuss the main results of this work and the new insights obtained about the physics of systems with size and energy polydispersity. Methods ======= We performed Langevin dynamics simulations in LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) of two-dimensional systems of N=2422 particles in a square box of dimensions $L_x=L_y=L=55$ (this corresponds to number density $\rho=N/L^{2}$=0.80), with periodic boundary conditions in x and y directions (in NVT ensemble). 
Particles i and j interact via Lennard-Jones (LJ) potential: $$\label{eqn:LJ} V_{ij}(r)=4\epsilon_{ij}((\sigma_{ij}/r)^{12}-(\sigma_{ij}/r)^{6})$$ where r is the interparticle distance between particles i and j and $\sigma_{ij}=(\sigma_{i}+\sigma_{j})/2$. The potential is truncated and shifted to zero at $r=2.5\sigma_{ij}$ (the small discontinuity of the force at the cutoff distance does not affect our results since its magnitude is very small compared to the thermal force). The motion of the particles is described by the Langevin equation: $$\label{eqn:Langevin} m\frac{d^2r_i}{dt^2}+\zeta\frac{dr_i}{dt}=-\frac{\partial V}{\partial r_i}+f_i$$ in which we accounted for non-hydrodynamic interactions between the particles, random thermal forces and friction against the solvent. Here $\zeta$ is the friction coefficient which we assumed to be the same for all particles independent of their size (strictly speaking, the Stokes friction coefficient increases linearly with particle radius, but since friction against the solvent is negligible compared to that due to interparticle interactions, this assumption does not significantly affect our results), V the sum of all the pair potentials $V_{ij}$ and $f_i$ a random force with zero mean and second moment proportional to T$\zeta$ (the temperature $T$ is given in energy units, with Boltzmann constant $k_B=1$). All physical quantities are expressed in LJ reduced units and the simulation timestep is 0.005$\tau_{LJ}$ where $\tau_{LJ} = (m\sigma^2/\epsilon)^{1/2}$ (in the following we take $\sigma=\epsilon=\tau_{LJ}=1$). The friction coefficient is taken to be $\zeta=0.02$ which corresponds to viscous damping time $\tau_d=1/\zeta=50\tau_{LJ}$ that determines the characteristic transition time from inertial to overdamped motion (due to collisions with molecules of the implicit “solvent”). 
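A minimal Python sketch (ours; the simulations themselves use LAMMPS) of the truncated and shifted pair potential of eq. (\[eqn:LJ\]):

```python
import numpy as np

def lj_truncated_shifted(r, eps_ij, sig_ij, rc_factor=2.5):
    """LJ potential of eq. (LJ), shifted to zero at r_c = 2.5 sigma_ij."""
    rc = rc_factor * sig_ij
    def lj(x):
        s6 = (sig_ij / x) ** 6
        return 4.0 * eps_ij * (s6 * s6 - s6)
    return np.where(r < rc, lj(r) - lj(rc), 0.0)

# Big-small pair: sigma_ij = (sigma_b + sigma_s)/2, evaluated with eps_ij = 2
sig_bs = 0.5 * (1.3 + 0.87)
r = np.array([2.0 ** (1.0 / 6.0) * sig_bs,   # position of the minimum
              2.5 * sig_bs,                  # cutoff: potential vanishes here
              3.0])                          # beyond the cutoff
print(lj_truncated_shifted(r, 2.0, sig_bs))
# depth close to -eps_ij at the minimum (slightly shallower due to the shift),
# exactly zero at and beyond the cutoff
```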
In all the models studied in this work we consider a mixture of two particle sizes (effective diameters), $\sigma_b=1.3$ and $\sigma_s=0.87$ ($b$ stands for big and $s$ stands for small). We verified that taking the same mass ($m_b=m_s=1$) or the same mass density ($m_b=2.25$, $m_s=1.00$) of big and small particles does not affect the statistical properties of our results (not shown) and took $m_b=m_s=1$. The numbers of particles are chosen such that each type of particle has the same surface fraction; this corresponds to $N_{b}=745$ and $N_{s}=1677$. The total surface fraction is $\phi=0.66$ and therefore the partial surface fraction of each component in the mixture is $\phi_{b}=\phi_{s}=\phi/2=0.33$. The difference between the models is in the assignment of the interaction parameters. In our reference system, model R, particles of both sizes have the same interaction parameter independent of their size, $\epsilon_{bb}=\epsilon_{ss}=2$. In model A big particles have a higher interaction parameter than small particles, $\epsilon_{bb}=3$, $\epsilon_{ss}=1$. In model B big particles have lower interaction parameters than small ones, $\epsilon_{bb}=1$, $\epsilon_{ss}=3$. The values are chosen so that the unweighted average of $bb$ and $ss$ interactions is the same (2) in all the models. The mixing parameter $\epsilon_{mix}=\epsilon_{bs}$ that controls the interaction between big and small particles takes the same value (in the range $1\leq\epsilon_{mix}\leq3$) in the three models. Note that large values of $\epsilon_{mix}$ are expected to promote mixing and, conversely, small values of this parameter promote demixing. 
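The stated composition can be checked directly; this small Python sketch (ours) reproduces the equal surface coverage of the two species and the total surface fraction $\phi \approx 0.66$:

```python
import math

sigma_b, sigma_s = 1.3, 0.87       # effective diameters
N_b, N_s, L = 745, 1677, 55.0      # particle numbers and box side

area_b = N_b * math.pi * (sigma_b / 2.0) ** 2
area_s = N_s * math.pi * (sigma_s / 2.0) ** 2
phi_b, phi_s = area_b / L**2, area_s / L**2

# Each species covers about 0.33 of the box; the total is about 0.66
print(round(phi_b, 3), round(phi_s, 3), round(phi_b + phi_s, 2))
```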
The models were simulated as follows: first, particles were placed on a square lattice and the system was equilibrated at high temperature ($T=10$) compared to the largest value ($3$) of the interaction parameter for a sufficiently long time (2000$\tau_{LJ}$) in order to ensure that the fluid is completely disordered (particle positions are randomized). Then, the fluid was cooled in two steps: (1) at rate $10^{-4} 1/\tau_{LJ}$ from $T=10$ to $T=2$, (2) at rate $10^{-6} 1/\tau_{LJ}$ from $T=2$ to $T=0$ and measurements were performed at intermediate temperatures (this 2-step cooling was used in order to ensure structural relaxation of the system in the range $T=2$ to $T=0$). Results ======= In order to study the behavior of the systems for different mixing parameters in the range $1\leq\epsilon_{mix}\leq3$, we begin with the upper limit of this range, $\epsilon_{mix}=3$, for which strong mixing of big and small particles is expected. ![\[fig:Snapshots2S\] Snapshots of systems R, A and B with $\epsilon_{mix}=3.0, 2.5, 2.0$ and $1.0$, at T=0, produced by 2-step cooling. The potential energy per particle is presented on each snapshot.](Snapshots2S){width="40.00000%"} The R, A and B systems with $\epsilon_{mix}=3$ undergo a transition from a homogeneous (on length scales larger than molecular size) fluid mixture (at T=10) to a homogeneous glass (at T=0) that contains large voids (see the top panels in Fig. \[fig:Snapshots2S\] and movies 1(a), 1(b) and 1(c) in SM). In agreement with the expectation that the shape and composition of the interface between the condensed and the gas (void) phases are determined by surface tension minimization, in systems A and B the interfaces are enriched in weakly interacting components, i.e. small particles in the A system and big ones in the B system. 
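For reference, the two-step cooling protocol described above can be written as a simple temperature schedule (a sketch of ours, with time in units of $\tau_{LJ}$; this is not LAMMPS input syntax):

```python
def temperature(t):
    """Two-step linear cooling: 10 -> 2 at 1e-4/tau_LJ, then 2 -> 0 at 1e-6/tau_LJ."""
    t1 = (10.0 - 2.0) / 1e-4       # step 1 lasts 8e4 tau_LJ
    t2 = (2.0 - 0.0) / 1e-6        # step 2 lasts 2e6 tau_LJ
    if t < t1:
        return 10.0 - 1e-4 * t
    if t < t1 + t2:
        return 2.0 - 1e-6 * (t - t1)
    return 0.0

print(temperature(0.0), temperature(8e4), temperature(8e4 + 2e6))   # 10.0 2.0 0.0
```

The much slower second step dominates the run length, consistent with the need for structural relaxation between $T=2$ and $T=0$.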
Visual inspection of snapshots of the three systems taken during the process of cooling shows that the transition from a homogeneous fluid to a homogeneous glass can be characterized by the appearance of large voids that must accompany the increase of the density as the system solidifies at constant volume (not shown). Based on this criterion and on the observed change of slope in the potential energy vs. temperature curves in Fig. \[fig:PotentialCurves\] (for $\epsilon_{mix}=3$), the glass transition temperature falls in the range $0.5-0.6$ in the three systems. ![\[fig:PotentialCurves\] Plots of the potential energy per particle as a function of temperature, for systems R, A and B, obtained by 2-step cooling, from T=2 to T=0 at $10^{-6} 1/\tau_{LJ}$.](PotentialCurves){width="40.00000%"} ![\[fig:PotentialComponents\] The potential energy components (mixed interactions and single-size interactions) in systems R, A and B as a function of $\epsilon_{mix}$ at T=0. Absolute average values per particle.](PotentialComponents){width="50.00000%"} Inspection of other T=0 snapshots in Fig. \[fig:Snapshots2S\] shows that as $\epsilon_{mix}$ is decreased below $3$, a gradual transition from a homogeneous glass to a solid phase in which big and small particles are progressively segregated is observed in the three systems. Demixing via formation of nanocrystals of small particles embedded in a percolating disordered network of big particles is observed at $\epsilon_{mix}=2.5$ in system B and at $\epsilon_{mix}=2$ in system R (see Fig. \[fig:Snapshots2S\]). At yet lower values of the mixing parameter, this partially-ordered state is replaced by a completely ordered mosaic of big and small particle crystals (see snapshots corresponding to $\epsilon_{mix}=2$, system B and to $\epsilon_{mix}=1$, system R). The A system remains a homogeneous glass at $\epsilon_{mix}=2$ and forms a mosaic crystal at $\epsilon_{mix}=1$. 
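Locating the change of slope in the $E_p(T)$ curves of Fig. \[fig:PotentialCurves\] can be automated with a two-segment linear fit that scans the breakpoint and minimizes the total squared residual. A sketch on synthetic data with a kink placed at $T=0.55$ (illustrative only; in practice the measured curves would be used):

```python
import numpy as np

def kink_temperature(T, E):
    """Scan all breakpoints; fit a straight line to each side and return
    the breakpoint temperature minimizing the total squared residual."""
    best_r, best_T = np.inf, None
    for k in range(2, len(T) - 2):             # at least 2 points per side
        r = 0.0
        for Ts, Es in ((T[:k], E[:k]), (T[k:], E[k:])):
            coef = np.polyfit(Ts, Es, 1)
            r += np.sum((np.polyval(coef, Ts) - Es) ** 2)
        if r < best_r:
            best_r, best_T = r, T[k]
    return best_T

# Synthetic energy curve with a slope change at T = 0.55 (illustrative only)
T = np.linspace(0.0, 2.0, 81)
E = np.where(T < 0.55, -3.0 + 0.5 * T, -2.725 + 2.0 * (T - 0.55))
Tg = kink_temperature(T, E)
```

On noisy data the same scan brackets the transition rather than pinpointing it, which is consistent with quoting a range such as $0.5-0.6$.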
In order to understand the low temperature behavior of the three systems we note that since entropy plays no role at $T=0$, the above structures minimize the potential energy of the system. Thus, for sufficiently high values of $\epsilon_{mix}$, the system minimizes its energy by maximizing the number of contacts between big and small particles; conversely, for small values of $\epsilon_{mix}$ energy minimization favors maximizing the number of big-big and small-small particle contacts. A more quantitative demonstration of this effect is shown in Fig. \[fig:PotentialComponents\] where we plot the interfacial ($e_{bs}$) and pure system ($e_{bb}+e_{ss}$) contributions to the total potential energy ($e_p=e_{bs}+ e_{bb}+e_{ss}$) as a function of $\epsilon_{mix}$, for each of the 3 systems (the $e_p$ value of each configuration is indicated on the snapshots in Fig. \[fig:Snapshots2S\]). As expected, $|e_{bs}|$ decreases and $|e_{bb}+e_{ss}|$ increases as $\epsilon_{mix}$ is lowered and the system changes from homogeneous glass to mosaic crystal. The transition between the two states can be defined as the value of the mixing parameter at which the two curves cross; this corresponds to $\epsilon_{mix}$ slightly higher than $2$ for the R system, slightly lower than $2$ for the A system, and to $\epsilon_{mix}\approx 2.5$ for the B system. This explains the sequence of transitions in the different systems observed in Fig. \[fig:Snapshots2S\]. Note that the higher value of $|e_{bb}+e_{ss}|$ in system B compared to system A results from our choice of equal surface fractions of big and small particles and from the definition of systems A and B (consequently, the number of particles with the higher value of the interaction parameter is larger in the B system than in the A system). 
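The crossing point of the $|e_{bs}|$ and $|e_{bb}+e_{ss}|$ curves can be read off by linear interpolation between the sampled $\epsilon_{mix}$ values. A generic sketch (the arrays in the example are illustrative, not the measured energies):

```python
def crossing(x, f, g):
    """Return the abscissa where curves f and g (sampled on the grid x)
    first cross, using linear interpolation between neighbouring points."""
    for i in range(len(x) - 1):
        d0, d1 = f[i] - g[i], f[i + 1] - g[i + 1]
        if d0 == 0:
            return x[i]
        if d0 * d1 < 0:                 # sign change between x[i] and x[i+1]
            return x[i] + (x[i + 1] - x[i]) * d0 / (d0 - d1)
    return None

# Illustrative only (not the measured curves): f falls, g rises, cross at 2.0
x_c = crossing([1.0, 3.0], [3.0, 1.0], [1.0, 3.0])
```

With the measured $|e_{bs}(\epsilon_{mix})|$ and $|e_{bb}+e_{ss}|(\epsilon_{mix})$ arrays this yields the transition values quoted in the text.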
We proceed to examine the $\epsilon_{mix}=1$ case where in all systems there is a gradual demixing process from a homogeneous liquid into clusters of small and big particles that begins already during the first step of cooling at rate $10^{-4}$ to $T=2$ (not shown). As temperature is further decreased (see movies 2(a), 2(b) and 2(c) in SM), these clusters grow in size and eventually only two segregated large clusters of big and of small particles remain. At yet lower temperature, the three systems undergo partial freezing in which one of the components freezes while the other component remains in the liquid phase and freezes upon further cooling (complete freezing). As shown in Fig. \[fig:SnapshotsMix=1\], in systems A and B partial freezing (freezing of big and of small particles, respectively) takes place at $T^*=1.15$, which is the freezing temperature of a one-component system with $\epsilon_{ij}=3$. Complete freezing of both components (freezing of small and of big particles in systems A and B, respectively) occurs at $T^*=0.35$ which is the freezing temperature of a one-component system with $\epsilon_{ij}=1$ [@Azizi2019]. ![\[fig:SnapshotsMix=1\] Snapshots of systems R, A and B with $\epsilon_{mix}=1$ at their partial (upper panels) and complete (lower panels) freezing temperatures. The transition temperatures are shown on the snapshots.](SnapshotsMix=1){width="40.00000%"} While the origin of partial freezing in systems A and B is clear (the particles with the higher value of the interaction parameter freeze at a higher temperature), the observation that big particles freeze before small ones in the R system is surprising, since both types of particles have $\epsilon_{ij}=2$. A possible explanation may be related to the fact that system R contains more than twice as many small particles as big ones. 
Since entropy is proportional to the number of particles, we expect the entropy of the smaller particles to dominate and to favor condensation of the large particles, since the resulting decrease of the entropy of the large particles is more than compensated by the increase of free space and, therefore, of the entropy of the small ones. Similar entropic mechanisms are responsible for the appearance of depletion forces between colloids in colloid-polymer mixtures [@Oosawa1954] and were shown to lead to phase separation in lattice models of hard-core binary mixtures of small and large particles [@Frenkel1992; @Eldridge1995]. Complete freezing in the R system takes place at the freezing temperature of a one-component system of small particles with $\epsilon_{ij}=2$ ($T^*_b=0.75$) [@Azizi2019]. ![\[fig:SnapshotsMix=1Fast\] Snapshots of systems R, A and B with $\epsilon_{mix}=1$ at T=0 produced by fast cooling ($10^{-3} 1/\tau_{LJ}$). The potential energy per particle is presented on each snapshot.](SnapshotsMix=1Fast){width="40.00000%"} In addition, we checked the dependence of the low temperature configurations of the three systems on the cooling method by comparing our 2-step cooling to fast single-step cooling from $T=10$ to $T=0$ (at a rate $10^{-3} 1/\tau_{LJ}$). As shown in Fig. \[fig:SnapshotsMix=1Fast\], fast cooling results in low temperature configurations with ramified interfaces between big and small particle crystals. The crystals contain defects, i.e., isolated big particles or nanocrystals of big particles inside crystals of small particles, and vice versa. This concurs with the expectation that relaxation on length scales comparable to the size of the system is suppressed during fast cooling and that the systems become kinetically trapped in high energy states (compare the potential energy values in Fig. \[fig:SnapshotsMix=1Fast\] to those in Fig. \[fig:Snapshots2S\]). 
Discussion ========== In this work we used computer simulations to study dense 2d systems of particles with both size and energy polydispersity, using a simple model of a mixture of equal surface fractions of particles of two sizes, big and small, and three interaction parameters that characterize the strength of big-big, small-small and big-small interactions. We considered three representative cases: system A in which the interaction between big particles is stronger than between small particles, system B in which this situation is reversed, and a reference (R) system in which there is only size but no energy polydispersity. Our goal was to find out (a) what types of phases and structures appear in those systems as they are cooled from a high temperature homogeneous fluid state down to low temperatures, and (b) how these results depend on the mixing parameter (strength of interaction between big and small particles). In agreement with previous studies [@Kob1994; @Kob1995; @Kob2008], we found that at high values of the mixing parameter, the three systems remain homogeneous at all temperatures and undergo a direct liquid to glass transition. At small values of the mixing parameter, lowering the temperature resulted in segregation between the two components (big and small particles), first into two liquid phases, then into one solid and one liquid phase (partial freezing) and eventually into a mosaic crystal (complete freezing). Surprisingly, all the above-mentioned transitions, including partial freezing, take place not only in A and B systems where partial freezing is energy-driven (the component with larger interaction parameter freezes first), but also in system R where the mechanism is entropic. The transition between complete mixing and demixing behavior takes place at $\epsilon_{mix}\simeq 2$ in systems R and A and at $\epsilon_{mix}\simeq 2.5$ in system B. Finally, we would like to address some of the limitations of our work. 
The present study was carried out on a relatively small (periodic) system (2422 particles) whose size was chosen as a compromise between finite size and run time considerations. This choice of system size allowed us to do relatively short runs (10-15 hours) and to explore the qualitative features of the low temperature configurations and of the different phases of the three systems, for different values of the mixing parameter. We believe that even though our simulations were done in two dimensions, many of our qualitative results will carry over to three dimensions as well. Acknowledgments =============== We would like to thank Kulveer Singh for helpful discussions. This work was supported by grants from the Israel Science Foundation and from the Israeli Centers for Research Excellence program of the Planning and Budgeting Committee.
--- abstract: 'The effects of laser-induced prealignment on the deflection of paramagnetic molecules by inhomogeneous static magnetic field are studied. Depending on the relevant Hund’s coupling case of the molecule, two different effects were identified: either suppression of the deflection by laser pulses (Hund’s coupling case (a) molecules, such as $ClO$), or a dramatic reconstruction of the broad distribution of the scattering angles into several narrow peaks (for Hund’s coupling case (b) molecules, such as $O_2$ or $NH$). These findings are important for various applications using molecular guiding, focusing and trapping with the help of magnetic fields.' author: - 'E. Gershnabel' - 'M. Shapiro' - 'I.Sh. Averbukh' title: 'Stern-Gerlach deflection of field-free aligned paramagnetic molecules' --- Introduction {#Introduction_magnetic} ============ Manipulating the translational motion of atoms and molecules by means of inhomogeneous external fields has been studied intensively for many years. Since the pioneering work of Stern and Gerlach that demonstrated quantization of atomic trajectories in inhomogeneous magnetic field [@SG], the dynamics of many other systems has been studied both in electric and magnetic fields. An important milestone was, for instance, separation of molecules in different quantum states in order to create a maser, a molecular amplifier of photons [@Townes]. Nowadays, the physics of the deflection of atoms and molecules by inhomogeneous fields is as hot as ever, including studies focused on the motion in the static inhomogeneous electric [@McCarthy; @Benichou; @Loesch; @Antoine; @reduction] and magnetic [@McCarthy; @Kuebler; @even] fields, and even laser fields [@Stapelfeldt; @Zhao1; @Zhao2; @Purcell]. In the case of laser deflection, some novel applications in molecular optics have recently appeared, such as molecular lens [@Stapelfeldt; @Zhao1] and molecular prism [@Zhao2; @Purcell]. 
The interaction between a molecule and an external field depends upon the orientation of the molecule. The field-molecule interactions become intensity-dependent for strong enough fields due to the field-induced modification of the molecular angular motion [@Zon; @Friedrich]. It was lately shown that the intensity-dependent molecular polarizability-anisotropy provides means for tailoring the dipole force felt by molecules in the laser field [@Purcell]. More recently, a method for controlling the scattering of molecules in external fields by additional ultrashort laser pulses inducing field-free molecular alignment was suggested [@gershnabel1; @gershnabel5; @gershnabel4]. In this work we return to the basics, and study the prospects of the ultrafast laser control of molecular deflection in the Stern-Gerlach (SG) arrangement. It was shown in the past that molecular scattering in magnetic fields is affected by rotational alignment caused, for example, by collisions in seeded supersonic beams [@Aqullanti]. Here we demonstrate that this process can be efficiently and flexibly controlled by novel ultrafast optical tools allowing for preshaping the molecular angular distribution before the molecules enter the SG apparatus. This can be done with the help of numerous recent techniques for laser molecular alignment, which use single or multiple short laser pulses (transform limited, or shaped) to temporarily align molecular axes along certain directions (for introduction to the rich physics of laser molecular alignment, see, e.g. [@Zon; @Friedrich; @Stapelfeldt2; @Kumarappan; @Stolow; @rich]). Short laser pulses excite rotational wavepackets, which results in a considerable transient molecular alignment after the laser pulse is over, i.e., at field-free conditions. In the present paper, we will consider only molecules with a permanent magnetic dipole moment, i.e., open shell molecules. 
The open shell molecules are classified into Hund’s coupling cases according to the coupling of their angular momenta [@Townes; @Carington]. In the Hund’s coupling case (a), the angular momentum of electrons and their spin are coupled to the internuclear axis, while in the Hund’s coupling case (b), the electronic spin and internuclear axis are not strongly coupled. We will consider magnetic deflection of different paramagnetic molecules subject to a short prealigning laser pulse. In the Hund’s coupling case (a), the magnetic moment is coupled to the internuclear axis, and by rotating the molecule, one rotates the magnetic moment as well. This substantially reduces the Zeeman effect and effectively turns off the interaction between the molecule and the magnetic field (like a rotating electric dipole that becomes decoupled from a static electric field [@gershnabel5]). In the Hund’s coupling case (b), the magnetic moment is barely coupled to the internuclear axis. However, laser-induced molecular rotation creates an effective magnetic field which adds to the SG field and modifies the deflection dynamics. As a result, as we show below, a broad and sparse distribution of the scattering angles of deflected molecules collapses into several narrow peaks with controllable positions. The paper is organized as follows. In Sec. \[General Theory\] we outline the general theoretical framework: first, we briefly discuss the SG deflection mechanism (Sec. \[Stern-Gerlach deflection\]), and provide several needed facts on the laser-induced field-free alignment in Sec. \[Prealignment\]. Then, the interaction details for the Hund’s coupling case (a) and Hund’s coupling case (b) are given in Secs. \[Hund case A\] and \[Hund case B\], respectively. Further discussion of the Hund’s coupling case (b) and hyperfine structure appears in the Appendix in Sec. \[NH HFS\]. In Sec. \[Applications to Molecules\] we apply the above theoretical tools to the laser-controlled magnetic scattering of $ClO$ (Sec. 
\[ClO\]), $O_2$ (Sec. \[$O_2$\]) and $NH$ (Sec. \[NH\]) molecules. Discussion and conclusions are presented in Secs. \[Discussions\] and \[Summary\], respectively. General Theory {#General Theory} ============== Stern-Gerlach deflection {#Stern-Gerlach deflection} ------------------------ Once a beam of molecules enters a SG magnetic field, the initial eigenstates of the system adiabatically become $|\Psi_i(B)\rangle$: $$|\Psi_i(B)\rangle=\sum_{j}a_{j}(B)|\Psi_j\rangle, \label{Hund case b diagonalized}$$ where the coefficients $a_{j}(B)$ depend on the magnetic field B as a parameter, and $|\Psi_j\rangle$ is a basis for the free molecule. In this work we consider the magnetic field to be: $\textbf{B}=B(z)\hat{z}$, i.e., pointing in the $z$ direction, with a practically constant gradient along the $z$ direction in the relevant interaction region. The force acting on the molecule is given by: $$Force=-\nabla E=-\frac{\partial E}{ \partial B}\frac{\partial B}{ \partial z}, \label{Force}$$ where $E$ is the energy of the molecule. The derivative ${\partial E}/{ \partial B}$ may be obtained by means of the Hellmann-Feynman theorem; that is, for a system in its $i$-th energy eigenstate, the force is proportional to: $$\begin{aligned} \frac{\partial E_i}{\partial B}&=&\langle\Psi_i(B)|\frac{\partial H}{\partial B}|\Psi_i(B)\rangle\nonumber\\ &=&\langle\Psi_i(B)|\frac{\partial H_z}{\partial B}|\Psi_i(B)\rangle, \label{Hellman_Feyman}\end{aligned}$$ where $H_z$ is the Zeeman term of the Hamiltonian. Since $H_z$ is proportional to $B$, we conclude that a molecule in an energy eigenstate will be deflected by a force that is proportional to: $$\begin{aligned} {\cal A}_{i}\equiv\langle\Psi_i(B)|\frac{H_z}{B}|\Psi_i(B)\rangle.\label{A_force}\end{aligned}$$ Eq. \[A\_force\] will allow us to consider the distribution of forces. 
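The Hellmann-Feynman step above is easy to verify numerically for any Hamiltonian linear in the field, $H(B)=H_0+B\,H_z^{(1)}$: the slope of each eigenvalue must equal the expectation value of $H_z^{(1)}$ in the corresponding eigenstate. A sketch with random Hermitian matrices standing in for the molecular Hamiltonian (sizes and seed are arbitrary, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n, B, h = 6, 1.0, 1e-5
H0 = random_hermitian(n)         # field-free part (toy stand-in)
Hz1 = random_hermitian(n)        # dH/dB: the Zeeman term divided by B

# Hellmann-Feynman: dE_i/dB = <Psi_i(B)| Hz1 |Psi_i(B)>
E, V = np.linalg.eigh(H0 + B * Hz1)
hf_slope = np.real(np.einsum('ji,jk,ki->i', V.conj(), Hz1, V))

# Central finite difference of the (ascending-sorted) eigenvalues
Ep = np.linalg.eigvalsh(H0 + (B + h) * Hz1)
Em = np.linalg.eigvalsh(H0 + (B - h) * Hz1)
fd_slope = (Ep - Em) / (2 * h)
```

The two slopes agree to finite-difference accuracy, which is exactly the quantity ${\cal A}_i$ (times $\partial B/\partial z$) that sets the deflecting force.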
In order to take into account the absolute amount of deflection, though, one has to consider the field gradient as well (Eq. \[Force\]). For more details, see, e.g. [@McCarthy]. Laser-induced field-free alignment {#Prealignment} ---------------------------------- If the molecules are subject to a strong linearly polarized femtosecond laser pulse, the corresponding molecule-laser interaction potential is given by: $$H_{ML}=-\frac{1}{4}\epsilon^2\left [ (\alpha_{\parallel}-\alpha_{\perp}) \cos^2\theta+\alpha_{\perp}\right ],\label{pre alignment interaction}$$ where $\theta$ is the angle between the molecular axis and the polarization direction of the pulse, $\alpha_{\parallel},\alpha_{\perp}$ are the parallel and perpendicular polarizability components, and $\epsilon$ is the femtosecond pulse envelope. Since the aligning pulse is short compared to the typical periods of molecular rotation, it may be considered as a delta-pulse. In the impulsive approximation, one obtains the following relationship between the wavefunction before and after the pulse applied at $t=0$ (see e.g. [@Gershnabel3], and references therein): $$\Psi(t=0^+)=\exp{[iP\cos^2\theta]}\Psi(t=0^-),\label{prealignment operator}$$ where the kick strength $P$ is given by: $$P=(1/4\hbar)\cdot (\alpha_{\parallel}-\alpha_{\perp})\int_{-\infty}^{\infty}\epsilon^2(t)dt.\label{kick strength}$$ We assume vertical polarization of the pulse (along the $z$-axis, and parallel to the SG magnetic field). Physically, the dimensionless kick strength $P$ is equal to the typical amount of angular momentum (in the units of $\hbar$) supplied by the pulse to the molecule. 
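For a structureless linear rotor, the kick operator of Eq. \[prealignment operator\] can be evaluated explicitly in a truncated $|l,m=0\rangle$ basis: building the tridiagonal $\cos\theta$ matrix and squaring it gives the $\cos^2\theta$ matrix without any Wigner-symbol algebra, and a direct matrix exponential can be cross-checked against step-by-step integration of $dc/d\lambda=iP\cos^2\theta\,c$ from $\lambda=0$ to $1$. A sketch (the cutoff $l_{\max}=20$, the kick $P=5$ and the step size are our choices):

```python
import numpy as np
from scipy.linalg import expm

lmax, P = 20, 5.0
l = np.arange(lmax)
a = (l + 1) / np.sqrt((2 * l + 1) * (2 * l + 3))   # <l+1,0|cos(theta)|l,0>

C = np.zeros((lmax + 1, lmax + 1))                 # tridiagonal cos(theta)
C[l, l + 1] = a
C[l + 1, l] = a
cos2 = C @ C                                       # cos^2(theta), m = 0 block

c = np.zeros(lmax + 1, dtype=complex)
c[0] = 1.0                                         # start in |l=0>

# Integrate dc/dlam = i P cos2 c from lam = 0 to 1 with classical RK4
rhs = lambda v: 1j * P * (cos2 @ v)
dlam = 1e-3
for _ in range(1000):
    k1 = rhs(c)
    k2 = rhs(c + 0.5 * dlam * k1)
    k3 = rhs(c + 0.5 * dlam * k2)
    k4 = rhs(c + dlam * k3)
    c = c + dlam * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Reference: direct matrix exponential of the kick operator
e0 = np.zeros(lmax + 1)
e0[0] = 1.0
c_ref = expm(1j * P * cos2) @ e0
```

Both routes agree to numerical accuracy and conserve the norm; the populations $|c_l|^2$ extend up to $l$ of order $P$, consistent with interpreting $P$ as the typical angular momentum transferred by the pulse.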
In order to find $\Psi(t=0^+)$ for any initial state, we introduce an artificial parameter $\xi$ that will be assigned the value $\xi=1$ at the end of the calculations, and define: $$\begin{aligned} \Psi_{\xi}=\exp{\left [ (iP\cos^2\theta)\xi \right ]}\Psi(t=0^-) =\sum_{i} c_{i}(\xi)|\Psi_i\rangle.\label{artificial}\end{aligned}$$ By differentiating both sides of Eq. \[artificial\] with respect to $\xi$, we obtain the following set of differential equations for the coefficients $c_{i}$: $$\dot{c}_{i'}=iP\sum_{i}c_{i}\langle \Psi_{i'}|\cos^2\theta|\Psi_i\rangle, \label{Pre-Alignment Coefficients}$$ where $\dot{c}=dc/d\xi$. The matrix elements in Eq. \[Pre-Alignment Coefficients\] are easily evaluated by means of the relationship: $\cos^2\theta=(2D^2_{00}+1)/3$, where $D^k_{pq}$ is the rotational matrix. Since $\Psi_{\xi=0}=\Psi(t=0^-)$ and $\Psi_{\xi=1}=\Psi(t=0^+)$ (see Eq. \[artificial\]), we solve this set of equations numerically from $\xi=0$ to $\xi=1$, and find $\Psi(t=0^+)$. It turns out that the population of rotational levels of the kicked molecules has a maximum at around $\hbar P$. Finally, we derive the distribution of forces acting on a thermal ensemble of molecules pre-aligned by a laser pulse. For this, we start from a single eigenstate of a free system, apply an alignment pulse in the $z$ direction, and then adiabatically increase the magnetic field (in order to imitate a smooth process of the molecular beam injection into the SG deflector). 
The distribution will be proportional to: $$\begin{aligned} f({\cal A})&=&\sum_{i,j}\frac{\exp\left(-\frac{E_{i}}{k_B T}\right)}{Q_{rot}}\nonumber\\ &\times&|c_{j}|^2\delta_{{\cal A},{\cal A}_{j}}, \label{distribution of forces}\end{aligned}$$ where $k_B$ is the Boltzmann’s constant, $Q_{rot}$ is the partition function, $i$ denotes the quantum numbers associated with the initial eigenstates of free molecules, $c_{j}$ denotes the coefficients of the free eigenstates that were excited by the laser pulse applied to the initial eigenstate $i$, and ${\cal A}_{j}$ are the associated matrix elements given in Eq. \[A\_force\] (proportional to the force), between the states adiabatically correlated with the free states $j$. Hund’s coupling case (a) {#Hund case A} ------------------------ In this subsection we concentrate on the $^{35}ClO$ molecule, which presents a good example for the Hund’s coupling case (a). Denoted as $^2\Pi$ in its electronic ground state, it has a nuclear spin $I=3/2$, and it was studied well in the past [@Carrington1; @Kakar; @Brian]. In the Hund’s coupling case (a), the electronic angular momentum and spin are strongly coupled to the internuclear axis, and in the case of $ClO$, its effective Hamiltonian is given by [@Carington]: $$H_{eff}=H_{rso}+H_{hf}+H_Q,\label{ClO molecule Hamiltonian}$$ where $H_{rso}$ is the rotation and spin-orbit coupling, $H_{hf}$ is the magnetic hyperfine interaction, and $H_Q$ is the electric quadrupole interaction. Here $$H_{rso}=B_r\left\{T^1(\textbf{J})-T^1(\textbf{L})-T^1(\textbf{S})\right\}^2+AT^1(\textbf{L})\cdot T^1(\textbf{S}),\label{rso Hamiltonian}$$ where $T^1()$ is a spherical tensor of rank $1$, $B_r$ is the rotational constant in the lowest vibrational level, and $A$ is the spin-orbit coupling constant. $\textbf{L}$ and $\textbf{S}$ are the electronic angular momentum and spin operators, respectively. 
The total angular momentum is $\textbf{J}=\textbf{N}+\textbf{L}+\textbf{S}$, where $\textbf{N}$ is the nuclei angular momentum operator. The Hund’s coupling case (a) basis looks like this: $$|\eta,\Lambda;S,\Sigma;J,\Omega,I,F,M_F\rangle, \label{Hund case A basis}$$ where $\eta$ represents some additional electronic and vibrational quantum numbers, $\Sigma$ and $\Lambda$ are the projections of the electronic spin and angular momentum on the internuclear axis, respectively. For $ClO$ molecule, $S=1/2$, so that $\Sigma=\pm 1/2$ and $\Lambda=1$. The quantity $\Omega$ is $\Omega\equiv\Sigma+\Lambda$ ($\Omega=3/2,1/2$, the $3/2$-state has a lower energy), and $\textbf{F}=\textbf{J}+\textbf{I}$. The $H_{hf}$ Hamiltonian is given by: $$\begin{aligned} &&H_{hf}=H_{IL}+H_F+H_{dip}\nonumber\\ &=&aT^1(\textbf{I})\cdot T^1(\textbf{L})+b_FT^1(\textbf{I})\cdot T^1(\textbf{S})\nonumber\\ &-&\sqrt{10}g_S\mu_B g_N \mu_N (\mu_0 /4\pi)T^1(\textbf{I})\cdot T^1(\textbf{S},\textbf{C}^2).\label{H_hf_ClO}\end{aligned}$$ The first term represents the orbital interaction, the second one accounts for the Fermi contact interaction, and the third term describes the dipolar hyperfine interaction. Here $a$ and $b_F$ are constants, $g_N$ and $g_S$ are the nuclear and electron $g$ factors, respectively, $\mu_N$ and $\mu_B$ are the nuclear and electron Bohr magnetons, respectively, and $\mu_0$ is the vacuum permeability. All the matrix elements for the Hund’s coupling case (a), including those for the quadrupole interaction, are given in [@Carington]. The $ClO$ constants were taken from [@Carington; @Kakar; @Brian]. When considering the Zeeman Hamiltonian, we will concentrate only on the two major terms related with electronic angular momentum and spin: $$H_Z=\mu_B T^1(\textbf{B})\cdot T^1(\textbf{L})+g_S\mu_B T^1(\textbf{B})\cdot T^1(\textbf{S}).\label{ClO Zeeman}$$ The corresponding matrix elements (see Eq. \[A\_force\]) are given in [@Carington]. 
In order to consider the effect of laser-induced alignment (see Sec. \[Prealignment\], Eq. \[Pre-Alignment Coefficients\]), we have derived the following matrix elements: $$\begin{aligned} &\langle& \eta,\Lambda; S,\Sigma;J,\Omega,I,F,M_F|D^{2*}_{00}|\eta,\Lambda;S,\Sigma;J',\Omega,I,F',M_F\rangle\nonumber\\ &=&(-1)^{F-M_F}\left( \begin{array}{ccc} F & 2 & F' \\ -M_F & 0 & M_F \end{array} \right)(-1)^{F'+J+I+2}\nonumber\\ &\times& \sqrt{(2F'+1)(2F+1)}\left\{ \begin{array}{ccc} J' & F' & I \\ F & J & 2 \end{array} \right\}(-1)^{J-\Omega}\nonumber\\ &\times&\left( \begin{array}{ccc} J & 2 & J' \\ -\Omega & 0 & \Omega \end{array} \right)\sqrt{(2J+1)(2J'+1)}.\nonumber\\\label{ClOPulse}\end{aligned}$$ Hund’s coupling case (b) {#Hund case B} ------------------------ We will continue by discussing the Hund’s coupling case (b), and consider the oxygen molecule, in its predominant isotopomer $^{16}O^{16}O$. This molecule is probably the most important species among $^3\Sigma$ ground state molecules, and it was one of the first molecules studied in detail [@Kuebler; @Tinkham]. It is a homonuclear diatomic molecule, where only odd N’s appear because of the Pauli principle and symmetry. This molecule is described well by the Hund’s coupling case (b), with the effective Hamiltonian [@Carington]: $$H_{eff}=H_{rot}+H_{ss}+H_{sr}.\label{Effective_Hamiltonian}$$ Let us describe separately each term in Eq. \[Effective\_Hamiltonian\]. Here $$H_{rot}=B_r\textbf{N}^2-D\textbf{N}^4,\label{Rotational}$$ is the energy of the nuclei rotation, where $D$ is the centrifugal distortion coefficient. In addition, $$H_{ss}=-g_s^2\mu_B^2(\mu_0/4\pi)\sqrt{6}T^2(\textbf{C})\cdot T^2(\textbf{S}_1,\textbf{S}_2),\label{SpinSpin}$$ is the electronic spin-spin dipolar interaction. $T^2()$ is a spherical tensor of rank $2$. $\textbf{S}_1,\textbf{S}_2$ are electronic spin operators. 
$T^2_q(\textbf{C})=\langle C^2_q(\theta,\phi)R^{-3}\rangle$, where $C^2_q$ is the spherical harmonic, and $$H_{sr}=\gamma T^1(\textbf{N})\cdot T^1(\textbf{S}),\label{SpinRotation}$$ is the electronic-spin rotation interaction. The Hund’s coupling case (b) basis looks like this: $$|\eta,\Lambda;N,\Lambda;N,S,J,M_J\rangle. \label{Hund case b}$$ Here $N$ is the nuclei rotational quantum number, $\Lambda=0$ in our case, $\textbf{S}$ is the electronic spin, which is $1$ in our case, $\textbf{J}=\textbf{N}+\textbf{S}$, and $M_J$ is the projection of $\textbf{J}$ onto a fixed $z$-direction in space. All the needed matrix elements and constants are given in [@Carington]. The Zeeman Hamiltonian is given by: $$H_Z=g_S \mu_B T^1(\textbf{B})\cdot T^1(\textbf{S}).\label{Zeeman Oxygen}$$ Its matrix elements (Eq. \[A\_force\]) are given by: $$\begin{aligned} &&d\,\langle \eta ,\Lambda;N,\Lambda;N,S,J,M_J|T^1_0(\textbf{S})|\eta,\Lambda;N',\Lambda;N',S,J',M_J\rangle\nonumber\\ &=& d (-1)^{J-M_J}\left( \begin{array}{ccc} J & 1 & J' \\ -M_J & 0 & M_J \end{array} \right) \delta_{N,N'}(-1)^{J'+S+1+N}\nonumber\\ &\times&\sqrt{(2J'+1)(2J+1)}\left\{ \begin{array}{ccc} S & J' & N \\ J & S & 1 \end{array} \right\}\nonumber\\ &\times& \sqrt{S(S+1)(2S+1)}, \label{Zeeman Deivation}\end{aligned}$$ where $d\equiv g_S \mu_B$. Finally, in order to account for the laser-induced prealignment, we derived the following relation (to be used in Eq. \[Pre-Alignment Coefficients\]): $$\begin{aligned} &&\langle \eta ,\Lambda;N,\Lambda;N,S,J,M_J|D^{2*}_{00}|\eta,\Lambda;N',\Lambda;N',S',J',M_J\rangle\nonumber\\ &=& (-1)^{J-M_J}\left( \begin{array}{ccc} J & 2 & J' \\ -M_J & 0 & M_J \end{array} \right) \delta_{S,S'}(-1)^{J'+S+2+N}\nonumber\\ &\times&\sqrt{(2J'+1)(2J+1)}\left\{ \begin{array}{ccc} N' & J' & S \\ J & N & 2 \end{array} \right\}(-1)^{N-\Lambda}\nonumber\\ &\times&\left( \begin{array}{ccc} N & 2 & N' \\ -\Lambda & 0 & \Lambda \end{array} \right)\sqrt{(2N+1)(2N'+1)}. 
\label{COS2Derivation}\end{aligned}$$ In the case of the oxygen molecule, there is a relatively strong effect of the spin-spin interaction, which complicates our analysis. Therefore, we have also chosen an additional $^3\Sigma$ molecule, $^{14}NH$ for our study. For this molecule the ratio between the spin-spin and spin-rotation interactions is reduced (compared to the $O_2$ case). This makes $NH$ a simpler candidate to test our rotational effects. The $NH$ molecule was thoroughly studied in the past [@Wayne; @Klaus; @Jesus; @Lewen], and its effective Hamiltonian is: $$H_{eff}=H_{rot}+H_{ss}+H_{sr}+H_{HFS}, \label{Hamiltonian_NH}$$ where $H_{rot}$, $H_{ss}$ and $H_{sr}$ were defined in Eqs. \[Rotational\], \[SpinSpin\] and \[SpinRotation\]. Since $NH$ has non-zero nuclear spin ($N$ has nuclear spin $I=1$, $H$ has $I=1/2$), it has a hyperfine structure described by the Hamiltonian $H_{HFS}$. Further elaboration on the hyperfine structure of NH (including details on the Zeeman term, and the matrix elements related to Eq. \[Pre-Alignment Coefficients\]) is given in the Appendix in Sec. \[NH HFS\]. Laser control of the Stern-Gerlach scattering {#Applications to Molecules} ============================================= $ClO$ {#ClO} ----- In this part of the work, we apply the theoretical tools that were presented in the previous sections to the SG scattering of the $ClO$ molecule. This molecule exhibits a good Hund’s coupling case (a), and details about it were already given in Sec. \[Hund case A\]. We will consider here its ground state ($T=0K$), for which $\Lambda=1,\Sigma=1/2,\Omega=3/2,J=3/2,F=0,M_F=0$. In Fig. \[ClO Distribution no kick\] we present the force distribution (Eq. \[distribution of forces\]) for a $ClO$ molecule in the ground state that is deflected by a SG magnetic field. As only a single molecular state is occupied, the force has a well-defined single value. 
![The force distribution for a beam of $ClO$ molecules that are deflected by a magnetic field of $1T$. The temperature is $0K$, therefore only the ground state is considered, and the distribution reduces to a single-value peak.[]{data-label="ClO Distribution no kick"}](ClO_No_Align1.eps){width="70mm"} As the next step, we assume that the molecules are subject to a short laser pulse with a kick strength of $P=30$ (Eq. \[kick strength\]) before they enter the SG magnetic field. The new force distribution is given in Fig. \[ClO Distribution P30\]. ![The force distribution for a beam of prealigned $ClO$ molecules. The temperature is $0K$, and the kick strength of the laser is $P=30$. The prealigned molecule is deflected by a magnetic field of $1T$. This distribution should be compared to the one from Fig. \[ClO Distribution no kick\].[]{data-label="ClO Distribution P30"}](ClO_Align1.eps){width="70mm"} By comparing Fig. \[ClO Distribution no kick\] to Fig. \[ClO Distribution P30\], it can be observed that the effect of the laser-induced field-free alignment is to effectively turn off the interaction between the molecule and the magnetic field. This effect is similar to the one discussed by us recently in connection with the scattering of polar molecules by inhomogeneous static electric fields [@gershnabel5]. Moreover, rotation-induced dispersion in molecular scattering by static electric fields was used as a selection tool in recent experiments on laser molecular alignment [@reduction]. A related phenomenon of the reduction of the electric dipole interaction in highly excited stationary molecular rotational states was observed there. Further details and discussion about the $ClO$ magnetic deflection are provided in Sec. \[Discussions\]. $O_2$ {#$O_2$} ----- In this sub-section we consider the $O_2$ molecule. The $O_2$ molecule is described well by the Hund’s coupling case (b) scheme, and the details about it were given in Sec. \[Hund case B\]. 
First, we consider a beam of $O_2$ molecules at $0K$, i.e., in the ground state ($N=1$, $J=0$, and $M_J=0$). These molecules enter a magnetic field of $1T$, and are deflected by this field. The force distribution for these molecules is given in Fig. \[Results1\]. ![The force distribution for a beam of $O_2$ molecules that are deflected by a $1T$ magnetic field. The temperature is $0K$, i.e., only the ground state is populated and therefore the distribution reduces to a single-value peak.[]{data-label="Results1"}](Oxygen0T_No_Alignment1.eps){width="70mm"} Second, we consider the action of the prealignment pulses of different kick strengths ($P=10, 30, 70$) before the molecules enter the deflecting field. The distribution of forces at $0K$ is given in Fig. \[FinalPlot1\], where two major peaks are observed. As the strength of the pulses is increased, higher rotational states are excited, and the peaks become closer to each other. ![The force distribution for a beam of prealigned $O_2$ molecules. Different kick strengths ($P=10,30,70$) are considered and the magnetic field is $1T$ (temperature is $0K$). As the excitation is increased, the two major peaks become closer to each other. This distribution should be compared to the one from Fig. \[Results1\].[]{data-label="FinalPlot1"}](Oxygen0K1T_Alignment1.eps){width="80mm"} Third, we consider deflection of thermal molecules without and with prealignment, in Figs \[Results3\] and \[Results4\], respectively. By comparing Fig. \[FinalPlot1\] to Fig. \[Results4\], we observe an additional peak in Fig. \[Results4\]. As the strength of the prealignment pulses is increased, the peaks in Fig. \[Results4\] are changed: they become narrower and the two left peaks become closer to each other. Further discussion on $O_2$ will be provided in Sec. \[Discussions\]. ![The force distribution for $O_2$ molecules. 
The temperature is $5K$ and the magnetic field is $1T$.[]{data-label="Results3"}](Oxygen5K_1T_No_Align1.eps){width="80mm"} ![The force distribution for a beam of $O_2$ molecules, prealigned by a laser field ($P=10,30,70$). The temperature is $5K$, and the magnetic field is $1T$. Here we observe three major peaks. As the laser excitation strength is increased, the peaks become narrower, and the two left peaks become closer to each other.[]{data-label="Results4"}](Oxygen5K1T_Align1.eps){width="60mm"} $NH$ {#NH} ---- Finally, we consider the $NH$ molecule. This molecule is described well by a Hund’s coupling case (b) scheme, similar to the $O_2$ molecule; however, it has a reduced spin-spin to spin-rotation interaction ratio. This makes the $NH$ molecule a simpler candidate for the theoretical analysis. In Fig. \[NH\_distribution\] we plot the force distribution for the ground state $N=0,J=1,F_1=3/2,F=1/2$ molecules that were prealigned by laser pulses of different intensity. $M_F$ was taken to be $1/2$ for definiteness, and the case of $M_F=-1/2$ may be considered similarly (with similar consequences, as will be described in Sec. \[Discussions\]). ![The distribution of forces for a beam of $NH$ molecules, that were prealigned (starting from the lowest state $N=0,J=1,F_1=3/2,F=1/2$) by means of laser pulses of different strengths: $P=10$ (green), $P=30$ (blue) and $P=40$ (red). Only $M_F=1/2$ is considered, at a $2T$ magnetic field.[]{data-label="NH_distribution"}](NH_Distribution1.eps){width="80mm"} One may observe the presence of three major peaks now (for the $O_2$ molecules in the ground state there were only two peaks). As the strength of the prealignment pulse is increased, the major peaks are shifted in position. An additional difference between the $NH$ and the $O_2$ molecules is that now the peak to the right is also shifted with increasing strength of the prealignment laser pulse. 
Further elaboration about this molecule, and the difference between it and $O_2$, is given in Sec. \[Discussions\]. Discussion {#Discussions} ========== $ClO$ {#Discussion, ClO} ----- First we will discuss the $ClO$ molecule, which exhibits a good Hund’s coupling case (a). Having both electronic angular momentum and spin coupled to the internuclear axis, rotation of the molecule by means of short laser pulses leads to the rotation of the molecular magnetic moment as well. The interaction between the SG magnetic field and the rapidly rotating magnetic moment of the molecule will thus be averaged to zero, leading to negligible magnetic forces. $O_2$ {#Discussion, O2} ----- In Fig. \[RegularLamda1\] we plot the forces vs. magnetic field, for several values of $J$. First, we observe that for a high magnetic field all the curves separate to form a three-fold SG splitting pattern [@Kuebler]. In the limit of the low magnetic field (and slow rotations), the energy spectrum of the molecule is rather complex due to the spin-spin interaction [@Tinkham]. At around $1T$, though, we are in the regime where the spin-rotation ($H_{sr}$) interaction provides rather strong dynamic control: as $N$ is increased (by means of prealignment, for instance), a sizable shift of the force magnitude is observed. This is the origin of the behavior of the distributions of Fig. \[FinalPlot1\] and Fig. \[Results4\]. It can also be observed in Fig. \[RegularLamda1\] that the spin-spin term is larger than the spin-rotation one, and it shifts the curve for $J=N$ from the two other curves. We also find that in the case of $J=N+1,N-1$ the forces are more sensitive to the value of $N$, which is reflected in the distribution of forces in Fig. \[FinalPlot1\] and Fig. \[Results4\]. ![Forces vs. magnetic field for the $O_2$ molecule. The $y$ axis is given in arbitrary units, the $x$ axis is given in units of Tesla. 
Blue, green, and red (solid lines) correspond to $N=31$, $J=30,31,32$ ($M_J=0$), respectively. Blue, green, and red (dashed lines) correspond to $N=71$ and $J=70,71,72$ ($M_J=0$), respectively. The effects of the spin-spin interaction reveal themselves in the fact that the upper level $J=N$ is well separated from two almost degenerate levels with $J=N+1,N-1$. A magnetic field near $1T$ is optimal for observing the sensitivity of the deflecting force to the $N$ variation.[]{data-label="RegularLamda1"}](OxygenLevels1.eps){width="60mm"} Fig. \[RegularLamda1\] also allows us to understand the position of the peaks in Fig. \[FinalPlot1\] and Fig. \[Results4\]. The right peak that appears in Fig. \[Results4\] and does not appear in Fig. \[FinalPlot1\] corresponds to the $J=N$ states. Due to selection rules (Eq. \[COS2Derivation\]), the odd $J$s, i.e., the $J=N$ states, are never excited (if we start from $J=0$ and $M_J=0$ at $0K$). This is why we observe only two peaks, i.e., the $J=N\pm1$ peaks, in Fig. \[FinalPlot1\]. Considering a deflection of the molecules in the ground state alone is important experimentally. Even if one considers an experiment at $T=1K$ ($k_BT=20837MHz$), the difference between ($N=1, J=0$) and ($N=1, J=2$) (the next energy level) is $62486MHz$, which is large enough. Still, at $1K$ we should expect a small peak in the distribution of forces for the $J=N$ states. In the case of higher temperature (Fig. \[Results4\]), we start from different $M$s and the odd $J$s are also present; therefore, we observe the right peak in Fig. \[Results4\]. As the prealignment becomes stronger, the distribution transforms into three peaks, each corresponding to either $J=N$, $J=N-1$ or $J=N+1$ states. $NH$ (and the imaginary $\widetilde{O_2}$ molecule!) {#Discussion, NH} ---------------------------------------------------- Before we start with the $NH$ molecule, we consider an imaginary $\widetilde{O_2}$ molecule (!). 
This molecule is similar to the $O_2$ molecule, only with a spin-spin interaction that is reduced by a factor of $100$. The forces vs. magnetic field for the imaginary $\widetilde{O_2}$ molecule are plotted in Fig. \[LamdaSmall\], where we get a symmetric splitting of the $N$ level into $J=N,N\pm1$ ($J=N$ is in the middle, as is intuitively expected for SG splittings, unlike in the $O_2$ case). Such behavior corresponds to a spin-spin interaction that is negligible compared with the spin-rotation. ![Forces vs. magnetic field for the imaginary $\widetilde{O_2}$ molecule (details in the text). The $y$ axis is given in arbitrary units, the $x$ axis is given in the units of Tesla. Blue, green, and red (solid lines) correspond to $N=31$, $J=30,31,32$ ($M_J=0$), respectively. Blue, green, and red (dashed lines) correspond to $N=71$ and $J=70,71,72$ ($M_J=0$), respectively. At about $1T$ for this molecule, we observe approximately a symmetric splitting into three graphs for $J=N$ and $N\pm1$, where $J=N$ is in the middle (unlike in the $O_2$ case). []{data-label="LamdaSmall"}](ImaginaryMoleculeCurves1.eps){width="70mm"} As we intuitively suggested in Sec. \[Introduction\_magnetic\], when one applies prealignment to molecules belonging to the Hund’s coupling case (b), the electronic spin feels the SG field combined with the effective magnetic field due to nuclei rotations. The latter field is along the $N$-vector, i.e. perpendicular to the molecular axis. A strong enough vertically polarized laser pulse excites molecular rotations in the vertical planes containing the $z$-axis. As a result, the rotation-induced effective magnetic field is perpendicular to the vertical SG field. Therefore, the force felt by the molecules is given by $$Force=\frac{K_0B}{\sqrt{B^2+K_1^2}},\label{Fit to curve}$$ where $K_0$ and $K_1$ are constants (the latter is proportional to $N$ or $P$). Fig. 
\[LamdaSmall\] presents results of the exact quantum-mechanical calculation of the SG force for our imaginary $\widetilde{O_2}$ molecule. We considered the upper curves in this figure, and tried to fit them to the above analytical expression. We find an excellent agreement between the original data and the fitted curves, and the results of the fit are $K_1=0.61,0.25$ for $N=71,31$, respectively. We also find a good agreement between the ratio of $N's$ (i.e., $71/31=2.3$) and of $K_1's$ (i.e., $0.61/0.25=2.4$). As we have mentioned before, the $NH$ molecule is also characterized by a reduced value of the spin-spin interaction compared to the spin-rotation interaction. Therefore, its dynamics should be closer to the imaginary $\widetilde{O_2}$ molecule than to the real $O_2$ molecule considered above. In Fig. \[NH\_forces\] we plot some forces vs. the magnetic field for the $NH$ molecule, and indeed, observe a triplet-like structure similar to that one in Fig. \[LamdaSmall\] (with the curve for $J=N$ being in the middle for large enough $N$). ![Forces vs magnetic field for the $NH$ molecule (including the fine and hyperfine details), for $N=10$ (green), $N=30$ (blue) and $N=41$ (red). Only $M_F=1/2$ is considered here, but $M_F=-1/2$ gives the same results (only higher $M_F$’s will modify the spectrum). For $N=10$ the upper/middle/lower curves correspond to $J=N,J=N-1,J=N+1$, respectively, as in the Oxygen case. For $N=30,41$ the upper/middle/lower curves correspond to $J=N-1,J=N,J=N+1$, respectively, as in Fig. \[LamdaSmall\].[]{data-label="NH_forces"}](NH_Curves1.eps){width="70mm"} By analyzing the results shown in Fig. \[NH\_forces\] we conclude that the hyperfine structure details in $NH$ are not crucially important for our considerations, but the reduced spin-spin to spin-rotation interaction ratio for $NH$ defines the major difference of the deflection dynamics compared to the case of $O_2$. 
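The fitting procedure just described is easy to reproduce numerically. The sketch below (illustrative only: the field grid and $K_0$ are hypothetical, while the $K_1$ values are the ones quoted from the fit above) generates force curves from Eq. \[Fit to curve\] and recovers $K_1$ by a one-dimensional least-squares grid search:

```python
import numpy as np

def sg_force(B, K0, K1):
    # Eq. [Fit to curve]: saturating SG force for a case (b) molecule
    # whose rotation-induced internal field is perpendicular to B
    return K0 * B / np.sqrt(B**2 + K1**2)

def fit_K1(B, F, K0, grid=np.linspace(0.01, 2.0, 2000)):
    # one-dimensional least squares over a grid (K0 assumed known)
    residuals = [np.sum((sg_force(B, K0, k) - F)**2) for k in grid]
    return grid[int(np.argmin(residuals))]

B = np.linspace(0.05, 3.0, 200)              # field values in Tesla (hypothetical grid)
K0 = 1.0                                      # arbitrary force units (hypothetical)
for N, K1_true in [(31, 0.25), (71, 0.61)]:   # K1 values quoted in the text
    K1_fit = fit_K1(B, sg_force(B, K0, K1_true), K0)
    print(N, round(K1_fit, 2))
```

The recovered ratio $K_1(N{=}71)/K_1(N{=}31) = 0.61/0.25 \approx 2.4$ is then compared against $71/31 \approx 2.3$, as in the text.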
One also notices the scaling with the magnetic field: here, higher values of the magnetic field ($2T$) are required to observe the collapse of the broad distribution of forces into three narrow groups. This is due to the increased spin-rotation interaction for $NH$ (as compared to $O_2$). Fig. \[NH\_forces\] explains the behavior of the distribution in Fig. \[NH\_distribution\], where we have noticed that the three peaks are shifted as one increases the laser pulse strength. Conclusions {#Summary} =========== We considered scattering of paramagnetic molecules by an inhomogeneous magnetic field in a Stern-Gerlach-type experiment. We showed that by prealigning the molecules before they interact with the magnetic field, one obtains efficient control over the scattering process. Two qualitatively different effects were found, depending on the Hund’s coupling case of the molecule. For molecules that belong to the Hund’s coupling case (a), we showed that the deflection process may be strongly suppressed by laser pulses. This may be implemented as an optical switch in molecular magnetic deceleration techniques [@even]. Furthermore, for Hund’s coupling case (b) molecules, a sparse distribution of the scattering angles is transformed into a distribution with several compact deflection peaks having controllable positions. Each peak corresponds to a scattered molecular sub-beam with increased brightness. Molecular deflection is considered a promising route to the separation of molecular mixtures. Narrowing and displacing scattering peaks may substantially increase the efficiency of separating multi-component beams, especially when the prealignment is applied selectively to certain molecular species, such as specific isotopes [@isotopes] or nuclear spin isomers [@isomers1; @isomers2]. 
One may envision more sophisticated schemes for controlling molecular scattering, which involve multiple pulses with variable polarization for preshaping the molecular angular distribution. In particular, molecular rotation may be confined to a certain plane by using the “optical molecular centrifuge” approach [@centrifuge; @Mullin], the double-pulse ignited “molecular propeller” [@propeller], or permanent planar alignment induced by a pair of delayed, perpendicularly polarized short laser pulses [@France1; @France2]. If the molecules are prepared in this way, a narrow angular peak is expected in their scattering distribution from a magnetic field. The position of the peak is controllable by inclining the plane of rotation with respect to the deflecting field, similar to a related effect for molecular scattering in inhomogeneous electric fields (see [@gershnabel4]). Moreover, further manipulations of the deflection process may be considered, e.g., by using several SG fields with varying directions. Magnetic deflection of $O_2$ molecules subject to laser-induced field-free manipulations is currently the subject of an ongoing experimental effort. ACKNOWLEDGMENT {#acknowledgment .unnumbered} ============== We enjoyed many stimulating discussions with Valery Milner and Sergey Zhdanovich. One of us (IA) appreciates the kind hospitality and support at the University of British Columbia (Vancouver). This work is supported in part by grants from the Israel Science Foundation, and DFG (German Research Foundation). Our research is made possible in part by the historic generosity of the Harold Perlman Family. IA is an incumbent of the Patricia Elman Bildner Professorial Chair. 
Appendix: $NH$ (Hund’s coupling case (b)) Hyperfine structure {#NH HFS} ============================================================= The hyperfine structure for the $NH$ molecule is described by the following Hamiltonian [@Carington; @Klaus; @Jesus; @Lewen]: $$\begin{aligned} H_{HFS}&=&\sum_k {b_{F_k} T^1(\textbf{I}_k)\cdot T^1(\textbf{S})}\nonumber\\ &-&\sum_k{t_k\sqrt{10}T^1(\textbf{I}_k)\cdot T^1(\textbf{S},\textbf{C}^2(\omega))}\nonumber\\ &-& eT^2(\nabla\textbf{E}_2)\cdot T^2(\textbf{Q}_2)\nonumber\\ &+&\sum_k c_I(k) T^1(\textbf{N})\cdot T^1(\textbf{I}_k), \label{hfs hamiltonian}\end{aligned}$$ where the sum over $k=1,2$ represents the terms for both nuclei. The first term is the Fermi contact interaction, the second term is the dipolar interaction, the third one is the quadrupole term (this term exists only for the $^{14}N$), and the last term accounts for the nuclei spin-rotation interaction. In the calculation of matrix elements we first coupled $\textbf{J}=\textbf{S}+\textbf{N}$, $\textbf{F}_1=\textbf{I}_H+\textbf{J}$ and only then $\textbf{F}=\textbf{I}_N+\textbf{F}_1$. All the matrix elements are diagonal in $F$, and the first three terms are given in [@Carington]. 
The nuclear spin-rotation interactions are given by: $$\begin{aligned} &&\langle \eta,\Lambda,N,S,J,I_1,F_1,I_2,F,M_F|T^1(\textbf{N})\cdot T^1(\textbf{I}_1) \nonumber\\ &&|\eta,\Lambda,N',S,J',I_1,F_1',I_2,F,M_F\rangle\nonumber\\ &=& (-1)^{J'+F_1+I_1}\delta_{F_1,F_1'}\left\{ \begin{array}{ccc} I_1 & J' & F_1 \\ J & I_1 & 1 \end{array} \right\}\nonumber\\ &\times&\sqrt{I_1(I_1+1)(2I_1+1)}\delta_{N,N'}(-1)^{J'+N+1+S}\nonumber\\ &\times&\sqrt{(2J+1)(2J'+1)}\left\{ \begin{array}{ccc} N' & J' & S \\ J & N & 1 \end{array} \right\}\nonumber\\ &\times&\sqrt{N(N+1)(2N+1)},\label{spin-rotation1}\end{aligned}$$ where $I_1\equiv I_H$, and $$\begin{aligned} &&\langle \eta,\Lambda,N,S,J,I_1,F_1,I_2,F,M_F|T^1(\textbf{N})\cdot T^1(\textbf{I}_2) \nonumber\\ &&|\eta,\Lambda,N',S,J',I_1,F_1',I_2,F,M_F\rangle\nonumber\\ &=& (-1)^{F_1'+F+I_2}\left\{ \begin{array}{ccc} I_2 & F_1' & F \\ F_1 & I_2 & 1 \end{array} \right\}\nonumber\\ &\times&\sqrt{I_2(I_2+1)(2I_2+1)}(-1)^{F_1'+J+1+I_1}\nonumber\\ &\times&\sqrt{(2F_1+1)(2F_1'+1)}\left\{ \begin{array}{ccc} J' & F_1' & I_1 \\ F_1 & J & 1 \end{array} \right\}\nonumber\\ &\times&(-1)^{J'+N+1+S}\sqrt{(2J+1)(2J'+1)}\left\{ \begin{array}{ccc} N' & J' & S \\ J & N & 1 \end{array} \right\}\nonumber\\ &\times&\delta_{N,N'}\sqrt{N(N+1)(2N+1)},\label{spin-rotation2}\end{aligned}$$ where $I_2\equiv I_N$. The constants were taken from [@Jesus]. In the Zeeman Hamiltonian we consider only the contribution due to the electronic spin: $$H_{Z}=\mu_B g_s T^1(\textbf{B})\cdot T^1(\textbf{S}), \label{ZeemanHfs}$$ and we neglect other small contributions coming from the nuclear rotation and spins, and from the electronic anisotropy. 
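Decoupling chains of this kind can be evaluated directly with standard angular-momentum routines. As an illustrative sketch (not the production code behind the figures), the spin-only Zeeman matrix element given below can be assembled from Wigner $3j$ and $6j$ symbols; a convenient sanity check is that switching off both nuclear spins and setting $N=0$ must reduce it to $\langle S, M|S_z|S, M\rangle = M$:

```python
from sympy import sqrt, simplify
from sympy.physics.wigner import wigner_3j, wigner_6j

def zeeman_spin_element(N, S, J, Jp, I1, F1, F1p, I2, F, Fp, MF):
    # <N,S,J,I1,F1,I2,F,MF| T^1_0(S) |N,S,J',I1,F1',I2,F',MF>, assembled
    # from the successive decoupling chain F -> F1 -> J -> S (Wigner-Eckart
    # plus three 6j symbols); integer quantum numbers assumed in this sketch
    val = (-1)**(F - MF) * wigner_3j(F, 1, Fp, -MF, 0, MF)
    val *= ((-1)**(Fp + F1 + 1 + I2) * sqrt((2*F + 1)*(2*Fp + 1))
            * wigner_6j(F1p, Fp, I2, F, F1, 1))
    val *= ((-1)**(F1p + J + 1 + I1) * sqrt((2*F1 + 1)*(2*F1p + 1))
            * wigner_6j(Jp, F1p, I1, F1, J, 1))
    val *= ((-1)**(Jp + S + 1 + N) * sqrt((2*Jp + 1)*(2*J + 1))
            * wigner_6j(S, Jp, N, J, S, 1))
    val *= sqrt(S*(S + 1)*(2*S + 1))
    return simplify(val)

# limiting case I1 = I2 = 0, N = 0: the chain collapses and the element
# must equal <S=1, M=1|S_z|S=1, M=1> = 1
print(zeeman_spin_element(N=0, S=1, J=1, Jp=1, I1=0, F1=1, F1p=1,
                          I2=0, F=1, Fp=1, MF=1))
```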
The Zeeman matrix element is proportional to: $$\begin{aligned} &&\langle \eta,\Delta,N,S,J,I_1,F_1,I_2,F,M_F|T^1_0(\textbf{S})\nonumber\\ &&|\eta,\Delta,N,S,J',I_1,F_1',I_2,F',M_F\rangle\nonumber\\ &=& (-1)^{F-M_F} \left( \begin{array}{ccc} F & 1 & F' \\ -M_F & 0 & M_F \end{array} \right)\nonumber\\ &\times& (-1)^{F'+F_1+1+I_2}\sqrt{(2F+1)(2F'+1)}\left\{ \begin{array}{ccc} F_1' & F' & I_2 \\ F & F_1 & 1 \end{array} \right\}\nonumber\\ &\times& (-1)^{F_1'+J+1+I_1}\sqrt{(2F_1+1)(2F_1'+1)}\left\{ \begin{array}{ccc} J' & F_1' & I_1 \\ F_1 & J & 1 \end{array} \right\}\nonumber\\ &\times&(-1)^{J'+S+1+N}\sqrt{(2J'+1)(2J+1)}\left\{ \begin{array}{ccc} S & J' & N \\ J & S & 1 \end{array} \right\}\nonumber\\ &\times&\sqrt{S(S+1)(2S+1)},\label{hfs Zeeman}\end{aligned}$$ where it is no more diagonal in F. Finally, for the alignment calculations the following matrix element is useful: $$\begin{aligned} &&\langle \eta,\Delta,N,S,J,I_1,F_1,I_2,F,M_F|D^{2 *}_{00} \nonumber\\ &&|\eta,\Delta,N',S,J',I_1,F_1',I_2,F',M_F\rangle\nonumber\\ &=& (-1)^{F-M_F} \left( \begin{array}{ccc} F & 2 & F' \\ -M_F & 0 & M_F \end{array} \right)\nonumber\\ &\times& (-1)^{F'+F_1+2+I_2}\sqrt{(2F+1)(2F'+1)}\left\{ \begin{array}{ccc} F_1' & F' & I_2 \\ F & F_1 & 2 \end{array} \right\}\nonumber\\ &\times& (-1)^{F_1'+J+2+I_1}\sqrt{(2F_1+1)(2F_1'+1)}\left\{ \begin{array}{ccc} J' & F_1' & I_1 \\ F_1 & J & 2 \end{array} \right\}\nonumber\\ &\times&(-1)^{J'+N+2+S}\sqrt{(2J+1)(2J'+1)}\left\{ \begin{array}{ccc} N' & J' & S \\ J & N & 2 \end{array} \right\}\nonumber\\ &\times&(-1)^N \left( \begin{array}{ccc} N & 2 & N' \\ 0 & 0 & 0 \end{array} \right) \sqrt{(2N+1)(2N'+1)}.\label{Prealignment HFS}\end{aligned}$$ W. Gerlach and O. Stern, Z. Phys. **9**, 353 (1922); Ann. Phys. **74**, 673 (1924). C. H. Townes and A. L. Schawlow, Microwave Spectroscopy, 2nd ed. (Dover Publications, Inc., New York, 1975). T. J. McCarthy, M. T. Timko and D. R. Herschbach, J. Chem. Phys. **125**, 133501 (2006). E. Benichou, A. R. 
Allouche, R. Antoine, M. Aubert-Frecon, M. Bourgoin, M. Broyer, Ph. Dugourd, G. Hadinger and D. Rayane, Eur. Phys. J. D. **10**, 233 (2000). H. J. Loesch, Chem. Phys. **207**, 427 (1996). R. Antoine, D. Rayane, A. R. Allouche, M. Aubert-Frecon, E. Benichou, F. W. Dalby, Ph. Dugourd, M. Broyer and C. Guet, J. Chem. Phys. **110**, 5568 (1999). L. Holmegaard, J. H. Nielsen, I. Nevo, H. Stapelfeldt, F. Filsinger, J. Küpper, and G. Meijer, Phys. Rev. Lett. $\textbf{102}$, 023001 (2009); F. Filsinger, J. Küpper, G. Meijer, L. Holmegaard, J. H. Nielsen, I. Nevo, J. L. Hansen and H. Stapelfeldt, J. Chem. Phys. $\textbf{131}$, 064309 (2009). N. A. Kuebler, M. B. Robin, J. J. Yang, A. Gedanken and D. R. Herrick, Phys. Rev. A **38**, 737 (1988). E. Narevicius, C. G. Parthey, A. Libson, M. F. Riedel, U. Even and M. G. Raizen, New J. Phys., **9**, 96 (2007). H. Stapelfeldt, H. Sakai, E. Constant and P. B. Corkum, Phys. Rev. Lett. **79**, 2787 (1997); H. Sakai, A. Tarasevitch, J. Danilov, H. Stapelfeldt, R. W. Yip, C. Ellert, E. Constant and P. B. Corkum, Phys. Rev. A, **57**, 2794 (1998). B. S. Zhao, H. S. Chung, K. Cho, S. H. Lee, S. Hwang, J. Yu, Y. H. Ahn, J. Y. Sohn, D. S. Kim, W. K. Kang and D. S. Chung, Phys. Rev. Lett. **85**, 2705 (2000); H. S. Chung, B. S. Zhao, S. H. Lee, S. Hwang, K. Cho, S. H. Shim, S. M. Lim, W. K. Kang and D. S. Chung, J. Chem. Phys. **114**, 8293 (2001). B. S. Zhao, S. H. Lee, H. S. Chung, S. Hwang, W. K. Kang, B. Friedrich and D. S. Chung, J. Chem. Phys. **119**, 8905 (2003). S. M. Purcell and P.F. Barker, Phys. Rev. Lett. **103**, 153001 (2009); Phys. Rev. A **82**, 033433 (2010). B. A. Zon and B. G. Katsnelson, Zh. Eksp. Teor. Fiz. **69**, 1166 (1975) \[Sov. Phys. JETP **42**, 595 (1975)\]. B. Friedrich and D. Herschbach, Phys. Rev. Lett. **74**, 4623 (1995); J. Chem. Phys. **111**, 6157 (1999). E. Gershnabel and I. Sh. Averbukh, Phys. Rev. Lett. **104**, 153001 (2010); Phys. Rev. A **82**, 033401 (2010). E. Gershnabel and I. Sh. Averbukh, J. 
Chem. Phys. **134**, 054304 (2011). J. Floß, E. Gershnabel and I. Sh. Averbukh, Phys. Rev. A **83**, 025401 (2011). V. Aqullanti, D. Ascenzi, D. Cappelletti, and F. Pirani, Nature **371**, 399 (1994); V. Aqullanti, D. Ascenzi, D. Cappelletti, S. Franceschini and F. Pirani, Phys. Rev. Lett. **74**, 2929 (1995). H. Stapelfeldt and T. Seideman, Rev. Mod. Phys. **75**, 543 (2003). V. Kumarappan, S. S. Viftrup, L. Holmegaard, C. Z. Bisgaard and H. Stapelfeldt, Phys. Scr. **76**, C63 (2007). J. G. Underwood, B. J. Sussman, and A. Stolow, Phys. Rev. Lett. **94**, 143002 (2005); K. F. Lee, D. M. Villeneuve, P. B. Corkum, A. Stolow, and J. G. Underwood, Phys. Rev. Lett. **97**, 173001 (2006). D. Daems, S. Guérin, E. Hertz, H. R. Jauslin, B. Lavorel, and O. Faucher, Phys. Rev. Lett. **95**, 063005 (2005); E. Hertz, A. Rouzée, S. Guérin, B. Lavorel, and O. Faucher, Phys. Rev. A **75**, 031403(R) (2007); Guérin, A. Rouzée, and E. Hertz, Phys. Rev. A **77**, 041404(R) (2008). J. Brown and A. Carington, Rotational Spectroscopy of Diatomic Molecules, 1st ed. (Cambridge University Press, New York, 2003). E. Gershnabel, I. Sh. Averbukh and R. J. Gordon, Phys. Rev. A **74**, 053414 (2006). A. Carrington, P. N. Dyer and D. H. Levy, J. Chem. Phys. **47**, 1756 (1967). R. K. Kakar, E. A. Cohen and M. Geller, J. Mol. Spect. **70**, 243 (1978). B. J. Drouin, C. E. Miller, E. A. Cohen, G. Wagner and M. Birk, J. Mol. Spect. **207**, 4 (2001). M. Tinkham and M. W. P. Strandberg, Phys. Rev. **97**, 937 (1955); Phys. Rev. **97**, 951 (1955); M. Tinkham, Ph.D. Thesis, (Massachusetts Institute of Technology, 1954). F. D. Wayne and H. E. Radford, Mol. Phys. **32**, 1407 (1976). T. Klaus, S. Takano and G. Winnewisser, Astron. Astrophys. **322**, L1 (1997). J. Flores-Mijangos J. M. Brown, F. Matsushima, H. Odashima, K. Takagi, L. R. Zink and K. M. Evenson, J. Mol. Spect. **225**, 189 (2004). F. Lewen, S. Brünken, G. Winnewisser, M. Šimečková, and Š. Urban, J. Mol. Spect. **226**, 113 (2004). S. 
Fleischer, I. Sh. Averbukh and Y. Prior, Phys. Rev. A **74**, 041403(R) (2006). M. Renard, E. Hertz, B. Lavorel, and O. Faucher, Phys. Rev. A **69**, 043401 (2004). S. Fleischer, I. Sh. Averbukh, and Y. Prior, Phys. Rev. Lett. **99**, 093002 (2007); E. Gershnabel and I. Sh. Averbukh, Phys. Rev. A **78**, 063416 (2008). J. Karczmarek, J. Wright, P. Corkum and M. Ivanov, Phys. Rev. Lett. **82**, 3420 (1999); D. M. Villeneuve, S. A. Aseyev, P. Dietrich, M. Spanner, M. Yu. Ivanov, and P. B. Corkum, Phys. Rev. Lett. **85**, 542 (2000). Liwei Yuan, S. W. Teitelbaum, A. Robinson, and A. S. Mullin, PNAS **108**, 6872 (2011). S. Fleischer, Y.Khodorkovsky, Y. Prior, and I. Sh. Averbukh, New J. Phys. **11**, 105039 (2009). M. Lapert, E. Hertz, S. Guérin, and D. Sugny, Phys. Rev. A **80**, 051403(R) (2009). Md. Z. Hoque, M. Lapert, E. Hertz, F. Billard, D. Sugny, B. Lavorel, and O. Faucher, Phys. Rev. A **84**, 013409 (2011).
--- abstract: | In this paper we define a new type of continued fraction expansion for a real number $x \in I_m:=[0,m-1], m\in N_+, m\geq 2$: $$x = \frac{m^{-b_1(x)}}{\displaystyle 1+\frac{m^{-b_2(x)}}{1+\ddots}}:=[b_1(x), b_2(x), \ldots]_m.$$ Then, we derive the basic properties of this continued fraction expansion, following the same steps as in the case of the regular continued fraction expansion. The main purpose of the paper is to prove the convergence of this type of expansion, i.e. we must show that $$x= \lim_{n\rightarrow\infty}[b_1(x), b_2(x), \ldots, b_n(x)]_m.$$ [**Keywords:**]{} [*continued fractions, incomplete quotients*]{} author: - | Ion COLTESCU, Dan LASCU\ “Mircea cel Batran” Naval Academy, 1 Fulgerului,\ 900218 Constanta, Romania\ E-mail: icoltescu@yahoo.com, lascudan@gmail.com title: A NEW TYPE OF CONTINUED FRACTION EXPANSION --- INTRODUCTION ============ In this section we make a brief presentation of the theory of regular continued fraction expansions. It is well-known that the regular continued fraction expansion of a real number looks as follows: $$\frac{1}{\displaystyle a_1+\frac{1}{\displaystyle a_2+\ddots + \frac{1}{a_n + \ddots}}}$$ where $a_n \in N_+$, $\forall n \in N_+$. We can write this expression more compactly as $$[0;a_1, a_2, \ldots, a_n, \ldots]. \label{eq11}$$ The terms $a_1, a_2, \ldots$ are called the incomplete quotients of the continued fraction. Continued fraction theory starts with the procedure known as Euclid’s algorithm for finding the greatest common divisor. To generalize Euclid’s algorithm to irrational numbers from the unit interval $I$, consider the continued fraction transformation $\tau:I \rightarrow I$ defined by $$\tau(x):=\frac{1}{x} - \left[\frac{1}{x}\right], x \neq 0, \tau(0):=0, \label{eq12}$$ where $[\cdot]$ denotes the floor (entire) function. Thus, we define $a_1=a_1(x)=\left[\frac{1}{x}\right]$ and $a_n = a_1 (\tau ^{n-1}(x))$, $\forall n \in N_+$, where $\tau ^0(x)=x$, and $\tau^n(x)=\tau(\tau^{n-1}(x))$. 
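The digit-extraction procedure just described, iterating the Gauss map $\tau$, can be sketched directly; using exact rational arithmetic avoids floating-point drift in the iteration (a hedged illustration, not part of the paper's formal development):

```python
from fractions import Fraction

def cf_digits(x, n_max=30):
    # incomplete quotients a_1, a_2, ... of x in (0, 1) via
    # a_1 = [1/x] and tau(x) = 1/x - [1/x]  (eq. 12)
    digits = []
    while x != 0 and len(digits) < n_max:
        inv = 1 / x               # exact Fraction reciprocal
        a = int(inv)              # floor, since inv > 0
        digits.append(a)
        x = inv - a               # x <- tau(x)
    return digits

def cf_value(digits):
    # rebuild [0; a_1, ..., a_n] from the inside out
    v = Fraction(0)
    for a in reversed(digits):
        v = 1 / (a + v)
    return v

print(cf_digits(Fraction(113, 355)))   # 113/355 = [0; 3, 7, 16]
```

For a rational argument the loop terminates, mirroring the finiteness of Euclid's algorithm; `cf_value` inverts the process exactly.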
Then, from relation (\[eq12\]), we have: $$x= \frac{1}{\displaystyle a_1+\tau(x)} = \frac{1}{\displaystyle a_1+\frac{1}{\displaystyle a_2+\tau^2(x)}} = \ldots = [0;a_1,a_2,...,a_n+\tau^n(x)].$$ The metrical theory of continued fraction expansions is about the sequence $(a_n)_{n\in N}$ of its incomplete quotients, and related sequences. This theory started with Gauss’ problem. In modern notation, Gauss wrote that $$\lim_{n\rightarrow\infty} \lambda\left(\left\{x\in[0,1); \tau^n(x)\leq z \right\}\right) = \frac{\log(z+1)}{\log 2}, \ 0\leq z\leq 1, \label{eq13}$$ where $\lambda$ is the Lebesgue measure. Gauss asked Laplace to prove (\[eq13\]) and to estimate the error-term $r_n(z)$, defined by $r_n(z) := \lambda(\tau^{-n}([0,z])) - \frac{\log(z+1)}{\log 2}$, $n\geq 1$. (Note that, when we omit the logarithm base, we will consider the natural logarithm.) The first one who proved (\[eq13\]) and at the same time answered Gauss’ question was Kuzmin (1928), followed by Lévy. From that time on, a great number of such Gauss-Kuzmin theorems followed. To mention a few: F. Schweiger (1968), E. Wirsing (1974), K.I. Babenko (1978), and more recently M. Iosifescu (1992). Apart from regular continued fractions, there are many other continued fraction expansions: Engel continued fractions, Rosen expansions, the nearest integer continued fraction, the grotesque continued fractions, etc. ANOTHER CONTINUED FRACTION EXPANSION ==================================== We start this section by showing that any $x \in I_m :=[0, m-1]$, $m\in N_+$, $m\geq 2$, can be written in the form $$\frac{m^{-b_1(x)}}{\displaystyle 1+\frac{m^{-b_2(x)}}{1+\ddots}}:=[b_1(x), b_2(x), \ldots]_m \label{eq21}$$ where $b_n = b_n(x)$ are integer values, belonging to the set $Z_{\geq-1}:=\{-1,0,1,2,\ldots\}$, for any $n\in N_+$. 
[**Proposition 2.1**]{} For any $x \in I_m:=[0,m-1]$, there exist integers $b_n(x) \in \{-1,0,1,2,\ldots\}$ such that $$x = \frac{m^{-b_1(x)}}{\displaystyle 1+\frac{m^{-b_2(x)}}{1+\ddots}} \label{eq22}$$ [**Proof**]{}. If $x\in [0, m-1]$, then we can find an integer $b_1(x)\in Z_{\geq-1}$ such that $$\frac{1}{m^{b_1(x)+1}}\leq x\leq\frac{1}{m^{b_1(x)}}. \label{eq23}$$ Thus, there is a $p\in[0,1]$ such that $$x = (1-p)\frac{1}{m^{b_1(x)}} + p\frac{1}{m^{b_1(x)+1}} = \frac{m-mp+p}{m} m^{-b_1(x)}.$$ If we set $x_1 = \frac{mp-p}{m-mp+p}$, then we can write $x$ as $$x = \frac{m^{-b_1(x)}}{1+x_1}.$$ Since $x_1\in[0,m-1]$, we can repeat the same iteration and obtain $$x = \frac{m^{-b_1(x)}}{\displaystyle 1+\frac{m^{-b_2(x)}}{1+\ddots}}$$ which completes the proof. Next, we define on $I_m:=[0,m-1]$, $m\in N_+$, $m\geq 2$, the transformation $\tau_m$ by $$\tau_m: I_m \rightarrow I_m,$$ $$\tau_m(x):=m^{\frac{\log x^{-1}}{\log m}-\left[\frac{\log x^{-1}}{\log m}\right]}-1, x\neq 0, \tau_m(0):=0, \label{eq24}$$ where $[\cdot]$ denotes the floor (entire) function. For any $x \in I_m$, put $$b_n(x)=b_1\left(\tau^{n-1}_m(x)\right), n \in N_+,$$ $$b_1(x) = \left[\frac{\log x^{-1}}{\log m}\right], x \neq 0, b_1(0)=\infty.$$ Let $\Omega_m$ be the set of all irrational numbers from $I_m$. In the case when $x\in I_m\backslash \Omega_m$, we have $$b_n(x) = \infty, \forall n\geq k(x), \mbox{ for some } k(x)\in N_+, \mbox{ and } b_n(x) \in Z_{\geq -1}, \forall n < k(x).$$ Therefore, in the rational case, the continued fraction expansion (\[eq21\]) is finite, unlike the irrational case, in which we have an infinite number of incomplete quotients from the set $\{-1,0,1,2,\ldots\}$. Let $\omega \in \Omega_m$. 
We have $$\omega = m^{\log_m \omega} = m ^{-\frac{\log \omega^{-1}}{\log m}} = \frac{m^{-\left[\frac{\log \omega^{-1}}{\log m}\right]}}{m^{\frac{\log \omega^{-1}}{\log m}-\left[\frac{\log \omega^{-1}}{\log m}\right]}} = \frac {m^{-b_1(\omega)}}{1+\tau_m(\omega)}.$$ Since, $$\begin{aligned} \tau_m(\omega) &=& m^{\log_m\tau_m(\omega)} = m^{-\frac{\log \tau_m^{-1}(\omega)}{\log m}} = \frac{m^{-\left[\frac{\log \tau_m^{-1}(\omega)}{\log m}\right]}}{m^{\frac{\log \tau_m^{-1}(\omega)}{\log m} - \left[\frac{\log \tau_m^{-1}(\omega)}{\log m}\right]}} \nonumber \\ &=&\frac{m^{-b_1(\tau_m(\omega))}}{1+\tau_m(\tau_m(\omega))} = \frac{m^{-b_2(\omega)}}{1+\tau^2_m(\omega)} \nonumber\end{aligned}$$ then, we have $$\omega = \frac{m^{-b_1(\omega)}}{\displaystyle 1+ \frac{m^{-b_2(\omega)}}{1+\tau^2_m(\omega)}} = \ldots = \frac{m^{-b_1(\omega)}}{\displaystyle 1+ \frac{m^{-b_2(\omega)}}{\displaystyle 1+ \ddots + \frac{m^{-b_n(\omega)}}{1+ \tau^n_m(\omega)}}} \label{eq25}$$ If $[b_1(\omega)] = m^{-b_1(\omega)}$, and $[b_1(\omega), b_2(\omega), \ldots, b_n(\omega)] = \frac{m^{-b_1(\omega)}}{1+[b_2(\omega),b_3(\omega),\ldots,b_n(\omega)]}$, $\forall n\geq 2$, then (\[eq25\]) can be written as $$\begin{aligned} \omega &=& \left[b_1(\omega) + \frac{\log(1+\tau_m(\omega))}{\log m}\right] = \left[b_1(\omega), b_2(\omega) + \frac{\log(1+\tau^2_m(\omega))}{\log m}\right] = \ldots = \nonumber\\ &=& \left[b_1(\omega), b_2(\omega), \ldots, b_{n-1}(\omega), b_n(\omega) + \frac{\log(1+\tau^n_m(\omega))}{\log m}\right].\nonumber\end{aligned}$$ It is obvious that we have the relations $$\tau_m(\omega) = \frac{m^{-b_2(\omega)}}{1+\tau^2_m(\omega)}, \ldots, \tau^{n-1}_m(\omega) = \frac{m^{-b_n(\omega)}}{1+\tau^n_m(\omega)}, \forall n\in N_+, \forall \omega \in \Omega_m, \label{eq26}$$ CONVERGENTS. BASIC PROPERTIES ============================= In this section we define and give the basic properties of the convergents of this continued fraction expansion. 
[**Definition 3.1**]{} A finite truncation in (\[eq21\]), i.e. $$\omega_n(\omega):=\frac{p_n(\omega)}{q_n(\omega)} = [b_1(\omega), b_2(\omega), \ldots, b_n(\omega)]_m, n\in N_+ \label{eq31}$$ is called the $n$-th convergent of $\omega$. The integer-valued sequences $(p_n)_{n\in N}$ and $(q_n)_{n\in N}$ can be defined recursively by the formulae: $$\begin{aligned} p_n(\omega) &=& m^{b_n(\omega)}p_{n-1}(\omega) + m^{b_{n-1}(\omega)}p_{n-2}(\omega), \forall n\geq 2, \nonumber \\ q_n(\omega) &=& m^{b_n(\omega)}q_{n-1}(\omega) + m^{b_{n-1}(\omega)}q_{n-2}(\omega), \forall n\geq 2, \label{eq32}\end{aligned}$$ with $p_0(\omega)=0$, $q_0(\omega) = 1$, $p_1(\omega)=1$ and $q_1(\omega)=m^{b_1(\omega)}$. By induction, it is easy to prove that $$p_n(\omega)q_{n+1}(\omega) - p_{n+1}(\omega)q_n(\omega) = (-1)^{n+1}m^{b_1(\omega) + \ldots + b_n(\omega)}, \forall n \in N_+, \label{eq33}$$ and that $$\frac{m^{-b_1(\omega)}}{\displaystyle 1+ \frac{m^{-b_2(\omega)}}{\displaystyle 1+ \ddots + \frac{m^{-b_n(\omega)}}{1+ t}}} = \frac{p_n(\omega) + tm^{b_n(\omega)}p_{n-1}(\omega)}{q_n(\omega) + tm^{b_n(\omega)}q_{n-1}(\omega)}, \forall n \in N_+, t\geq 0. \label{eq34}$$ Now, combining the relations (\[eq25\]) and (\[eq34\]), it can be shown that, for any $\omega \in \Omega_m$, we have $$\omega = \frac{p_n(\omega) + \tau_m^n(\omega)m^{b_n(\omega)}p_{n-1}(\omega)}{q_n(\omega) + \tau_m^n(\omega)m^{b_n(\omega)}q_{n-1}(\omega)}, \forall n \in N_+. \label{eq35}$$ MAIN RESULT =========== We are now able to present the main result of the paper, which is the convergence of this new continued fraction expansion, i.e. we must show that $$\omega = \lim_{n\rightarrow\infty}[b_1(\omega), b_2(\omega), \ldots, b_n(\omega)]_m,$$ for any $\omega \in \Omega_m$. 
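Before turning to the proof, the digit map (\[eq24\]) and the convergent recursion (\[eq32\]) are easy to check numerically. The following sketch (floating-point arithmetic standing in for exact reals; the choices $m=2$, $\omega=1/\sqrt{2}$ and the truncation depth below are ours, and for this example all digits happen to be non-negative, so the $p_n$, $q_n$ stay integers):

```python
import math

def expansion_digits(x, m, n_terms):
    """Digits b_1,...,b_n of the base-m continued fraction (eq. 24):
    b_1(x) = [log(1/x)/log m], and tau_m(x) = m^{frac part} - 1 shifts them off."""
    digits = []
    for _ in range(n_terms):
        t = math.log(1.0 / x) / math.log(m)
        b = math.floor(t)
        digits.append(b)
        x = m ** (t - b) - 1.0          # tau_m(x)
    return digits

def convergents(digits, m):
    """Convergents p_n/q_n from the recursion (eq. 32),
    with p_0 = 0, q_0 = 1, p_1 = 1, q_1 = m^{b_1}."""
    p = [0, 1]
    q = [1, m ** digits[0]]
    for n in range(2, len(digits) + 1):
        p.append(m ** digits[n - 1] * p[n - 1] + m ** digits[n - 2] * p[n - 2])
        q.append(m ** digits[n - 1] * q[n - 1] + m ** digits[n - 2] * q[n - 2])
    return p, q
```

For $m=2$ and $\omega=1/\sqrt{2}$ the digits come out as $0,1,2,2,2,\ldots$ (the shift $\tau_2$ reaches a fixed point), the convergents $1,\ 2/3,\ 10/14,\ 48/68,\ldots$ approach $\omega$, and the determinant identity (\[eq33\]) can be checked term by term.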
[**Theorem**]{} For any $\omega \in \Omega_m:=I_m \backslash Q$, we have $$\omega - \omega_n(\omega) = \frac{(-1)^n\tau^n_m(\omega)m^{b_1(\omega)+\ldots+ b_n(\omega)}}{q_n(\omega)\left(q_n(\omega)+\tau^n_m(\omega)m^{b_n(\omega)}q_{n-1}(\omega)\right)}, \forall n \in N_+. \label{eq41}$$ For any $\omega \in \Omega_m$, we have $$\frac{m^{b_1(\omega)+\ldots+b_n(\omega)}}{q_n(\omega)\left(q_{n+1}(\omega)+(m-1)^{n+1}m^{b_{n+1}(\omega)}q_{n}(\omega)\right)} < |\omega - \omega_n(\omega)|<$$ $$< \frac{1}{\max(F_n, m^{b_1(\omega)+\ldots+b_n(\omega)})}, \forall n\in N_+, \label{eq42}$$ and $$\lim_{n\rightarrow\infty}\omega_n(\omega) = \omega. \label{eq43}$$ Here $F_n$ denotes the $n$-th Fibonacci number.\ [**Proof.**]{} Using relations (\[eq33\]) and (\[eq35\]), we obtain: $$\begin{aligned} \omega - \omega_n(\omega) &=& \frac{p_n(\omega) + \tau^n_m(\omega) m^{b_n(\omega)}p_{n-1}(\omega)}{q_n(\omega) + \tau^n_m(\omega)m^{b_n(\omega)}q_{n-1}(\omega)} - \frac{p_n(\omega)}{q_n(\omega)} \nonumber \\ &=& \frac{(-1)^n\tau^n_m(\omega)m^{b_1(\omega)+\ldots+ b_n(\omega)}}{q_n(\omega)\left(q_n(\omega)+\tau^n_m(\omega)m^{b_n(\omega)}q_{n-1}(\omega)\right)}. 
\nonumber\end{aligned}$$ Next, by (\[eq26\]) and (\[eq41\]), it follows: $$\begin{aligned} |\omega - \omega_n(\omega)| & =& \frac{\tau^n_m(\omega)m^{b_1(\omega)+\ldots+ b_n(\omega)}}{q_n(\omega)\left(q_n(\omega)+ \tau^n_m(\omega)m^{b_n(\omega)}q_{n-1}(\omega)\right)} \nonumber \\ &=& \frac{m^{-b_{n+1}(\omega)}}{1+\tau^{n+1}_m(\omega)} \cdot \frac{m^{b_1(\omega)+\ldots+ b_n(\omega)}} {q_n(\omega)\left(q_n(\omega)+ \frac{m^{-b_{n+1}(\omega)}}{1+\tau^{n+1}_m(\omega)} m^{b_n(\omega)}q_{n-1}(\omega)\right)} \nonumber \\ &=& \frac{m^{b_1(\omega)+\ldots+ b_n(\omega)}} {m^{b_{n+1}(\omega)}q_n(\omega)\left(q_n(\omega) + \tau^{n+1}_m(\omega)q_n(\omega) + m^{-b_{n+1}(\omega)}m^{b_n(\omega)}q_{n-1}(\omega)\right)} \nonumber \\ &=& \frac{m^{b_1(\omega)+\ldots+ b_n(\omega)}} {q_n(\omega)\left(q_{n+1}(\omega) + \tau^{n+1}_m(\omega)m^{b_{n+1}(\omega)}q_n(\omega)\right)} \label{eq44}\end{aligned}$$ Now, we know that the Fibonacci numbers are defined by the recurrence $$F_{n+1} = F_{n} + F_{n-1}, \forall n \in N_+, \mbox{ and } F_0 = F_1=1.$$ Also, from the recurrence relation (\[eq32\]), we infer that $$p_{n+1} \geq F_{n+1} \mbox{ and } q_n \geq F_n, \forall n \in N_+, n \geq 2. \label{eq45}$$ Moreover, we have that $$\begin{aligned} q_n(\omega) &=& m^{b_n(\omega)} q_{n-1}(\omega) + m^{b_{n-1}(\omega)}q_{n-2}(\omega) \geq m^{b_n(\omega)}q_{n-1}(\omega) \geq \nonumber\\ & \geq& m^{b_n(\omega)} m^{b_{n-1}(\omega)} q_{n-2}(\omega) \geq \ldots \geq m^{b_1(\omega)+\ldots+b_n(\omega)}q_0(\omega), \nonumber\end{aligned}$$ i.e. $$q_n(\omega) \geq m^{b_1(\omega)+\ldots+b_n(\omega)}, \forall n \in N_+. 
\label{eq46}$$ Thus, from relations (\[eq45\]) and (\[eq46\]), we have that $$q_n(\omega) \geq \max(F_n, m^{b_1(\omega)+\ldots+b_n(\omega)}), \forall n \in N_+.$$ Now, since $\tau_m(\omega)$ belongs to $(0, m-1)$, from the last two relations we can show that $$\frac{m^{b_1(\omega)+\ldots+ b_n(\omega)}}{q_n(\omega)\left(q_{n+1}(\omega) + \tau^{n+1}_m(\omega) m^{b_{n+1}(\omega)} q_n(\omega)\right)} \leq \frac{m^{b_1(\omega)+\ldots+ b_n(\omega)}}{q_n(\omega)q_{n+1}(\omega)}\leq$$ $$\leq \frac{1}{q_n(\omega)} \leq \frac{1}{\max(F_n, m^{b_1(\omega)+\ldots+b_n(\omega)})}.$$ It is obvious that the left inequality is true. Since $\max(F_n, m^{b_1(\omega)+\ldots+b_n(\omega)}) \geq F_n \rightarrow \infty$, we have $$\lim_{n \rightarrow \infty}\omega_n(\omega) = \omega.$$ The proof is complete. REMARK ====== This paper is the first to address this type of continued fraction expansion; it will be followed by other papers presenting the metrical theory of this expansion, the principal aim being the solution of Gauss’ problem. [\[01\]]{} K. Dajani, C. Kraaikamp, [*Ergodic theory of numbers*]{}, Cambridge University Press, 2002. A.I. Hincin, [*Fractii continue*]{}, Editura Tehnica, Bucuresti, 1960. M. Iosifescu, [*A very simple proof of a generalization of the Gauss-Kuzmin-Lévy theorem on continued fractions, and related questions*]{}, Rev. Roumaine Math. Pures Appl. 37 (1992), 901-914. M. Iosifescu, C. Kraaikamp, [*Metrical theory of continued fractions*]{}, Kluwer Academic, 2002. M. Iosifescu, G.I. Sebe, [*An exact convergence rate in a Gauss-Kuzmin-Lévy problem for some continued fraction expansion*]{}, in vol. Mathematical Analysis and Applications, 90-109. AIP Conf. Proc. 835 (2006), Amer. Inst. Physics, Melville, NY. A.M. Rockett, P. Szüsz, [*Continued fractions*]{}, World Scientific, Singapore, 1992. P. Szüsz, [*Über einen Kusminschen Satz*]{}, Acta Math. Acad. Sci. Hungar. 12 (1961), 447-453. G.I. 
Sebe, [*A Wirsing-type approach to some continued fraction expansion*]{}, Int. J. Math. Math. Sci., 12 (2005), 1943-1950. E. Wirsing, [*On the theorem of Gauss-Kuzmin-Lévy and Frobenius-type theorem for function space*]{}, Acta Arithmetica 24 (1974), 507-528.
--- address: | Institut d’Astrophysique de Paris, C.N.R.S./Paris VI, 98$^{bis}$ Boulevard Arago, F-75014, Paris, FRANCE\ E-mail: alfred@iap.fr author: - 'A. VIDAL-MADJAR' title: 'D/H MEASUREMENTS' --- Introduction ============ During primordial Big Bang nucleosynthesis deuterium is produced in significant amounts and then destroyed in stellar interiors. It is thus a key element in cosmology and in galactic chemical evolution (see [*e.g.*]{} Audouze & Tinsley [@at]; Boesgaard & Steigman [@bs]; Olive [*et al.*]{} [@oa]; Pagel [*et al.*]{} [@pa]; Vangioni-Flam & Cassé [@vc4]$^{,~}$[@vc5]; Prantzos [@p]; Scully [*et al.*]{} [@sc]; Cassé & Vangioni-Flam [@cv]). The [*Copernicus*]{} space observatory has provided the first direct measurement of the D/H ratio in the interstellar medium (ISM) representative of the present epoch (Rogerson & York [@ry]): (D/H)$^{Copernicus}_{\rm ISM}\simeq1.4\pm0.2\times10^{-5}$. More recently, D/H evaluations were made in the direction of quasars (QSOs) in low metallicity media. They were completed toward three different QSOs (Burles & Tytler [@bta]$^{,~}$[@btb]; O’Meara & Tytler [@ot]), leading to a possible range of $2.4-4.8\times10^{-5}$ for the primordial D/H. These values correspond to a new estimation of the baryon density of the Universe, $\Omega_{\rm b}{\rm h}^{2}=0.019\pm0.0009$, in the frame of the standard BBN model (Burles [*et al.*]{} [@bal]; Nollett & Burles [@nb]). When compared to the recent $\Omega_{\rm b}{\rm h}^{2}$ evaluation made from the Cosmic Microwave Background (CMB) observations (see [*e.g.*]{} Jaffe [*et al.*]{} [@ja]), $\Omega_{\rm b}{\rm h}^{2}=0.032\pm0.005$, this seems to lead to a possible conflict. Note that another D/H measurement made toward a low redshift QSO, leading to a D/H value possibly larger than $10^{-4}$ (Webb [*et al.*]{} [@we]; Tytler [*et al.*]{} [@ty]), corresponds to an even stronger disagreement since it translates into $\Omega_{\rm b}{\rm h}^{2}\le0.01$. 
It is thus important to investigate the possibility of varying D/H ratios in different astrophysical sites (see [*e.g.*]{} Lemoine [*et al.*]{} [@la9]). If variations are indeed found, their cause should be investigated before a reliable primordial D/H evaluation can be inferred from a small number of observations. Interstellar observations ========================= Several methods have been used to measure the interstellar D/H ratio. Not all of them will be discussed here; for more details see [*e.g.*]{} Ferlet [@f2]. The most reliable approach is to observe in absorption, against the background continuum of stars, the atomic Lyman series of D and H in the far-UV. Toward hot stars, many important evaluations of D/H were obtained with the [*Copernicus*]{} satellite (see e.g. Rogerson and York [@ry]; York and Rogerson [@yr]; Vidal–Madjar [*et al.*]{} [@va1977]; Laurent [*et al.*]{} [@lvy]; Ferlet [*et al.*]{} [@fa1980]; York [@y1983]; Allen [*et al.*]{} [@aa1992]), leading to the detection of variations. These were recently reinforced by HST–GHRS observations toward G191–B2B, which show a low value (Vidal–Madjar [*et al.*]{} [@va]), and by IMAPS observations: one, made toward $\delta$ Ori, again presents a low value (Jenkins [*et al.*]{} [@ja]), confirming the previous analysis made by Laurent [*et al.*]{} [@lvy] from [*Copernicus*]{} observations, while the other, made toward $\gamma^2$ Vel, presents a high value (Sonneborn [*et al.*]{} [@so]). These observations seem to indicate that in the ISM, within a few hundred parsecs, D/H may vary by more than a factor of $\simeq3$. From published values, (D/H)$_{ISM}$ ranges between $\sim5\times10^{-6}$ and $\sim4\times10^{-5}$. 
This method also provided a precise D/H evaluation in the local ISM (LISM) in the direction of the cool star Capella (Linsky [*et al.*]{} [@la]): (D/H)$^{\rm GHRS}_{\rm Capella}=1.60\pm0.09^{+0.05}_{-0.10}\times10^{-5}$. Additional observations made in the LISM led Linsky [@l98] (see references therein) to the conclusion that the D/H value within the Local Interstellar Cloud (LIC) is (compatible with 12 evaluations): (D/H)$^{\rm GHRS}_{\rm LIC}=1.50\pm0.10\times10^{-5}$. The nearby ISM ============== Observations of white dwarfs (WD) in the nearby ISM (NISM) for precise D/H evaluations were first proposed and achieved in the direction of G191–B2B by Lemoine [*et al.*]{} [@la6] using the HST–GHRS spectrograph at medium resolution. Follow-up observations on G191–B2B at higher resolution with the GHRS Echelle-A grating by Vidal–Madjar [*et al.*]{} [@va] (the same instrument configuration as in the Capella study) led to a precise D/H evaluation in the NISM along this line of sight within one H[i]{} region – the Local Interstellar Cloud (LIC) also observed toward Capella (these stars are separated by $\sim7^{o}$ on the sky) – and within a more complex and ionized H[ii]{} region presenting a double velocity structure. In these two main interstellar components the D/H ratio was found to be different if one assumes that the D/H value within the LIC is the same as the one found in the direction of Capella, in which case D/H has to be lower ($\sim0.9\times10^{-5}$) in the more ionized components. In any case a lower “average” D/H ratio is found (2$\sigma$ error): (D/H)$^{\rm GHRS}_{\rm G191-B2B}=1.12\pm0.16\times10^{-5}$. This result has been contested by Sahu [*et al.*]{} [@sa9], who used new HST–STIS high resolution Echelle observations. 
However, Vidal–Madjar [@v] has shown that all data sets (GHRS and STIS) in fact converge on the same value of the D/H ratio, which furthermore agrees with that derived by Vidal–Madjar [*et al.*]{} [@va] and disagrees with that of Sahu [*et al.*]{} [@sa9]. Since the disagreement between the two analyses was on the D[i]{} column density estimation, FUSE observations were expected to clarify the situation, because they give access to weaker deuterium Lyman lines that are less sensitive to saturation effects than Lyman $\alpha$. Three independent data sets were obtained, corresponding to the three different FUSE entrance apertures (Vidal–Madjar [*et al.*]{} [@va1]). The fits of the D Lyman $\beta$ line in the various FUSE channels are shown in Figure 1 and compared with the estimate of Sahu [*et al.*]{} [@sa9]. These new data confirm the measurement of N(D[i]{}) of Vidal–Madjar [*et al.*]{} [@va]; the value of N(D[i]{}) derived by Sahu [*et al.*]{} [@sa9] lies 6$\sigma$ away from the new result. These 6$\sigma$ are quantified in terms of $\Delta\chi^2$, including many possible systematics such as stellar continuum placement, zero level, spectral instrument shifts and line spread function profiles, all left free in the fitting process (see e.g. the different stellar continuum levels in Figure 1, from left to right panels). The H[i]{} column density toward G191–B2B is well determined. Independent measurements with EUVE (Dupuis [*et al.*]{} [@da5]), GHRS (medium[@la6] and high resolution[@v]) and STIS (high resolution[@sa9]), using several methods of evaluation (EUV, Lyman continuum opacity and Lyman $\alpha$ damping wing modelling), converge on a value of log N(H[i]{}) = 18.34 ($\pm0.03$). The error on this value includes systematic errors associated with the various measurement techniques. 
Using the D[i]{} column density as measured by FUSE and the H[i]{} column density compatible with all published values, one arrives at (2$\sigma$ error): (D/H)$^{\rm FUSE-HST-EUVE}_{\rm G191-B2B}=1.16\pm0.24\times10^{-5}$. This value is marginally compatible ($\ge2\sigma$) with the LIC one. The essential question remains: if D/H variations are confirmed in more sightlines, what could be their cause? The FUSE observatory ==================== FUSE is starting to produce orders of magnitude more data on the distribution of D/H in the ISM. From the planned D/H survey, we should be able to evaluate the deuterium abundance in a wide variety of locations, possibly linked to the past star formation rate as well as to the supposed infall of less processed gas in our Galaxy, and thus better understand Galactic chemical evolution. The FUSE sensitivity should allow evaluations of the deuterium abundance in tens of lines of sight: i) in the direction of white dwarfs and cool stars in the NISM; ii) toward hot sub-dwarfs in the more distant ISM and nearby Galactic halo; iii) within the Galactic disk over several kilo-parsecs in the direction of O and early B stars; iv) in the more distant Galactic halo, within high velocity cloud complexes as well as in intergalactic clouds in the direction of low redshift QSOs, AGNs and blue compact galaxies. The first precise D/H evaluations toward a few white dwarfs were presented in early 2001 at the AAS meeting (Moos [*et al.*]{} [@ma1]; Friedman [*et al.*]{} [@fa1]; Hébrard [*et al.*]{} [@ha1]; Kruk [*et al.*]{} [@ka1]; Linsky [*et al.*]{} [@la1]; Sonneborn [*et al.*]{} [@sa1]; Vidal–Madjar [*et al.*]{} [@va1]). The deuterium Lyman lines are clearly seen toward these few WDs and, as an example, the Lyman $\beta$ line is shown in the case of G191–B2B as previously discussed (see Figure 1). 
Several of these D/H evaluations made in the ISM with FUSE, HST and IMAPS are shown in Figure 2, along with one made recently in the direction of one QSO from ground-based observations [@ot], as a function of the line of sight average metallicity as traced by O/H when available. It seems that the D/H variation does not anti-correlate with O/H. Thus a simple mechanism such as astration, able to destroy D and produce O, does not seem compatible with the observations. Other mechanisms should be investigated, such as the ones listed by [*e.g.*]{} Lemoine [*et al.*]{} [@la9]. Conclusion ========== In summary, the status of the different – but discordant – D/H evaluations, taken with no a priori bias to select one over another, could be the following. If the variations of the D/H ratio in the NISM are illusory, one could quote an average value of (D/H)$_{\rm NISM} \simeq 1.3-1.4\times10^{-5}$ barely compatible with all observations. More in agreement with the present observations, D/H seems to vary in the ISM. One thus has to understand why. Until then, any single or small number of values should not be considered to represent the definitive D/H in a given region. This is particularly true for the “primordial” values found in the direction of QSOs, since the physical state of the probed environment is more poorly known than the Galactic one. Our hope is that the FUSE mission will solve these problems. Acknowledgments {#acknowledgments .unnumbered} =============== I am very grateful to the entire FUSE operation team for all the impressive work they are doing to make the FUSE observatory come true. I also thank the whole FUSE team for many positive interactions and comments. This work is based on data obtained by the NASA–CNES–CSA FUSE mission operated by the Johns Hopkins University under NASA contract NAS5–32985. [99]{} J. Audouze and B.M. Tinsley, . A.M. Boesgaard and G. Steigman, . K. Olive [*et al.*]{}, . B. Pagel [*et al.*]{}, . E. Vangioni-Flam and M. Cassé, . E. Vangioni-Flam and M. 
Cassé, . N. Prantzos, . S.T. Scully, [*et al.*]{}, . M. Cassé and E. Vangioni-Flam in [*Structure and Evolution of the Intergalactic Medium from QSO Absorption Line Systems*]{}, eds. P. Petitjean and S. Charlot (IAP Conference, 331, 1998). J. Rogerson and D. York, . S. Burles and D. Tytler, . S. Burles and D. Tytler, . J. O’Meara and D. Tytler, in these proceedings [*Cosmic Evolution*]{}, eds. M. Lemoine and R. Ferlet, 2001. S. Burles [*et al.*]{}, . K.M. Nollett and S. Burles, . A.H. Jaffe [*et al.*]{}, astro-ph/0007333, 2000. J.K. Webb [*et al.*]{}, . D. Tytler [*et al.*]{}, . M. Lemoine [*et al.*]{}, . R. Ferlet, in IAU$\#$150 [*Astrochemistry of Cosmic Phenomena*]{}, eds. P.D. Singh, (Kluwer, 85, 1992). D. York and J. Rogerson, . A. Vidal–Madjar [*et al.*]{}, . C. Laurent, A. Vidal–Madjar and D.G. York, . R. Ferlet [*et al.*]{}, . D.G. York, . M.M. Allen, E.B. Jenkins and T.P. Snow, . A. Vidal–Madjar [*et al.*]{}, . E.B. Jenkins [*et al.*]{}, . G. Sonneborn, [*et al.*]{}, . J. Linsky [*et al.*]{}, . J. Linsky, . M. Lemoine [*et al.*]{}, . M.S. Sahu [*et al.*]{}, . A. Vidal–Madjar, in [*The Light Elements and Their Evolution*]{}, eds. L. da Silva, M. Spite and J. R. de Medeiros (ASP Conference Series, 151, 2000). A. Vidal–Madjar [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). J. Dupuis [*et al.*]{}, . H.W. Moos [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). S.D. Friedman [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). G. Hébrard [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). J.W. Kruk [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). J.L. Linsky [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). G. Sonneborn [*et al.*]{}, [*Ap.J.*]{} in preparation, (2001). D.M. Meyer [*et al.*]{}, .
--- address: ' , , , ' author: - - - bibliography: - 'References.bib' title: 'A Line-of-Sight Channel Model for the 100–450 Gigahertz Frequency Band' --- Introduction {#section:1} ============ Simplified Molecular Absorption Loss Model {#section:3} ========================================== Numerical Results and Discussion {#section:4} ================================ Conclusion {#section:5} ========== Methods/Experimental ==================== This paper presents a purely theoretical model giving a simple way to estimate the absorption loss. Although the model is theoretical, the original data, obtained from the HITRAN database [@HITRAN12], is based on experiments. The goal in this article is to simplify the complex database approach into simple polynomial equations with only a few free parameters, such as humidity and frequency. As such, the model produced in this paper is suitable for LOS channel loss estimation for various wireless communications systems. Those include back- and fronthaul connectivity and general LOS link channel estimation. The work is heavily based on the HITRAN database and the theoretical models for absorption loss, as well as on simple LOS free space path loss models. Abbreviations {#abbreviations .unnumbered} ============= 5G: fifth generation; 6G: sixth generation; B5G: beyond fifth generation; FSPL: free space path loss; HITRAN: high-resolution transmission molecular absorption database; ITU-R: International Telecommunication Union Radio Communication Sector; LOS: line-of-sight; mmWave: millimeter wave; Rx: receiver; Tx: transmitter. Availability of data and materials {#availability-of-data-and-materials .unnumbered} ================================== Not applicable. Competing interests {#competing-interests .unnumbered} =================== The authors declare that they have no competing interests. 
Funding {#funding .unnumbered} ======= Authors’ contributions {#authors-contributions .unnumbered} ====================== JK derived the molecular absorption loss model. All the authors participated in writing the article and revising the manuscript. All the authors read and approved the final manuscript. Authors’ information {#authors-information .unnumbered} ==================== Address of all the authors: Centre for Wireless Communications (CWC), University of Oulu, P.O. Box 4500, 90014 Oulu, Finland. Emails: forename.surname(at)oulu.fi
--- abstract: 'We consider radiation transport theory applied to non-dispersive but refractive media. This setting is used to discuss Minkowski’s and Abraham’s electromagnetic momentum, and to derive conservation equations independent of the choice of momentum definition. Using general relativistic kinetic theory, we derive and discuss a radiation gas energy-momentum conservation equation valid in arbitrary curved spacetime with diffractive media.' address: 'Department of Physics, Ume[å]{} University, SE–901 87 Ume[å]{}, Sweden' author: - Mattias Marklund title: Radiation transport in diffractive media --- Introduction ============ Radiative transfer is a mature area of research, with important applications in both laboratory and astrophysical systems (see, e.g., [@Mihalas; @Milne] and references therein). Much is thus known about the properties of the relevant radiation transport equations in non-dispersive, non-diffractive media in flat spacetime. However, less is known in the case of dispersive and/or diffractive media, and the addition of spacetime curvature reduces the number of relevant publications further (for a representative but incomplete selection see [@Milne; @Harris; @Pomraning; @Enome; @Bicak-Hadrava; @Anderson-Spiegel; @Kichenassamy-Krikorian]). Different ways to tackle the problems involved in formulating a general theory of curved spacetime radiation transport in diffractive media exist in the literature. In many cases, the treatment of moments of the transfer equations, and of their respective conservation equations, is lacking, as is a discussion of the interpretation of the radiation fluid moments. The latter can be viewed as a nontrivial task, since a clear-cut definition of the radiation momentum density is still under discussion (although claimed otherwise by some authors, see, e.g., [@Jackson]). 
For example, Anderson & Spiegel introduce an optical geometry on top of the curved spacetime, by defining an effective metric which incorporates the refractive index of the medium (similar to Gordon [@Gordon]). In this effective geometry, the transfer equation takes the standard form, and a treatment of the fluid moments is thus straightforward, although their interpretation is somewhat obscured by the presence of the effective geometry. Moreover, the proper conservation equations for the fluid moments are not derived. As noted by many authors, related to the problem of radiative transfer in diffractive media is the concept of photon momentum and the Minkowski–Abraham debate. Although the definition of Abraham is preferred for a number of theoretical reasons (symmetries [@Jackson; @Landau-Lifshitz], derivations from microscopic theory [@deGroot], etc.; see [@Brevik] for an overview), measurements have not given a definite answer [@Jones-Richards; @Jones-Leslie] (for a review, see [@Loudon] and references therein). Due to the difficulty of separating the electromagnetic from the material degrees of freedom [@Nelson; @Loudon-Allen-Nelson], the problem of defining the electromagnetic momentum in a refractive medium has persisted. However, it seems reasonable that the Minkowski momentum should be treated as a pseudo-momentum, partly depending on material contributions, while the Abraham momentum is a proper electromagnetic momentum (for a discussion, see [@Feigel]). Here we will analyse the radiative transfer equations and derive macroscopic conservation equations in terms of both Abraham’s and Minkowski’s definitions. It will also be shown that variables can be chosen so as to remove the problem of momentum definition when deriving conservation equations. Furthermore, a completely general energy-momentum conservation equation for a radiation fluid in curved spacetime with a refractive medium is derived and discussed. 
It is shown that the combination of curvature and refraction gives important contributions to the conservation equation, even if the refractive index is spacetime homogeneous. The equation is discussed in the context of cosmological models. Ray dynamics and kinetic theory =============================== In mechanics, given the action $S$ of a system, we may define the momentum and Hamiltonian according to $$\label{eq:mechanics} p_{\alpha} = \frac{\partial S}{\partial x^{\alpha}} \, \qquad \, H = -\frac{\partial S}{\partial t} ,$$ respectively. From this, the equations of motion for a single particle can be written in terms of Hamilton’s equations, i.e.$$\begin{aligned} && \dot{x}^{\alpha} = \frac{\partial H}{\partial p_{\alpha}} , \\ && \dot{p}_{\alpha} = -\frac{\partial H}{\partial x^{\alpha}} .\end{aligned}$$ \[eq:hamilton\] In geometric optics [@Landau-Lifshitz], we introduce the eikonal $\phi = \phi(t, x^{\alpha})$ by writing the electromagnetic field $F$ in the form $$F = a\exp(i\phi) .$$ From the eikonal, we define the wave-vector and frequency of the field as $$k_{\alpha} = \frac{\partial\phi}{\partial x^{\alpha}} \, \qquad \, \omega = -\frac{\partial\phi}{\partial t} ,$$ in analogy with (\[eq:mechanics\]). The equations of ray optics become $$\begin{aligned} && \dot{x}^{\alpha} = \frac{\partial\omega}{\partial k_{\alpha}} , \label{eq:group1} \\ && \dot{k}_{\alpha} = -\frac{\partial\omega}{\partial x^{\alpha}} , \label{eq:force1}\end{aligned}$$ \[eq:ray\] and from this it also follows that $d\omega/dt = \partial\omega/\partial t$. These equations are the massless quasi-particle analogues of (\[eq:hamilton\]). It is also clear from the eikonal definitions that the natural variables describing the photons are the frequency and the wave-number rather than the energy and the momentum. 
Photons moving in an isotropic non-dispersive but refractive medium will satisfy a dispersion relation of the form $$\label{eq:dispersion} \omega = \frac{ck}{n}$$ where, in general, the refractive index $n$ is spacetime inhomogeneous. The equations of motion (\[eq:ray\]) for individual photons in such a medium then become \[eq:ray2\] $$\begin{aligned} && \dot{x}^{\alpha} = \frac{c}{n}\hat{k}^{\alpha}, \label{eq:group} \\ && \dot{k}_{\alpha} = \frac{\omega}{n}\frac{\partial n}{\partial x^{\alpha}} . \label{eq:force}\end{aligned}$$ Before proceeding further, a discussion of the definition of the photon momentum is in order. We note that we have chosen as our fundamental variables $t, x^{\alpha}$ and $\omega, k_{\alpha}$. The relation to Hamiltonian particle dynamics becomes obvious if we choose $H = \hbar\omega$ and $p_{\alpha} = \hbar k_{\alpha}$, and it is therefore tempting to assume this direct relationship. However, in electromagnetic theory, there are two distinct ways to define the momentum of the electromagnetic field. According to Minkowski, the momentum is proportional to $\mathbf{D}\times\mathbf{B}$, while according to Abraham it is proportional to $\mathbf{E}\times\mathbf{H}$ (see Ref. [@Jackson] for a discussion). Furthermore, in measurements of photon momenta in isotropic diffractive media, there are two distinct forms as well, namely $p = \hbar k = \hbar\omega n/c$ and $p \equiv \hbar\omega/nc = \hbar k/n^2$. We note that the former is in agreement with Minkowski’s momentum density and a number of experiments, while the latter definition is consistent with Abraham’s choice in terms of the Poynting flux (see [@Loudon] and references therein), which also follows from detailed microscopic considerations as well as symmetry arguments. On the other hand, measurements indicate that $p \propto n$, consistent with Minkowski’s momentum definition in a dielectric medium. 
The discrepancy between the two definitions can be attributed to the contributions of the medium in Minkowski’s definition, while Abraham’s definition (used here) is the ‘proper’ momentum of the photon [@Feigel]. However, we note that while the form (\[eq:group\]) of the group velocity is valid independently of the momentum definition (as expected), (\[eq:force\]) formulated in terms of the momentum takes different forms for the Minkowski and the Abraham definition. Given a spectral distribution $\mathscr{N}(t,x^{\alpha},k_{\alpha})$ of photons, the absence of collisions defines the on-shell Vlasov equation for $\mathscr{N}$ according to $$\dot{\mathscr{N}} = \frac{\partial\mathscr{N}}{\partial t} + \dot{x}^{\alpha}\frac{\partial\mathscr{N}}{\partial x^{\alpha}} + \dot{k}_{\alpha}\frac{\partial\mathscr{N}}{\partial k_{\alpha}} = 0 , \label{eq:vlasov}$$ expressing the phase space conservation of quasi-particles. Using the spectral distribution function defined on-shell, i.e. with (\[eq:dispersion\]) satisfied, we can now define the macroscopic observables, from which a fluid theory can be constructed. Macroscopic variables ===================== Next, we define macroscopic variables as moments of the distribution function $\mathscr{N}$. In general, we define $\langle\psi\rangle = (\int\mathscr{N}d^3k)^{-1} \int \psi\mathscr{N} d^3k$ to be the statistical average of the function $\psi$, which may be defined over the full phase space. Moreover, in these definitions we closely follow Ref. [@Mihalas]. — The *number density* $N$ is defined according to $$N = \int \mathscr{N}d^3k .$$ — The *energy density* $\mu$ is defined as $$\mu = \int \hbar\omega\mathscr{N} d^3k .$$ — The *average fluid velocity* $u^{\alpha}$ is given by $$u^{\alpha} = \langle \dot{x}^{\alpha}\rangle ,$$ and from this we may also define the *thermal (or random) fluid velocity* $w^{\alpha}$ using $$w^{\alpha} = \dot{x}^{\alpha} - u^{\alpha} ,$$ such that $\langle w^{\alpha}\rangle = 0$. 
— The *radiation energy flux* $q^{\alpha}$ is defined such that $q^{\alpha}\,dS_{\alpha}$ is the rate of energy flow across the surface $dS_{\alpha}$. Thus $$q^{\alpha} = \int\hbar\omega w^{\alpha}\mathscr{N} d^3k .$$ — The *momentum density* ${{\Pi}}^{\alpha}$ of the photon in a refractive medium can be formulated in two ways, in accordance with Minkowski or Abraham, and they read $$\label{eq:momentum-minkowski} {{\Pi}}^{\alpha}_M = \int \hbar k^{\alpha}\mathscr{N} d^3k = \int \frac{n^2}{c^2}\hbar\omega \dot{x}^{\alpha}\mathscr{N} d^3k,$$ and $$\label{eq:momentum-abraham} {{\Pi}}^{\alpha}_A = \int \frac{\hbar k^{\alpha}}{n^2}\mathscr{N} d^3k = \int \frac{1}{c^2}\hbar\omega \dot{x}^{\alpha}\mathscr{N} d^3k ,$$ respectively, where we have made use of (\[eq:dispersion\])–(\[eq:force\]). — The *pressure tensor* $\mathsf{P}^{\alpha\beta}$ can similarly be defined in two distinct ways. Since the pressure tensor $\mathsf{P}^{\alpha\beta}$ is defined to be, for an observer comoving with the average fluid flow, the rate of transport of the $\alpha$ component of momentum per unit area of a surface orthogonal to the $\beta$ coordinate (or tetrad) axis, we have $$\label{eq:pressure-minkowski} \mathsf{P}^{\alpha\beta}_M = \int \hbar k^{\alpha} w^{\beta} \mathscr{N} d^3k = \int \frac{n^2}{c^2} \hbar\omega w^{\alpha} w^{\beta} \mathscr{N} d^3k,$$ and $$\label{eq:pressure-abraham} \mathsf{P}^{\alpha\beta}_A = \int \frac{1}{n^2}\hbar k^{\alpha} w^{\beta} \mathscr{N} d^3k = \int \frac{1}{c^2}\hbar\omega w^{\alpha} w^{\beta} \mathscr{N} d^3k ,$$ for the Minkowski and the Abraham momentum, respectively. Here we have again used (\[eq:dispersion\])–(\[eq:force\]), and we note that the pressure tensor is symmetric. 
Furthermore, the energy flux and the momentum density are related to each other through the relations $${{\Pi}}^{\alpha}_M = \frac{n^2}{c^2}\left( \mu u^{\alpha} + q^{\alpha} \right) , \label{eq:momentum-energy1}$$ and $${{\Pi}}^{\alpha}_A = \frac{1}{c^2}\left( \mu u^{\alpha} + q^{\alpha} \right) , \label{eq:momentum-energy2}$$ \[eq:momentum-energy\] where we have used the general relation $k^{\alpha} = n^2\omega (u^{\alpha} + w^{\alpha})/c^2$. Higher order moments can be defined analogously, but the above will suffice for our purposes, and with these definitions we are ready to set up the fluid equations. We note that the macroscopic quantities defined above are valid also for dispersive media, i.e. when the refractive index depends on the frequency $\omega$ or, in the anisotropic case, the wave vector $k^{\alpha}$. Fluid equations =============== A hierarchy of fluid equations may be obtained from (\[eq:vlasov\]) by taking the moments with respect to suitable microscopic quantities. Given a microscopic variable ${{\psi}}$ (which may be a tensorial object), the general fluid conservation equation takes the form $$\begin{aligned} \frac{\partial}{\partial t}(N\langle {{\psi}}\rangle) + \frac{\partial}{\partial x^{\alpha}}\left( N\langle\dot{x}^{\alpha}{{\psi}}\rangle\right) = N\langle\dot{{{\psi}}}\rangle \label{eq:moment}\end{aligned}$$ where $\dot{{{\psi}}}$ is defined in accordance with (\[eq:vlasov\]). Thus, $N\langle{{\psi}}\rangle$ and $N\langle\dot{x}^a{{\psi}}\rangle$ represent the macroscopic density and macroscopic current, respectively, of the microscopic variable ${{\psi}}$, while the right hand side of (\[eq:moment\]) acts as a source/sink. Finally, the fluid hierarchy of equations can be closed by assuming a set of thermodynamic relationships between the macroscopic variables, such as an equation of state. 
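As an illustrative aside (not part of the paper), the relations (\[eq:momentum-energy1\])–(\[eq:momentum-energy2\]) can be checked numerically: for any ensemble obeying $\omega = kc/n$ one has, photon by photon, $\hbar\omega\dot{x}^{\alpha} = (c^2/n^2)\hbar k^{\alpha}$, so the identity is exact and not merely statistical. The sampling choices below (seed, anisotropic Gaussian $k$-distribution, $n = 1.5$) are arbitrary assumptions of this sketch.

```python
import numpy as np

# Sample a drifting, anisotropic photon ensemble obeying omega = k*c/n and
# verify Pi_M = (n^2/c^2)(mu*u + q) and Pi_A = Pi_M / n^2.
rng = np.random.default_rng(0)
c, n, hbar = 1.0, 1.5, 1.0

# Wave vectors from an arbitrary anisotropic distribution (per unit volume).
k = rng.normal(loc=[2.0, 0.0, 0.0], scale=1.0, size=(100_000, 3))
kmag = np.linalg.norm(k, axis=1)
omega = c * kmag / n                        # dispersion relation omega = k c / n
xdot = (c / n) * k / kmag[:, None]          # group velocity, |xdot| = c/n

mu = hbar * omega.sum()                     # energy density
u = xdot.mean(axis=0)                       # average fluid velocity
w = xdot - u                                # thermal velocity, <w> = 0 by construction
q = (hbar * omega[:, None] * w).sum(axis=0) # energy flux

Pi_M = hbar * k.sum(axis=0)                 # Minkowski momentum density
Pi_A = Pi_M / n**2                          # Abraham momentum density
```

The same ensemble also confirms $\langle w^{\alpha}\rangle = 0$ by construction.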
Putting $\psi = 1$, we obtain the conservation equation for the number of quasi-particles $$\frac{\partial N}{\partial t} + \frac{\partial}{\partial x^{\beta}}\left( Nu^{\beta} \right) = 0.$$ The energy conservation equation is obtained by taking ${{\psi}} = \hbar\omega$. Thus, for an observer comoving with the average radiation fluid flow we obtain $$\label{eq:energy} \frac{\partial{\mu}}{\partial t} + \frac{\partial}{\partial x^{\alpha}}\left( \mu u^{\alpha} + q^{\alpha} \right) = -\frac{{\mu}}{n}\frac{\partial n}{\partial t} .$$ Similarly, the momentum conservation equation is obtained by taking $\psi = p_{\alpha}$, and takes the form $$\frac{\partial{{\Pi}}_{M\alpha}}{\partial t} + \frac{\partial}{\partial x^{\beta}}\left( u^{\beta}{{\Pi}}_{M\alpha} + \frac{n^2}{c^2}u_{\alpha}q^{\beta} + \mathsf{P}^{\beta}_{M\alpha} \right) = \frac{\mu}{n}\frac{\partial n}{\partial x^{\alpha}} \label{eq:momentumconservation-M}$$ or $$\begin{aligned} \fl \frac{\partial{{\Pi}}_{A\alpha}}{\partial t} + \frac{\partial}{\partial x^{\beta}}\left( u^{\beta}{{\Pi}}_{A\alpha} + \frac{1}{c^2}u_{\alpha}q^{\beta} + \mathsf{P}^{\beta}_{A\alpha} \right) &=& \frac{\mu}{n^3}\frac{\partial n}{\partial x^{\alpha}} - 2\frac{{{\Pi}}_{A\alpha}}{n}\left( \frac{\partial}{\partial t} + u^{\beta}\frac{\partial}{\partial x^{\beta}} \right)n \nonumber \\ && - \frac{2}{n}\left( \frac{1}{c^2}u_{\alpha}q^{\beta} + \mathsf{P}^{\beta}_{A\alpha} \right)\frac{\partial n}{\partial x^{\beta}} , \label{eq:momentumconservation-A}\end{aligned}$$ \[eq:momentumconservation1\] depending on whether we use (\[eq:momentum-minkowski\]) and (\[eq:pressure-minkowski\]) or (\[eq:momentum-abraham\]) and (\[eq:pressure-abraham\]), respectively. 
We furthermore note that (\[eq:momentumconservation-M\]) and (\[eq:momentumconservation-A\]) can be written in a slightly more symmetric form, using the relations (\[eq:momentum-energy1\])–(\[eq:momentum-energy2\]), according to $$\frac{\partial{{\Pi}}^{\alpha}_M}{\partial t} + \frac{\partial}{\partial x^{\beta}}\!\!\left( \frac{n^2}{c^2}\mu u^{\alpha}u^{\beta} + \frac{2n^2}{c^2}u^{(\alpha}q^{\beta)} + \mathsf{P}^{\alpha\beta}_{M} \right) = \frac{\mu}{n}\delta^{\alpha\beta}\frac{\partial n}{\partial x^{\beta}}$$ and $$\begin{aligned} \fl \frac{\partial{{\Pi}}^{\alpha}_A}{\partial t} + \frac{\partial}{\partial x^{\beta}}\!\left( \frac{1}{c^2}\mu u^{\alpha}u^{\beta} + \frac{2}{c^2}u^{(\alpha}q^{\beta)} + \mathsf{P}^{\alpha\beta}_A \right) &=& \frac{\mu}{n^3}\delta^{\alpha\beta}\frac{\partial n}{\partial x^{\beta}} - \frac{2}{n}{{\Pi}}^{\alpha}_A \frac{\partial n}{\partial t} \nonumber \\ && \fl - \frac{2}{n}\left( \frac{1}{c^2}\mu u^{\alpha}u^{\beta} + \frac{2}{c^2}u^{(\alpha}q^{\beta)} + \mathsf{P}^{\alpha\beta}_A \right)\frac{\partial n}{\partial x^{\beta}} \label{eq:momentumcons-A}\end{aligned}$$ \[eq:momentumconservation2\] respectively. It is straightforward to see that (\[eq:momentumconservation-M\])–(\[eq:momentumconservation-A\]) can be obtained from each other using the relation ${{\Pi}}^{\alpha}_M = n^2{{\Pi}}^{\alpha}_A$. The system of equations presented above can be closed if we choose a thermodynamic relationship between certain quantities. For a close-to-equilibrium system, the pressure tensor becomes nearly isotropic, and we can write $\mathsf{P}^{\alpha\beta} \approx Ph^{\alpha\beta}$, where $P = h_{\alpha\beta}\mathsf{P}^{\alpha\beta}/3$. From (\[eq:pressure-minkowski\]) and (\[eq:pressure-abraham\]), we obtain $$\label{eq:pressuredefs} P_{M} = {\textstyle\frac{1}{3}}\mu \, \qquad \, P_{A} = {\textstyle\frac{1}{3}}n^{-2}\mu ,$$ respectively. The remaining freedom in the equations is removed by choosing the observer. 
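The isotropic closure can be illustrated numerically (a sketch with an arbitrary toy spectrum, not part of the paper): in the comoving frame ($u^{\alpha} = 0$) every photon has $|w| = c/n$, so the trace of the Abraham pressure tensor is exactly $\mu/n^2$, while isotropy of the directions makes the tensor diagonal, recovering $P_A = \mu/(3n^2)$ of (\[eq:pressuredefs\]).

```python
import numpy as np

# Isotropic photon gas in the comoving frame: w = (c/n) l with l uniformly
# distributed on the sphere; P_A^{ij} = (1/c^2) sum hbar*omega w^i w^j should
# approach (mu / 3n^2) delta^{ij}.
rng = np.random.default_rng(1)
c, n, hbar = 1.0, 1.5, 1.0

N = 400_000
v = rng.normal(size=(N, 3))
ell = v / np.linalg.norm(v, axis=1)[:, None]   # isotropic unit vectors
omega = rng.uniform(0.5, 2.0, size=N)          # arbitrary isotropic spectrum
w = (c / n) * ell                              # |w| = c/n, since u = 0

mu = hbar * omega.sum()
P_A = (hbar / c**2) * np.einsum('i,ij,ik->jk', omega, w, w)
P_scalar = np.trace(P_A) / 3                   # -> mu / (3 n^2)
```

The trace relation holds exactly; the off-diagonal elements vanish only statistically, at the $1/\sqrt{N}$ level.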
We will here use the particle frame, where $u^{\alpha} = 0$. Using these choices, we obtain a set of equations independent of the choice of momentum definition, in terms of the energy density and heat flux, according to $$\frac{\partial\mu}{\partial t} + \frac{\partial q^{\alpha}}{\partial x^{\alpha}} = -\frac{\mu}{n}\frac{\partial n}{\partial t} , \label{eq:energy-comoving}$$ and $$\frac{\partial q^{\alpha}}{\partial t} + \frac{c^2}{3n^2}\delta^{\alpha\beta}\frac{\partial\mu}{\partial x^{\beta}} = \frac{c^2\mu}{n^3}h^{\alpha\beta}\frac{\partial n}{\partial x^{\beta}} - \frac{q^{\alpha}}{n}\frac{\partial n}{\partial t} , \label{eq:momentum-comoving}$$ from (\[eq:momentum-energy1\])–(\[eq:momentum-energy2\]), (\[eq:energy\]), and (\[eq:momentumconservation-M\])–(\[eq:momentumconservation-A\]), respectively. From these we may derive general wave equations for the energy density and energy flux for radiation in diffractive media. Covariant conservation laws =========================== In this section we will make use of the $1+3$ orthonormal frame (ONF) approach (see [@Ellis-vanElst] for an overview), since it allows for a general relativistic treatment, simplifies calculations, and gives the natural Cartesian-like reference frame for an observer moving with the timelike frame direction. All vector and tensor quantities will be projected onto this frame. We define a set of frame vectors $\{ \bm{e}_a \}$, $a = 0, \ldots, 3$ such that $\bm{e}_a\cdot\bm{e}_b = \eta_{ab}$ gives the constant coefficients of the metric: $g_{ab} = \eta_{ab} = \mathrm{diag}\,(-1, 1, 1, 1)$, i.e., a Lorentz frame. The Ricci rotation coefficients $\Gamma^a\!_{bc}$, antisymmetric in the first two indices, giving the kinematics of spacetime, are defined by $\nabla_c\bm{e}_b = \Gamma^a\!_{bc}\bm{e}_a$. Here $\nabla_c$ is the covariant derivative with respect to the Lorentz frame. 
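The wave-equation remark on the comoving-frame system (\[eq:energy-comoving\])–(\[eq:momentum-comoving\]) can be made concrete with a short symbolic sketch (an illustration, not from the paper; it assumes a constant refractive index and one spatial dimension, so the right-hand sides vanish). Eliminating $q$ gives $\partial_t^2\mu = [c^2/(3n^2)]\,\partial_x^2\mu$, i.e. a radiative signal speed $c/(n\sqrt{3})$:

```python
import sympy as sp

t, x = sp.symbols('t x')
c, n, k, omega = sp.symbols('c n k omega', positive=True)
mu = sp.Function('mu')(t, x)
q = sp.Function('q')(t, x)

# Comoving-frame equations for constant n (source terms vanish), 1d:
energy = sp.diff(mu, t) + sp.diff(q, x)                       # = 0
flux = sp.diff(q, t) + (c**2 / (3 * n**2)) * sp.diff(mu, x)   # = 0

# d(energy)/dt - d(flux)/dx cancels the mixed derivatives of q,
# leaving the wave equation for the energy density.
wave = sp.expand(sp.diff(energy, t) - sp.diff(flux, x))

# Plane-wave ansatz mu ~ exp(i(k x - omega t)) gives the dispersion relation.
plane = sp.exp(sp.I * (k * x - omega * t))
wave_plane = sp.diff(plane, t, 2) - (c**2 / (3 * n**2)) * sp.diff(plane, x, 2)
dispersion = sp.solve(sp.simplify(wave_plane / plane), omega)
```

With the positivity assumptions, `dispersion` contains the single root $\omega = ck/(n\sqrt{3})$.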
The distribution function $\tilde{\mathscr{N}}$ will now be a function of the canonical phase space variables $x^a$ and $k_a$, and the conservation of phase space density can be written $$\label{eq:fourkinetic} \frac{d\tilde{\mathscr{N}}}{d\lambda} = \hat{L}[\tilde{\mathscr{N}}] = \dot{x}^a\bm{e}_a(\tilde{\mathscr{N}}) + \dot{k}_a\frac{\partial\tilde{\mathscr{N}}}{\partial k_a} = \mathscr{C} ,$$ where the overdot stands for $d/d\lambda$, with $\lambda$ an affine parameter along the photon path, and we have introduced the Liouville operator $\hat{L} = \dot{x}^a\bm{e}_a + \dot{k}_a\partial/\partial k_a$. Furthermore, for the sake of generality, we have added the collisional operator $\mathscr{C}$, representing photon emission and absorption [^1]. If $f^a$ are the external forces acting on the pencil of light, the covariant derivative along the photon path is $$\frac{Dk_b}{d\lambda} = \dot{k}_b - \Gamma^c\!_{ba}k_c\dot{x}^a = f_b .$$ With the Hamiltonian $H = H(x^a,k_a)$ in the eight-dimensional phase space, the equations of motion thus read $$\begin{aligned} && \dot{x}^a = \frac{\partial H}{\partial k_a} , \label{eq:group2} \\ && \dot{k}_a = -\bm{e}_a(H) + \Gamma^c\!_{ab}k_c\dot{x}^b, \label{eq:force2}\end{aligned}$$ \[eq:fourhamilton\] generalising (\[eq:group1\])–(\[eq:force1\]). Here the last term of (\[eq:force2\]) gives the gravitational contribution to the equations of motion. We denote the observer four-velocity by $U^a$, normalised such that $U^aU_a = -c^2$. We partially fix the frame by letting the observer four-velocity $U^a = \delta^a_0$, and split spacetime quantities with respect to $U^a$. The space metric orthogonal to $U^a$ takes the form $$h_{ab} = \eta_{ab} + U_aU_b/c^2 ,$$ and the wavevector can be written $$k_a = \omega U_a/c^2 + k\ell_a ,$$ where $\omega = -U^ak_a$, $k = (h^{ab}k_ak_b)^{1/2}$, and $\ell_a = h_a\!^bk_b/k$. We note that $\ell^a\ell_a = 1$ and $\ell_aU^a = 0$. 
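To illustrate (\[eq:group2\])–(\[eq:force2\]) (a toy sketch, not from the paper): in flat spacetime the rotation-coefficient term drops out, and for the dispersion relation $\omega = kc/n$ the spatial equations reduce to $d\mathbf{x}/dt = (c/n)\boldsymbol{\ell}$ and $d\mathbf{k}/dt = (\omega/n)\nabla n$ when parametrized by coordinate time. Integrating through a hypothetical linear profile $n(y) = n_0 + g\,y$ (all numbers below are arbitrary choices) shows the ray bending towards larger $n$ while $\omega = |\mathbf{k}|c/n$ is conserved:

```python
import numpy as np

# 2d gradient-index ray tracer, flat spacetime, static medium, c = 1.
c = 1.0
n0, g = 1.5, 0.2                            # hypothetical profile n(y) = n0 + g*y

def n_of(pos):
    return n0 + g * pos[1]

grad_n = np.array([0.0, g])                 # spatial gradient of n

pos = np.array([0.0, 0.0])
kvec = np.array([1.0, 0.0])                 # launched along +x
omega0 = np.linalg.norm(kvec) * c / n_of(pos)

dt = 1e-4
for _ in range(50_000):                     # simple forward-Euler integration
    n = n_of(pos)
    ell = kvec / np.linalg.norm(kvec)
    pos = pos + dt * (c / n) * ell          # dx/dt = (c/n) l
    kvec = kvec + dt * (omega0 / n) * grad_n  # dk/dt = (omega/n) grad n

omega_end = np.linalg.norm(kvec) * c / n_of(pos)
```

The ray climbs towards the high-index region (mirage-like refraction), and the frequency drift measures only the integrator error.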
In these variables, the dispersion relation (\[eq:dispersion\]) retains its form. There is a certain arbitrariness in the choice of Hamiltonian. We choose the Hamiltonian $H$ such that $H = 0$ gives the dispersion relation (and $\partial H/\partial k_a \neq 0$ on the dispersion surface), and the variables $x^a, \omega, k$, and $\ell_a$ are treated as independent, with the vanishing Hamiltonian as a constraint. Suppose the dispersion relation takes the form $$\omega = W(x^a, k, \ell_a) .$$ Then $$H = -\omega + W(x^a, k, \ell_a)$$ satisfies the necessary criteria listed above, giving the equations of motion $$\begin{aligned} && \dot{x}^a = U^a + \frac{\partial W}{\partial k_a}, \label{eq:velocity}\\ && \dot{k}_a = -\bm{e}_a(W) + \Gamma^c\!_{ab}k_c\dot{x}^b ,\end{aligned}$$ via (\[eq:group2\])–(\[eq:force2\]), and, moreover, $\dot{H} = \bm{e}_0(H) = \bm{e}_0(W)$. With (\[eq:dispersion\]), $W(x^a,k) = kc/n(x^a)$, and $$\begin{aligned} && \frac{\partial W}{\partial k_a} = \frac{c\ell^a}{n}, \label{eq:wk} \\ && \bm{e}_a(W) = - \frac{\omega}{n}\bm{e}_a(n) . \label{eq:wx}\end{aligned}$$ Thus, with the particular dispersion relation (\[eq:dispersion\]), the kinetic equation (\[eq:fourkinetic\]) becomes $$\label{eq:fourkinetic2} \left( U^a + \frac{c\ell^a}{n} \right)\bm{e}_a(\tilde{\mathscr{N}}) + \left[ \frac{\omega}{n}\bm{e}_a(n) + \Gamma^c\!_{ab}k_c\dot{x}^b\right] \frac{\partial\tilde{\mathscr{N}}}{\partial k_a} = \mathscr{C} .$$ In the literature, the common definition of the energy-momentum tensor is $$T^{ab}_M = \hbar\int k^ak^b\tilde{\mathscr{N}}\,{{\bm{\kappa}}} ,$$ where ${{\bm{\kappa}}} = |\mathrm{det}\,g|^{-1/2}(\delta(H)/\omega)\,d^4k$ is the invariant volume measure in momentum space. This definition is consistent with the Minkowski definition of the photon momentum in diffractive media. Furthermore, in non-diffractive media, this is a conserved quantity, due to the one-to-one correspondence between $\dot{x}^a$ and $k_a$. 
However, in diffractive media, this correspondence is lost, since $$\label{eq:x-k} \dot{x}^a = c^2\frac{k^a}{\omega} + \frac{c}{n}(1 - n^2)\ell^a = \frac{c^2}{\omega}\left[ g^{ab} - \left( 1 - \frac{1}{n^2} \right)h^{ab} \right]k_b$$ \[see (\[eq:velocity\]) and (\[eq:wk\])\]. Thus, in diffractive media, the energy-momentum tensor may alternatively be defined according to $$\label{eq:en-mom} T^{ab}_A = \hbar\int\frac{\omega^2}{c^2} \frac{\dot{x}^a}{c} \frac{\dot{x}^b}{c} \tilde{\mathscr{N}}\,{{\bm{\kappa}}} = \frac{\hbar}{c^4}\int\omega \dot{x}^a\dot{x}^b \tilde{\mathscr{N}}\,\delta(H)\,d^4k ,$$ in accordance with Abraham’s photon momentum definition. We note that when $n \rightarrow 1$, we regain the common definition of the energy-momentum tensor in non-diffractive media. Moreover, since (\[eq:x-k\]) holds in general, we have the relation $$T^{ab}_A = \left[ g^a\!_c - \left( 1 - \frac{1}{n^2} \right)h^a\!_c \right] \left[ g^b\!_d - \left( 1 - \frac{1}{n^2} \right)h^b\!_d \right]T^{cd}_M \label{eq:transformation}$$ between the two definitions of the energy-momentum tensor. From any energy-momentum tensor, we may define the fluid quantities used in the preceding sections. We thus have the relativistic energy density $\mu = T_{ab}U^aU^b$ with respect to $U^a$, the relativistic momentum density $\Pi^a = -h^{ab}T_{bc}U^c$, the scalar pressure $P = (c^2T_{ab}h^{ab})/3$, and the full anisotropic pressure $\mathsf{P}_{ab} = T_{cd}h^c\!_ah^d\!_b$. By construction $\Pi^a$ and $\mathsf{P}_{ab}$ are spacelike quantities, i.e., orthogonal to $U^a$. The equations of conservation of energy and momentum can be obtained by multiplying (\[eq:fourkinetic2\]) by $\hbar\omega^2\dot{x}^b/c^4$, and integrating over the momentum space variables on the dispersion surface. 
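As a quick single-photon consistency check of (\[eq:transformation\]) before turning to the conservation law (an illustration with arbitrary numbers, flat spacetime, $c = 1$): with $k^a = (\omega, n\omega\ell^i)$ and $\dot{x}^a = (1, \ell^i/n)$, the mixed matrix $M^a{}_c = g^a{}_c - (1 - 1/n^2)h^a{}_c$ maps $k^a$ onto $\omega\dot{x}^a$, so $T_A = M\,T_M\,M^{\rm T}$ holds for the single-photon tensors (the common factor $\hbar$ and the momentum-space integration are dropped here).

```python
import numpy as np

n, omega = 1.5, 2.0
ell = np.array([0.6, 0.8, 0.0])            # unit propagation direction

# Mixed-index matrix g^a_c - (1 - 1/n^2) h^a_c in the Lorentz frame.
M = np.diag([1.0, 1 / n**2, 1 / n**2, 1 / n**2])

k_up = np.concatenate(([omega], n * omega * ell))   # k^a, dispersion k = n*omega
xdot = np.concatenate(([1.0], ell / n))             # group four-velocity

T_M = np.outer(k_up, k_up)                 # single-photon Minkowski tensor ~ k k
T_A = omega**2 * np.outer(xdot, xdot)      # single-photon Abraham tensor
```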
The result is the energy-momentum conservation equation $$\begin{aligned} &&\fl \nabla_aT^{ab}_A = 2\left(1 - \frac{1}{n^2} \right)\Gamma^c\!_{ad}\left[ n^2g^{a(b}h_{ce} - h^{a(b}g_{ce} - n^2\left(1 - \frac{1}{n^2}\right)h^{a(b}h_{ce} \right]T^{d)e}_A \nonumber \\ &&\fl\quad + \frac{1}{c^2}\left\{ \left[ g^{ab} - \left(1 - \frac{1}{n^2} \right)h^{ab} \right]U_dU_e + 2U^aU_dg^b\!_e - 4 h^{(a}\!_dg^{b)}\!_e\right\}T^{de}_A \frac{\nabla_a n}{n} + C^b , \label{eq:conservation}\end{aligned}$$ where $C^b = \int (\hbar\omega^2\dot{x}^b/c^4) \mathscr{C} \, {{\bm{\kappa}}}$ is the collisional contribution, and $\nabla_a$ is the covariant derivative with respect to the Lorentz frame. We note that as $n \rightarrow 1$, all terms on the right hand side, except $C^b$, vanish and we obtain the standard conservation equation for non-diffractive media. Moreover, using the transformation (\[eq:transformation\]) we obtain the corresponding equation in terms of the Minkowski variables. The first term on the right hand side gives the coupling between the spacetime kinematics and the refractive properties of the medium. Thus, when we have either flat spacetime or non-diffractive media, this term vanishes. The second term arises from the spatial and/or time-like dependence of the refractive index. Equation (\[eq:conservation\]) gives the evolution of energy and momentum of a radiation fluid in diffractive media on an arbitrary curved spacetime. Thus, (\[eq:conservation\]) is the proper starting point for analysing photon gas dynamics in general relativistic gravity. As compared to the case of a Minkowski background spacetime, there is a significant alteration to the equation due to the novel coupling between gravity and the diffractive media, as given by the first term on the right hand side of (\[eq:conservation\]). As a simple example of curvature effects, we use (\[eq:conservation\]) to derive the appropriate energy conservation equation in a Friedmann–Lemaître–Robertson–Walker spacetime, i.e. 
spatial homogeneity and isotropy are assumed. The spacetime is characterised by the time-dependent scalar quantities $\mu$ (energy density), $P$ (pressure), $\Theta$ (expansion), and $n$ (refractive index) (see [@Ellis-vanElst] for details). Thus, (\[eq:conservation\]) gives $$\label{eq:cosm} \frac{d\mu}{dt} = -\frac{4}{3}\Theta \mu - \frac{3\mu}{n}\frac{dn}{dt} ,$$ where we have used $P_A = \mu/3n^2$ \[see (\[eq:pressuredefs\])\], and $d/dt = U^a\nabla_a$. Of course, the analysis of the equation requires a specified density dependence of the refractive index, and therefore also a modified equation of state (for an example, see [@AmelinoCamelia; @Alexander-Brandenberger-Magueijo]), and will not be pursued further here. We note the possibility to formulate (\[eq:conservation\]) in terms of an effective energy-momentum tensor in certain cases, in particular for homogeneous spacetimes [@Triginer-Zimdahl-Pavon]. For example, we may define $\mu_{\mathrm{eff}} = n^3\mu$, so that (\[eq:cosm\]) takes the standard form for this new effective density. In general though, this is not a consistent approach [@Triginer-Zimdahl-Pavon], a difficulty that can apparently only be overcome by introducing an effective geometry (see, e.g., [@Anderson-Spiegel]). Conclusions =========== We have discussed the consequences of the different electromagnetic momentum definitions, due to Minkowski and Abraham respectively, in the context of radiation fluid dynamics. Starting from a kinetic description, a set of fluid equations was derived and compared for the different definitions of fluid variables. It was found that by expressing the equations in terms of certain variables, they became independent of the choice of momentum definition. Finally, from a general relativistic kinetic theory the energy-momentum conservation equation valid for a radiation gas in a refractive medium on an arbitrary spacetime, including collisional effects, was derived and discussed in a cosmological setting. 
This work was supported by the Swedish Research Council through the contract No. 621-2004-3217. The author would like to thank Gert Brodin and Chris Clarkson for helpful discussions. References {#references .unnumbered} ========== [99]{} D. Mihalas and B. Weibel–Mihalas, *Foundations of Radiation Hydrodynamics* (Dover Publications, New York, 1999). E.A. Milne, in *Selected Papers on the Transfer of Radiation*, ed. D.H. Menzel (Dover Publications, New York, 1966). E.G. Harris, Phys. Rev. **138** B479 (1965). G.C. Pomraning, Astrophys. J. **153** 321 (1968). S. Énomé, Pub. Astronom. Soc. Japan **21** 367 (1969). J. Bičak and P. Hadrava, Astron. Astrophys. **44** 389 (1975). J.L. Anderson and E.A. Spiegel, Astrophys. J. **202** 454 (1975). S. Kichenassamy and R.A. Krikorian, Phys. Rev. D **32** 1866 (1985). J.D. Jackson, *Classical Electrodynamics* (John Wiley & Sons, New York, 1975). W. Gordon, Ann. Phys. **72** 28 (1923). L.D. Landau and E.M. Lifshitz, *The Classical Theory of Fields* (Pergamon Press, Oxford, 1975). S.R. de Groot and L.G. Suttorp, *Foundations of Electrodynamics* (North–Holland, Amsterdam, 1972). I. Brevik, Phys. Rep. **52** 133 (1979). R.V. Jones and J.C.S. Richards, Proc. R. Soc. London, Ser. A **221** 480 (1954). R.V. Jones and B. Leslie, Proc. R. Soc. London, Ser. A **360** 347 (1978). R. Loudon, J. Mod. Opt. **49** 821 (2002). D.F. Nelson, Phys. Rev. A **44** 3985 (1991). R. Loudon, L. Allen and D.F. Nelson, Phys. Rev. E **55** 1071 (1997). A. Feigel, Phys. Rev. Lett. **92** 020404 (2004). G.F.R. Ellis and H. van Elst, in M. Lachieze-Rey (ed.), [*Theoretical and Observational Cosmology*]{}, NATO Science Series, Kluwer Academic Publishers (1998) [*gr-qc/9812046v4*]{}. G. Amelino-Camelia, Int. J. Mod. Phys. D **11** 35 (2002). S. Alexander, R. Brandenberger and J. Magueijo, Phys. Rev. D **67** 081301(R) (2003). J. Triginer, W. Zimdahl and D. Pavón, Class. Quantum Grav. **13** 403 (1996). 
[^1]: The process of emission and absorption is thoroughly treated in any book on radiation transport, see e.g. [@Mihalas], and will therefore not be discussed further here, apart from the following brief comment. As an example, in the case of applications to the early universe, the major contribution to the collisional term would be in the form of Thomson scattering, but there are of course numerous other scattering events that could dominate in other parameter regimes.
--- abstract: 'Light vector meson leptoproduction is analyzed on the basis of the generalized parton distributions. Our results on the cross section and spin effects are in good agreement with experiment at HERA, COMPASS and HERMES energies. Predictions for the $A_{UT}$ asymmetry for various reactions are presented.' --- **Cross sections and spin asymmetries in vector meson leptoproduction** S.V. Goloskokov Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna 141980, Moscow region, Russia In this report, the investigation of vector meson leptoproduction is based on the handbag approach where the leading twist amplitude at high $Q^2$ factorizes into hard meson electroproduction off partons and the Generalized Parton Distributions (GPDs) [@fact]. The higher twist (TT) amplitude, which is essential in the description of spin effects, exhibits infrared singularities, which signal the breakdown of factorization [@mp]. These problems can be solved in our model [@gk06] where the subprocesses are calculated within the modified perturbative approach, in which quark transverse degrees of freedom accompanied by Sudakov suppressions are considered. The quark transverse momentum regularizes the end-point singularities in the TT amplitudes so that they can be calculated. In the model, the amplitude of vector meson production off the proton with positive helicity reads as a convolution of the partonic subprocess ${\cal H}^V$ and GPDs $H^i\,(\widetilde{H}^i)$ $$\begin{aligned} \label{amptt-nf-ji} {\cal M}^{Vi}_{\mu'+,\mu +} &=& \frac{e}{2}\, \sum_{a}e_a\,{\cal C}_a^{V}\, \sum_{\lambda} \int_{\xi}^1 d\xb {\cal H}^{Vi}_{\mu'\lambda,\mu \lambda} H_i(\xb,\xi,t) ,\end{aligned}$$ where $i$ denotes the gluon and quark contributions, the sum over $a$ runs over the quark flavors $a$, and $C_a^{V}$ are the corresponding flavor factors [@gk06]; $\mu$ ($\mu'$) is the helicity of the photon (meson), and $\xb$ is the momentum fraction of the parton with helicity $\lambda$. 
The skewness $\xi$ is related to Bjorken-$x$ by $\xi\simeq x/2$. In the region of small $x \leq 0.01$, gluons give the dominant contribution. At larger $x \sim 0.2$, the quark contribution plays an important role [@gk06]. To estimate GPDs, we use the double distribution representation [@mus99] $$H_i(\xb,\xi,t) = \int_{-1} ^{1}\, d\beta \int_{-1+|\beta|} ^{1-|\beta|}\, d\alpha \delta(\beta+ \xi \, \alpha - \xb) \, f_i(\beta,\alpha,t).$$ The GPDs are related to PDFs through the double distribution function $$\label{ddf} f_i(\beta,\alpha,t)= h_i(\beta,t)\, \frac{\Gamma(2n_i+2)}{2^{2n_i+1}\,\Gamma^2(n_i+1)} \,\frac{[(1-|\beta|)^2-\alpha^2]^{n_i}} {(1-|\beta|)^{2n_i+1}}\,.$$ The powers $n_i=1,2$ ($i=$ gluon, sea, valence contributions) and the functions $h_i(\beta,t)$ are proportional to parton distributions [@gk06]. To calculate GPDs, we use the CTEQ6 fits of PDFs for the gluon, valence quarks and sea [@CTEQ]. Note that the $u(d)$ sea and strange sea are not flavor symmetric. In agreement with the CTEQ6 PDFs, we suppose that $H^u_{sea} = H^d_{sea} = \kappa_s H^s_{sea}$, with $$\label{kapp} \kappa_s=1+0.68/(1+0.52 \ln(Q^2/Q^2_0))$$ The partonic subprocess ${\cal H}^{V}$ contains a hard part, which is calculated perturbatively, and a $k_\perp$-dependent wave function. It contains the leading and higher twist terms describing the longitudinally and transversally polarized vector mesons, respectively. The quark transverse momenta considered in the hard propagators decrease the $LL$ amplitude, and the cross section comes into agreement with the data. For the $TT$ amplitude these terms regularize the end-point singularities. We consider the gluon, sea and valence quark GPD contributions to the amplitude. This permits us to analyse vector meson production from low $x$ to moderate values of $x$ ($\sim 0.2$) typical of HERMES and COMPASS. 
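As a numerical illustration of this construction (a sketch with a toy input PDF, not the CTEQ6 fits used in the analysis): integrating out the delta function gives $H(\xb,\xi) = \xi^{-1}\int d\beta\, f(\beta,(\xb-\beta)/\xi)$, and with the $n_i = 1$ profile one can check the forward limit $H(\xb,\xi\to 0) \to h(\xb)$ and the $\xi$-independence of the lowest moment:

```python
import numpy as np
from math import gamma

def h(beta):
    # Toy valence-like PDF, an arbitrary stand-in for h_i(beta, t).
    return np.where((beta > 0) & (beta < 1), beta * (1 - beta)**3, 0.0)

def profile(beta, alpha, ni=1):
    # Normalized double-distribution profile of (eq. ddf).
    b = np.abs(beta)
    norm = gamma(2 * ni + 2) / (2**(2 * ni + 1) * gamma(ni + 1)**2)
    body = ((1 - b)**2 - alpha**2)**ni / (1 - b)**(2 * ni + 1)
    return norm * np.where(np.abs(alpha) < 1 - b, body, 0.0)

beta_grid = np.linspace(1e-6, 1 - 1e-6, 20001)

def gpd(x, xi):
    # H(x, xi) = (1/xi) int dbeta h(beta) * profile(beta, (x - beta)/xi)
    integrand = h(beta_grid) * profile(beta_grid, (x - beta_grid) / xi) / xi
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(beta_grid)))

# Forward limit: H(x, xi -> 0) reduces to the input PDF h(x).
forward = gpd(0.3, 0.01)

# Lowest moment: int dx H(x, xi) equals int dbeta h(beta) = 1/20 for any xi.
x_grid = np.linspace(-0.2 + 1e-4, 1 - 1e-4, 801)
vals = np.array([gpd(x, 0.2) for x in x_grid])
moment = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x_grid)))
```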
The obtained results [@gk06] are in reasonable agreement with experiments at HERA [@h1; @zeus], HERMES [@hermes] and COMPASS [@compass] energies for electroproduced $\rho$ and $\phi$ mesons.

![ **(a)** The ratio of cross sections $\sigma_\phi/\sigma_\rho$ at HERA energies (full line) and HERMES (dashed line). Data are from H1 (solid), ZEUS (open squares) and HERMES (solid circles).](golos-f1.ps){width="7.7cm" height="5.9cm"}
![ **(b)** The longitudinal cross section for $\phi$ at $Q^2=3.8\,\mbox{GeV}^2$. Data: HERMES, ZEUS, H1; open circle: CLAS data point.](golos-f2.ps){width="7.7cm" height="5.8cm"}

In Fig. 1a, we show the strong deviation of the $\sigma_\phi/\sigma_\rho$ ratio from the value 2/9 at HERA energies and low $Q^2$, which is caused by the flavor symmetry breaking (\[kapp\]) between $\bar u$ and $\bar s$. The valence quark contribution to $\sigma_\rho$ decreases this ratio at HERMES energies. It was found that the valence quarks contribute substantially only at HERMES energies. At lower energies this contribution becomes small and the cross section decreases with energy. This contradicts the CLAS results, which indicate a substantial increase of $\sigma_\rho$ for $W<5\, \mbox{GeV}$. On the other hand, we found a good description of $\phi$ production at CLAS [@clas] (Fig. 1b). This means that we have a problem only with the valence quark contribution at low energies. 
![ **(a)** The ratio of longitudinal and transverse cross sections for $\rho$ production at low $Q^2$. Full line: HERA; dashed-dotted: COMPASS; dashed: HERMES.](golos-f3.ps){width="7.7cm" height="6cm"}
![ **(b)** Predicted $A_{UT}$ asymmetry at COMPASS for various mesons. Dotted-dashed line: $\rho^0$; full line: $\omega$; dotted line: $\rho^+$; dashed line: $K^{* 0}$.](golos-f4.ps){width="7.7cm" height="6cm"}

![ **(a)** The integrated cross sections of vector meson production at $W=10\, \mbox{GeV}$. Lines are the same as in Fig. 2b.](golos-f5.ps){width="7.7cm" height="6cm"}
![ **(b)** Predictions for the $A_{UT}$ asymmetry at $W=8\, \mbox{GeV}$. Preliminary COMPASS data at this energy are shown [@sandacz].](golos-f6.ps){width="7.7cm" height="6cm"}

The results for the $R=\sigma_L/\sigma_T$ ratio are shown in Fig. 2a for HERA, COMPASS and HERMES energies. We found that our model describes well the $W$ and $Q^2$ dependencies of $R$. The analysis of the target $A_{UT}$ asymmetry for electroproduction of various vector mesons was carried out in our approach too [@gk08t]. This asymmetry is sensitive to an interference between the $H$ and $E$ GPDs. We constructed the GPD $E$ from double distributions and constrained it by the Pauli form factors of the nucleon, positivity bounds and sum rules. The GPDs $H$ were taken from our analysis of the electroproduction cross section. Predictions for the $A_{UT}$ asymmetry at $W=10\, \mbox{GeV}$ are given for the $\omega$, $\phi$, $\rho^+$ and $K^{*0}$ mesons [@gk08t] in Fig. 2b. It can be seen that we predict a sizeable negative asymmetry for $\omega$ and a large positive asymmetry for $\rho^+$ production. In these reactions the valence $u$ and $d$ quark GPDs contribute to the production amplitude in the combination $\sim E^u-E^d$ and do not compensate each other ($E^u$ and $E^d$ have different signs). The opposite happens for $\rho^0$ production, where the amplitude contains the combination $\sim E^u+E^d$ and the valence quark contributions largely compensate each other. As a result, the $A_{UT}$ asymmetry for $\rho^0$ is predicted to be quite small. 
Unfortunately, it is much more difficult to measure the $A_{UT}$ asymmetry for $\omega$ and $\rho^+$ production than for $\rho^0$, because the cross sections of the former reactions are much smaller than that of $\rho^0$ (Fig. 3a). Our prediction for the $A_{UT}$ asymmetry of $\rho^0$ production at COMPASS reproduces well the preliminary experimental data (Fig. 3b). Thus, we can conclude that vector meson electroproduction at small $x$ is a good tool to probe the GPDs. In different energy ranges, information about quark and gluon GPDs can be extracted from the cross section and spin observables of vector meson electroproduction. This work is supported in part by the Russian Foundation for Basic Research, Grant 09-02-01149, and by the Heisenberg-Landau program. [99]{} X. Ji, Phys. Rev. [**D55**]{} (1997) 7114;\ A.V. Radyushkin, Phys. Lett. [**B380**]{} (1996) 417;\ J.C. Collins [*et al.*]{}, Phys. Rev. [**D56**]{} (1997) 2982. L. Mankiewicz, G. Piller, Phys. Rev. [**D61**]{} (2000) 074013;\ I.V. Anikin, O.V. Teryaev, Phys. Lett. [**B554**]{} (2003) 51. S.V. Goloskokov, P. Kroll, Eur. Phys. J. [**C50**]{} (2007) 829; ibid [**C53**]{} (2008) 367. I. V. Musatov, A. V. Radyushkin, Phys. Rev. [**D61**]{} (2000) 074027. J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky, W. K. Tung, JHEP [**0207**]{} (2002) 012. C. Adloff et al. \[H1 Collab.\], Eur. Phys. J. [**C13**]{} (2000) 371;\ S. Aid et al. \[H1 Collab.\], Nucl. Phys. [**B468**]{} (1996) 3. J. Breitweg et al. \[ZEUS Collab.\], Eur. Phys. J. [**C6**]{} (1999) 603;\ S. Chekanov et al. \[ZEUS Collab.\], Nucl. Phys. [**B718**]{} (2005) 3;\ S. Chekanov et al. \[ZEUS Collab.\], PMC Phys. [**A1**]{} (2007) 6. A. Airapetian et al. \[HERMES Collab.\], Eur. Phys. J. [**C17**]{} (2000) 389;\ A. Borissov, \[HERMES Collab.\], “Proc. of Diffraction 06”, [**PoS**]{} (DIFF2006), 014. D. Neyret \[COMPASS Collab.\], “Proc. of SPIN2004”, Trieste, Italy, 2004;\ V. Y. Alexakhin et al. \[COMPASS Collab.\], Eur. 
Phys. J. [**C52**]{} (2007) 255. J.P. Santoro et al. \[CLAS Collab.\], Phys. Rev. [**C78**]{} (2008) 025210. A. Sandacz \[COMPASS Collab.\], these proceedings. S.V. Goloskokov, P. Kroll, Eur. Phys. J. [**C59**]{} (2009) 809.
--- abstract: 'We present intermediate-resolution (R$\sim$1000) spectra in the $\sim$3500-10000 Å range of 14 globular clusters in the magellanic irregular galaxy NGC 4449 acquired with the Multi Object Double Spectrograph on the Large Binocular Telescope. We derived Lick indices in the optical and the CaII-triplet index in the near-infrared in order to infer the clusters’ stellar population properties. The inferred cluster ages are typically older than $\sim$9 Gyr, although ages are derived with large uncertainties. The clusters exhibit intermediate metallicities, in the range $-1.2\lesssim$\[Fe/H\]$\lesssim-0.7$, and typically sub-solar \[$\alpha/Fe$\] ratios, with a peak at $\sim-0.4$. These properties suggest that i) during the first few Gyrs NGC 4449 formed stars slowly and inefficiently, with galactic winds having possibly contributed to the expulsion of the $\alpha$-elements, and ii) globular clusters in NGC 4449 formed relatively “late”, from a medium already enriched in the products of type Ia supernovae. The majority of clusters appear also under-abundant in CN compared to Milky Way halo globular clusters, perhaps because of the lack of a conspicuous N-enriched, second-generation of stars like that observed in Galactic globular clusters. Using the cluster velocities, we infer the dynamical mass of NGC4449 inside 2.88 kpc to be M($<$2.88 kpc)=$3.15^{+3.16}_{-0.75} \times 10^9~M_\odot$. We also report the serendipitous discovery of a planetary nebula within one of the targeted clusters, a rather rare event.' author: - | F. Annibali,$^{1}$[^1] E. Morandi,$^{2}$ L. L. Watkins,$^{3}$ M. Tosi,$^{1}$ A. Aloisi,$^{3}$ A. Buzzoni,$^{1}$ F. Cusano,$^{1}$ M. Fumana,$^{4}$ A. Marchetti,$^{4}$ M. Mignoli,$^{1}$ A. Mucciarelli,$^{1,2}$ D. Romano,$^{1}$ and R. P. 
van der Marel,$^{3}$\ $^{1}$INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Piero Gobetti, 93/3, 40129 - Bologna, Italy\ $^{2}$Dipartimento di Fisica e Astronomia, Università di Bologna, via Piero Gobetti 93/2, Bologna, Italy\ $^{3}$Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA\ $^{4}$INAF-Istituto di Astrofisica Spaziale e Fisica Cosmica, Via Bassini 15, I-20133 Milano, Italy date: 'Accepted XXX. Received YYY; in original form ZZZ' title: 'LBT/MODS spectroscopy of globular clusters in the irregular galaxy NGC 4449.' --- \[firstpage\] galaxies: abundances — galaxies: dwarf — galaxies: individual: NGC 4449 — galaxies: irregular — galaxies: starburst — galaxies: star clusters: general Introduction {#intro} ============ Dwarf galaxies are extremely important for our understanding of cosmology and galaxy evolution for several reasons: i) they are the most frequent type of galaxy in the Universe [e.g. @lilly95]; ii) in the Lambda Cold Dark Matter ($\Lambda$CDM) cosmological scenario, they are considered to be the first systems to collapse, supplying the building blocks for the formation of more massive galaxies through merging and accretion [e.g. @kw93]; iii) dwarf galaxies that experienced star formation (SF) at redshift z$>$6 are believed to be responsible for the re-ionization of the Universe [e.g. @bouwens12]; iv) due to their shallow potential well, dwarf galaxies are highly susceptible to loss of gas and metals (either because of galactic outflows powered by supernova explosions or because of environmental processes) and thus provide an important contribution to the enrichment of the intergalactic medium (IGM). Deriving how and when dwarf galaxies formed their stars is therefore fundamental to studies of galaxy formation and evolution. 
The most straightforward and powerful way to derive a galaxy’s star formation history (SFH) is by resolving its stellar content: high signal-to-noise photometry of individual stars directly translates into deep color-magnitude diagrams (CMDs) that can be modeled to reconstruct the behavior of the star formation rate (SFR) as a function of look-back time [e.g. @tosi91; @bertelli92; @gallart96; @dolphin97; @gro12; @cignoni16]. Indeed, since the advent of the Hubble Space Telescope (HST), the study of resolved stellar populations in a large number of galaxies within the Local Group and beyond has received a tremendous boost [e.g. @annibali09; @angst; @mcquinn10a; @annibali13; @weisz14; @sacchi16]. However, the look-back time that can be reached critically depends on the depth of the CMD: accurate information on the earliest epochs is only possible by reaching the oldest, faintest main sequence turnoffs and this translates, even with the superb spatial resolution of HST, into a typical distance of $\lesssim$1 Mpc from us. This distance limit corresponds in practice to a limit on the galaxy types for which the most ancient star formation history can be accurately recovered, the large majority of systems included in this volume being dwarf spheroidals (dSphs). With the obvious exception of the Magellanic Clouds, late-type, star-forming dwarfs are found at larger distances than dSphs, and the most active nearby star-forming dwarfs (e.g. NGC 1569, NGC 1705, NGC 4449) are as far as D${\;\lower.6ex\hbox{$\sim$}\kern-7.75pt\raise.65ex\hbox{$>$}\;}$3 Mpc. Spectroscopic studies of (unresolved) star clusters provide an alternative approach to gather insights into the past SFH. Such studies are particularly valuable in those cases where the resolved CMD does not reach much below the tip of the red-giant branch (RGB), making it impossible to resolve the details of the SFH prior to 1$-$2 Gyr ago. 
Indeed, star clusters are present in all types of galaxies and are suggested to be the birth site of many (possibly most) stars. The formation of massive star clusters is thought to be favored by the occurrence of intense star formation events, as suggested by the presence of a correlation between the cluster formation efficiency and the galaxy SFR density [e.g. @larsen00; @goddard10; @adamo11b; @adamo15]. Therefore, clusters can be powerful tracers of the star formation process in their host galaxies. Despite the large number of photometric studies of clusters in dwarf galaxies performed so far [e.g. @hunter00; @hunter01; @billett02; @annibali09; @adamo11a; @anni11; @cook12; @pellerin12], only a few systems (early-type dwarfs, in the majority of cases) have been targeted with 6-10 m class telescopes to derive cluster spectroscopic ages and chemical composition [e.g. @puzia00; @strader03; @strader05; @conselice06; @sharina07; @sharina10; @strader12]. Although spectroscopic ages older than $\sim$2 Gyr are derived with large uncertainties, both because of the well known age-metallicity degeneracy and because of the progressively lower age-sensitivity of the Balmer absorption lines with increasing look-back time, the additional information on the chemical abundance ratios can provide stringent constraints on the galaxy SFH back to the earliest epochs. In this paper we present deep spectroscopy obtained with the Large Binocular Telescope (LBT) of clusters in the Magellanic irregular galaxy NGC 4449 ($\alpha_{2000}$=$12^h 28^m 11^{s}.9$ $\delta_{2000}$=$+44^{\circ} 05^{'} 40^{"}$) at a distance of $3.82 \pm 0.18$ Mpc from us [@annibali08]. With an integrated absolute magnitude of $M_B$=$-18.2$, NGC 4449 is $\approx$1.4 times more luminous than the Large Magellanic Cloud (LMC). 
Its metallicity has been derived through spectroscopy of H II regions and planetary nebulae, and ranges between 12 + $\log$(O/H) = 8.26 $\pm$ 0.09 and 12 + $\log$(O/H) = 8.37 $\pm$ 0.05, close to the LMC value, although @kumari17 found a metallicity as low as 12 + $\log$(O/H) = 7.88 $\pm$ 0.14 in the very central galaxy regions. NGC 4449 is remarkable for several reasons: it is one of the most luminous and active nearby irregular galaxies, with a current SFR of $\sim 1$ M$_{\odot}$ yr$^{-1}$ [@mcquinn10b; @sacchi17]; it has a conspicuous population of clusters ($\approx$80), with a specific frequency of massive clusters higher than in nearby spirals and in the LMC [@anni11]; it hosts an old, very massive, elliptical cluster, associated with two tails of young stars, that has been suggested to be the nucleus of a former gas-rich satellite galaxy undergoing tidal disruption by NGC 4449 [@annibali12]; it is the first dwarf galaxy where a stellar tidal stream has been discovered [@delgado12; @rich12]; it has a very extended HI halo ($\sim 90$ kpc in diameter) which is a factor of $\sim 10$ larger than the optical diameter of the galaxy, and that appears to rotate in the opposite direction to the gas in the center [@hunter98]. All these studies suggest that NGC 4449 experienced a complex evolution and was possibly built, at least in part, through the accretion of satellite galaxies. From optical CMDs of the stars resolved with the Advanced Camera for Surveys (ACS) on board HST, @mcquinn10b and @sacchi17 derived the SFH of NGC 4449. These analyses indicate that NGC 4449 enhanced its star formation activity $\sim$500 Myr ago, while the rate was much lower at earlier epochs; however, the impossibility of reaching the old main-sequence turnoffs or even the red clump/horizontal branch with the available data implies that the SFH of NGC 4449 is very uncertain prior to $\sim$1-2 Gyr ago. 
Star clusters appear mostly unresolved in NGC 4449; @anni11 performed integrated-light photometry of $\sim$80 young and old clusters identified in the ACS images, and found that their colors are compatible, under the assumption of a metallicity of $\sim$1/4 solar, with a continuous age distribution over the whole Hubble time. However, only spectroscopy can allow for real progress by breaking, to some extent, the age-metallicity degeneracy and by providing information on the element abundance ratios. Motivated by this goal, we performed a spectroscopic follow-up of a few clusters in the @anni11 sample using the Multi Object Double Spectrographs on the Large Binocular Telescope (LBT/MODS). In Section \[data\_reduction\] we describe the observations and the data reduction; in Section \[index\_section\] we compute optical and near-infrared absorption-line indices; in Section \[stpop\] we derive the cluster stellar population parameters; in Section \[cl58\_section\] we present our serendipitous discovery of a candidate PN within one of the clusters; in Section \[dynamics\] we obtain an estimate of the dynamical mass of NGC 4449 from cluster velocities; in Sections \[discussion\] and  \[conclusions\] we discuss and summarize our results. ![image](fig1.eps){width="\textwidth"} Observations and data reduction {#data_reduction} =============================== Clusters to be targeted for spectroscopy were selected from the catalog of @anni11, hereafter A11, based on the HST/ACS F435W (B), F555W (V), and F814W (I) images acquired within GO program 10585, PI Aloisi. We selected 14 clusters located in relatively external regions of NGC 4449 to avoid the severe crowding toward the most central star-forming regions. The observations were performed with the Multi Object Double Spectrographs on the Large Binocular Telescope (LBT/MODS) on January 21 and 22, 2013, within program 2012B\_23, run B (PI Annibali). 
The 1”$\times$8” slit mask superimposed on the ACS V image is shown in Figure \[image\], while Figure \[mosaic\] shows color-composite ACS images for the target clusters. The observations were obtained with the blue G400L (3200$-$5800 Å) and the red G670L (5000$-$10000 Å) gratings on the blue and red channels in dichroic mode for 8$\times$1800 sec, for a total integration time of 4 h. Notice that the instrumental sensitivity in dichroic mode is very low in the $\sim$5500-5800 Å range. The seeing varied from $\sim$0.7” to $\sim$0.9”, and the average airmass was $\sim$1.4. The journal of the observations is provided in Table \[obs\]. Three Lick standard stars of F-K spectral types (see Table \[lick\_std\]) were also observed during our run with a 1”$\times$8” longslit with the purpose of calibrating our measurements into the widely used Lick-IDS system (see Section \[index\_section\] for more details).

  N.   Date-obs     Exptime   Seeing   Airmass   PosA            ParA
  ---- ------------ --------- -------- --------- --------------- ---------------
  1    2013-01-20   1800 s    0.9”     1.7       $-35^{\circ}$   $-78^{\circ}$
  2    2013-01-20   1800 s    0.9”     1.5       $-35^{\circ}$   $-83^{\circ}$
  3    2013-01-20   1800 s    0.8”     1.3       $-35^{\circ}$   $-88^{\circ}$
  4    2013-01-20   1800 s    0.7”     1.2       $-35^{\circ}$   $-93^{\circ}$
  5    2013-01-21   1800 s    0.9”     1.7       $-35^{\circ}$   $-76^{\circ}$
  6    2013-01-21   1800 s    0.8”     1.5       $-35^{\circ}$   $-81^{\circ}$
  7    2013-01-21   1800 s    0.8”     1.4       $-35^{\circ}$   $-86^{\circ}$
  8    2013-01-21   1800 s    0.7”     1.3       $-35^{\circ}$   $-91^{\circ}$

  : Journal of LBT/MODS observations for clusters in NGC 4449[]{data-label="obs"}

Col. (1): exposure number; Col. (2): date of observations; Col. (3): exposure time in seconds; Col. (4): average seeing in arcsec; Col. (5): average airmass; Col. (6): position angle in degrees; Col. (7): parallactic angle in degrees. 
  Name        Spectral Type   Date-obs     Exptime   V
  ----------- --------------- ------------ --------- -----
  HD 74377    K3 V            2013-01-20   1 s       8.2
  HD 84937    F5 VI           2013-01-20   1 s       8.3
  HD 108177   F5 VI           2013-01-21   1 s       9.7

  : Observed Lick standard stars.[]{data-label="lick_std"}

Lick standard stars were observed with a 1”$\times$8” longslit; Col. (1): star name; Col. (2): spectral type; Col. (3): date of observations; Col. (4): exposure time in seconds; Col. (5): V apparent magnitude. Bias subtraction, flat-field correction, and wavelength calibration were performed with the Italian LBT Spectroscopic reduction Facility at INAF-IASF Milano, producing the calibrated two-dimensional (2D) spectra for the individual sub-exposures. The accuracy of the wavelength calibration obtained from the pipeline was checked against prominent sky lines, adopting the nominal sky wavelengths tabulated in @uves_sky. In fact, while arc lamps typically provide a good calibration of the wavelength variation with pixel position, a zero-point offset may be expected because the light from the night sky and the light from the lamp do not follow the same path through the optics. In our case, we found that the $\Delta \lambda$ offset depended on slit position and was lower than $\sim$1 Å for both the blue and the red spectra; this zero-point correction was applied to our data. Sky subtraction was performed on the 2D calibrated spectra; to this purpose, we used the [*[background]{}*]{} task in IRAF[^2], typically choosing the windows at the two opposite sides of the cluster. In principle, this procedure removes, together with the sky, also the contribution from NGC 4449’s unresolved background. In particular, the [*[background]{}*]{} subtraction should remove from the cluster spectra possible emission-line contamination due to the presence of diffuse ionized gas in NGC 4449; however, as we will see in Section \[em\_corr\], the subtraction is not always perfect (e.g. 
because of the highly variable emission background) and some residual emission may still be present in some clusters after background subtraction. This is shown in Figure \[bg\_sub\], where we present the case of cluster CL 72 as an illustrative example. Here, the emission line spectrum is highly variable within the slit, preventing a perfect background removal; as a result, the final background-subtracted spectrum appears still contaminated by residual emission lines. Ionized gas emission is detected in all the clusters of our sample except clusters CL 75, CL 79, and CL  77; as expected, the strongest emission is observed for the clusters located in the vicinity of star forming regions, i.e. clusters CL 39, CL 67, and CL 8. Cluster CL 58 shows strong emission as well but, at variance with all the other clusters, its emission appears “nuclear”, i.e. confined within the cluster itself (see Fig. \[bg\_sub\]). We will come back to the case of cluster CL 58 later in Section \[cl58\_section\]. ![image](fig2.eps){width="\textwidth"} ![Two examples of cluster spectra contaminated by emission lines. Portions of the spectra around the H$\beta$ and \[O III\]$\lambda\lambda$4959,5007 Å lines are shown. For each cluster, the total-combined un-subtracted spectrum is in the higher row, while the background-subtracted spectrum is just below. Cluster CL 72 suffers contamination from the diffuse ionized gas in NGC 4449, and some residual emission is still present after background subtraction. In cluster CL 58, the (modest) contamination from the diffuse ionized gas in NGC 4449 is optimally removed with the background subtraction; on the other hand, the cluster exhibits a quite strong centrally concentrated emission.[]{data-label="bg_sub"}](fig3.eps){width="\columnwidth"} We combined the sky-subtracted 2D spectra into a single frame for the blue and the red channels, respectively. 
The [*apall*]{} task in the [*twodspec*]{} IRAF package was then used to extract the 1D spectra from the 2D ones. In order to refine the emission subtraction, the [*apall*]{} task was run with the background subtraction option “on”. To derive the effective spectral resolution, we used the combined 1D spectra with no sky subtraction and measured the FWHM of the most prominent sky lines; this resulted in resolutions of R$\sim$1040 and R$\sim$1500 at 4358 Å and 7000 Å, respectively. The blue and red 1D spectra were flux calibrated using the sensitivity curves from the Italian LBT spectroscopic reduction pipeline; the curves were derived using the spectrophotometric standard star Feige 66, observed in dichroic mode with a 5”-wide slit on January 20, 2013. To obtain the red and blue sensitivity curves, the observed standard was compared with reference spectra in the HST CALSPEC database. Atmospheric extinction corrections were applied using the average extinction curve available from the MODS calibration webpage at http://www.astronomy.ohio-state.edu/MODS/Calib/. As discussed in our study of H II regions and PNe in NGC 4449 based on LBT/MODS data acquired within run A of the same program [@annibali17], this may introduce an uncertainty in flux calibration as high as $\sim15\%$ below $\sim$4000 Å. Differential Atmospheric Refraction {#daf_section} ----------------------------------- ![image](fig4.eps){width="\textwidth"} Our observations were significantly affected by differential atmospheric refraction [@filippenko82] due to the displacement between the slit position angle and the parallactic angle, coupled with the relatively high airmass (see the journal of observations in Table \[obs\]). In order to quantify this effect as a function of wavelength, we used HST/ACS imaging in F435W (B), F555W (V), F814W (I), and F658N (H$\alpha$) from our GO program 10585 and archive HST/ACS imaging in F502N (O III), F550M and F660N (H$\alpha$) from GO program 10522 (PI Calzetti). 
The F435W, F555W and F814W images cover a field of view of $\sim380''\times200''$, obtained with two ACS pointings, while the F502N, F550M, F660N and F658N images have a smaller field of view of $\sim$200”$\times$200”. As a consequence, the most central clusters have photometric coverage in all seven bands, while the more “peripheral” ones (i.e. clusters CL 76, CL 75, CL 79, CL 3, CL 77, and CL 74) are detected only in B, V and I. Aperture photometry of the clusters was performed on the ACS images with the [*Polyphot*]{} IRAF task, adopting a polygonal aperture resembling the aperture used to extract the 1D spectra from the 2D ones. Synthetic magnitudes in the same ACS filters were then derived with the [*Calcphot*]{} task in the [*Synphot*]{} package by convolving the MODS spectra with the ACS bandpasses. Figure \[calibration\] shows the ratio of the ACS fluxes to the synthetic [*Synphot*]{} fluxes as a function of wavelength. The fit to the data points was obtained adopting a radial basis function (RBF) available in Python [^3]. Fig. \[calibration\] shows a significant increase of the $F_{ACS}/F_{SYN}$ ratio toward bluer wavelengths, while the trend flattens out in the red. This behaviour translates into a loss of flux in the blue region of our spectra, as expected in the case of a significant effect from differential atmospheric refraction. We also notice that, at red wavelengths, the $F_{ACS}/F_{SYN}$ ratio is not equal to unity but lower, indicating that the fluxes from the spectra are overestimated. 
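The ratio-fitting step lends itself to a short sketch. The example below uses `scipy.interpolate.Rbf` (one possible Python RBF implementation; the text does not specify which one was adopted), and the pivot wavelengths and ratio values are illustrative placeholders rather than the measured photometry:

```python
import numpy as np
from scipy.interpolate import Rbf

# Illustrative F_ACS / F_SYN data points per band (made up for this sketch;
# in the paper they come from Polyphot aperture photometry on the ACS images
# and Calcphot synthetic magnitudes of the MODS spectra).
pivot_aa = np.array([4329.0, 5023.0, 5361.0, 5581.0, 6584.0, 6599.0, 8045.0])
ratio = np.array([1.35, 1.18, 1.12, 1.08, 0.97, 0.96, 0.93])

# Smoothed radial-basis-function fit; smooth > 0 lets the curve average over
# the scatter instead of passing exactly through every point.
correction = Rbf(pivot_aa, ratio, function='multiquadric', smooth=0.01)

# Correct a spectrum: multiply the observed flux by the fitted ratio.
wave = np.linspace(4000.0, 9000.0, 5)
print(correction(wave))
```

Applying `correction(wave)` to a full wavelength grid then rescales the spectrum band by band, which is all the differential-refraction correction amounts to once the ratio curve is in hand.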
As discussed in @smith07 and @annibali17, flux calibrations are usually tied to reference point source observations, and therefore include an implicit correction for the fraction of the point source light that falls outside the slit; however, in the limit of a perfect uniform, slit-filling extended source, the diffractive losses out of the slit are perfectly balanced by the diffractive gains into the slit from emission beyond its geometric boundary; therefore, point-source-based calibrations likely cause an overestimate of the extended source flux. We used the curves obtained from the fits in Fig. \[calibration\] to correct our spectra; however, it is worth noticing that the line strength indices that will be used in Section \[index\_section\] do not depend on absolute flux calibration (since they are normalized to local continua) and depend only very slightly on relative flux calibration as a function of wavelength. Radial Velocities ----------------- Radial velocities for the clusters were derived through the [*fxcor*]{} IRAF task by convolving the cluster spectra both with theoretical simple stellar populations and with observed stellar spectra. In the blue, the convolution was performed within the $\sim$3700-5500 Å range, masking, when present, emission lines; in the red, we used the CaII triplet in the range $\sim$8000-8400 Å. The velocities measured from the blue and red spectra present a systematic offset of $\lesssim$10 km/s, as shown in Figure \[vel\_cfr\]. This sets an upper limit to spurious sources of error, such as the differential atmospheric refraction already discussed in Section \[daf\_section\]: indeed, the fact that the clusters are mis-centered in the slit as a function of wavelength causes also a small velocity shift. However, Figure \[vel\_cfr\] demonstrates that this error is modest and that the derived velocities are suitable for a dynamical analysis (see Section \[dynamics\]). 
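The core of an [*fxcor*]{}-style velocity measurement can be sketched in a few lines. This toy version (our own illustration, not the IRAF implementation, which additionally apodizes, Fourier-filters and fits the correlation peak) rebins both spectra onto a log-wavelength grid, where a Doppler shift becomes a uniform translation:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def xcorr_velocity(wave, flux_obj, flux_tmpl, dv=1.0, vmax=500.0):
    """Return the velocity (km/s) maximizing the object/template correlation."""
    # Common logarithmic wavelength grid: a radial velocity v shifts
    # ln(lambda) by ~v/c, independently of wavelength.
    logw = np.linspace(np.log(wave[0]), np.log(wave[-1]), wave.size)
    f_obj = np.interp(logw, np.log(wave), flux_obj)
    f_obj -= f_obj.mean()
    f_tmp0 = np.interp(logw, np.log(wave), flux_tmpl)
    best_v, best_c = 0.0, -np.inf
    for v in np.arange(-vmax, vmax + dv, dv):
        # Template redshifted by v: evaluate it at ln(lambda) - v/c.
        f_shift = np.interp(logw, logw + v / C_KMS, f_tmp0)
        f_shift -= f_shift.mean()
        c = np.dot(f_obj, f_shift)
        if c > best_c:
            best_v, best_c = v, c
    return best_v

# Self-test on fake data: a CaII-like absorption line redshifted by 250 km/s.
wave = np.linspace(8400.0, 8700.0, 3000)
line = lambda c0: 1.0 - 0.5 * np.exp(-0.5 * ((wave - c0) / 1.5) ** 2)
v = xcorr_velocity(wave, line(8542.0 * (1.0 + 250.0 / C_KMS)), line(8542.0))
```

The recovered `v` is close to the 250 km/s input shift, to within the `dv` scan step.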
We find that the velocity uncertainties provided by the [*fxcor*]{} task are lower for the red than for the blue spectra, likely because of the higher signal-to-noise of the CaII triplet lines compared to the lines used in the blue spectral region. Notice also that in general the CaII triplet lines are easier to use for velocity measurements than blue lines, e.g. because they are less sensitive to template mismatch. Furthermore, they are better calibrated in wavelength thanks to the wealth of sky lines in the red spectral region. For all these reasons, we hereafter adopt the cluster velocities obtained from the CaII triplet for our study. ![Comparison between cluster radial velocities obtained with the blue and with the red MODS spectra, respectively. The solid line is the one-to-one relation, while the dotted line is the linear fit to the data points. Notice that the displayed velocities have not been corrected here for the motion of the Sun. \[vel\_cfr\]](fig5.eps){width="\columnwidth"} The velocities are provided in Table \[rvel\]. Corrections for the motion of the Sun were performed with the [*rvcorrect*]{} task in IRAF. In addition, we provide radial velocities for NGC 4449’s H II regions and PNe observed during run A. The H II region and PN velocities were derived from the most prominent emission lines, and also in this case the absolute wavelength calibration was anchored to strong sky lines observed in the spectra. The cluster radial velocities will be used in Section \[dynamics\] to infer basic dynamical properties of NGC 4449.

  -------- --------------- -------------- -------------
  Name     R.A. (J2000)    Dec (J2000)    Vhelio
           \[hh mm ss\]    \[dd mm ss\]   \[km/s\]
  CL 79    12 27 59.87     44 04 43.34    246$\pm$15
  CL 58    12 28 06.44     44 05 55.12    289$\pm$14
  CL 8     12 28 18.79     44 06 23.08    227$\pm$20
  CL 76    12 28 03.88     44 04 15.26    113$\pm$20
  CL 67    12 28 09.34     44 04 38.99    201$\pm$22
  CL 39    12 28 14.60     44 05 00.43    190$\pm$15
  CL 20    12 28 18.82     44 05 19.61    115$\pm$15
  CL 52    12 28 06.64     44 06 07.78    270$\pm$14
  CL 3     12 28 16.44     44 07 29.49    162$\pm$23
  CL 77    12 27 57.53     44 05 28.16    180$\pm$14
  CL 27    12 28 07.72     44 07 13.42    197$\pm$23
  CL 24    12 28 07.32     44 07 21.54    214$\pm$26
  PN 1     12 28 04.126    44 04 25.14    225$\pm$14
  PN 2     12 28 03.540    44 04 34.80    196$\pm$10
  PN 3     12 28 03.972    44 05 56.78    173$\pm$9
  H II-1   12 28 12.626    44 05 04.35    191$\pm$7
  H II-2   12 28 09.456    44 05 20.35    168$\pm$5
  H II-3   12 28 17.798    44 06 32.49    225$\pm$8
  H II-4   12 28 16.224    44 06 43.32    218$\pm$23
  H II-5   12 28 13.002    44 06 56.38    217$\pm$16
  H II-6   12 28 13.925    44 07 19.04    232$\pm$4
  -------- --------------- -------------- -------------

  : Heliocentric radial velocities of clusters, H II regions and PNe in NGC 4449.[]{data-label="rvel"}

Col. (1): Cluster, H II region or PN identification; Col. (2-3): right ascension and declination in J2000; Col. (4): heliocentric velocities in km/s. Final Spectra ------------- The cluster spectra were transformed into the rest-frame system using the IRAF task [*dopcor*]{}. The final rest-frame, calibrated spectra in the blue and red MODS channels are shown in Figures \[spectra\_blue\_a\], \[spectra\_blue\_b\], \[spectra\_red\_a\] and \[spectra\_red\_b\]. ![image](fig6.eps){width="\textwidth"} ![image](fig7.eps){width="\textwidth"} ![image](fig8.eps){width="\textwidth"} ![image](fig9.eps){width="\textwidth"} Narrow Band Indices {#index_section} =================== From the final calibrated spectra, we derived optical and near-infrared narrow-band indices that quantify the strength of stellar absorption lines with respect to local continua. 
More specifically, we computed the Lick indices in the optical [@worthey94; @wo97], and the CaII triplet index CaT\* defined by @cenarro01a and @cenarro09. Lick Indices {#lick_section} ------------ For the computation of the Lick indices, the adopted procedure is the same as described in our previous papers [e.g. @rampazzo05; @dE]. Because of the low MODS sensitivity in the $\sim$5500-5800 Å range (see Section \[data\_reduction\]), the Lick indices Fe5709, Fe5781, NaD, TiO$_1$ and TiO$_2$ were not considered in our study. The MODS spectra were first degraded to match the wavelength-dependent resolution of the Lick-IDS system (FWHM$\sim$8.4 Å at 5400 Å versus a MODS resolution of FWHM$\sim$5 Å at the same wavelength) by convolution with a Gaussian kernel of variable width. Then, the indices were computed on the degraded MODS spectra using the refined passband definitions of @trager98. In order to calibrate the “raw” indices into the Lick-IDS system, we used the three Lick standard stars (HD 74377, HD 84937, HD 108177) observed during our run with the same 1”$\times$8” slit used for the clusters. Following the prescription by @wo97, we degraded the standard star spectra to the Lick resolution, computed the indices, and compared our measurements with the values provided by @worthey94 to derive linear transformations in the form EW$_{Lick}$ = $\alpha$ EW$_{raw}$ + $\beta$, where $EW_{raw}$ and $EW_{Lick}$ respectively are the raw and the calibrated indices, and $\alpha$ and $\beta$ are the coefficients of the linear transformation. The transformations were derived only for those indices with three Lick star measurements [^4], while we discarded the remaining ones from our study. In the end, we were left with 18 out of the 25 Lick indices defined by @worthey94 and @wo97: CN$_1$, CN$_2$, Ca4227, G4300, Fe4383, Ca4455, Fe4531, C$_2$4668, H$\beta$, Fe5015, Mg$_1$, Mg$_2$, Mg$b$, Fe5270, Fe5335, Fe5406, H$\gamma$A, and H$\gamma$F. 
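The resolution-degrading step can be sketched as below. For simplicity this assumes a uniform wavelength step and a single target FWHM, whereas the Lick-IDS resolution actually varies with wavelength (hence the variable-width kernel used in practice); the numbers mirror the FWHM values quoted in the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~0.4247

def degrade_resolution(wave, flux, fwhm_in=5.0, fwhm_out=8.4):
    """Broaden a spectrum from FWHM_in to FWHM_out (both in Angstrom) by
    Gaussian convolution; Gaussian widths add in quadrature, so the kernel
    FWHM is sqrt(fwhm_out**2 - fwhm_in**2)."""
    dlam = wave[1] - wave[0]  # assumes a uniform wavelength step
    fwhm_kernel = np.sqrt(fwhm_out ** 2 - fwhm_in ** 2)
    sigma_pix = fwhm_kernel * FWHM_TO_SIGMA / dlam
    return gaussian_filter1d(flux, sigma_pix)

# Quick check on a spike: the flux is conserved while the peak spreads out.
wave = np.arange(5000.0, 5100.0, 0.5)
spike = np.zeros(wave.size)
spike[100] = 1.0
out = degrade_resolution(wave, spike)
```

A wavelength-dependent version would simply evaluate `fwhm_out` from the tabulated Lick resolution curve and convolve in narrow chunks.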
The linear transformations, which are shown in Figure \[lickstar\] and Table \[lick\_transf\], are consistent with small zero-point offsets for the majority of indices. No correction due to the cluster velocity dispersion was applied: in fact, while this is an important effect in the case of integrated galaxy indices, it is reasonable to neglect this correction for stellar clusters. The errors on the indices were determined through the following procedure. Starting from each cluster spectrum, we generated 1000 random modifications by adding a wavelength-dependent Poissonian fluctuation corresponding to the spectral noise. Then we repeated the index computation procedure on each “perturbed” spectrum and derived the standard deviation. To these errors, we added in quadrature the errors on the emission corrections (see Section \[em\_corr\]) and the scatter around the derived transformations to the Lick system (see Table \[lick\_transf\]).

  Index        Unit   $\alpha$        $\beta$            $rms$
  ------------ ------ --------------- ------------------ -------
  CN$_1$       mag    1.13$\pm$0.06   -0.005$\pm$0.004   0.002
  CN$_2$       mag    1.0$\pm$0.3     -0.01$\pm$0.01     0.01
  Ca4227       Å      0.77$\pm$0.01   0.04$\pm$0.03      0.02
  G4300        Å      1.09$\pm$0.08   -0.6$\pm$0.3       0.2
  Fe4383       Å      0.84$\pm$0.02   -0.5$\pm$0.1       0.07
  Ca4455       Å      1.3$\pm$0.3     0.1$\pm$0.2        0.2
  Fe4531       Å      0.89$\pm$0.03   0.63$\pm$0.06      0.05
  C$_2$4668    Å      0.9$\pm$0.2     0.0$\pm$0.2        0.2
  H$\beta$     Å      0.98$\pm$0.03   -0.11$\pm$0.08     0.04
  Fe5015       Å      1.1$\pm$0.1     0.0$\pm$0.4        0.3
  Mg$_1$       mag    1.03$\pm$0.06   0.035$\pm$0.006    0.006
  Mg$_2$       mag    1.01$\pm$0.01   0.032$\pm$0.004    0.003
  Mg$b$        Å      0.94$\pm$0.02   0.1$\pm$0.1        0.07
  Fe5270       Å      0.96$\pm$0.03   0.01$\pm$0.07      0.05
  Fe5335       Å      1.05$\pm$0.03   -0.16$\pm$0.05     0.04
  Fe5406       Å      1.06$\pm$0.03   -0.03$\pm$0.03     0.02
  H$\gamma$A   Å      0.97$\pm$0.02   -0.3$\pm$0.1       0.1
  H$\gamma$F   Å      0.98$\pm$0.02   -0.05$\pm$0.07     0.07

  : Linear transformations to the Lick-IDS system.[]{data-label="lick_transf"}

$\alpha$ and $\beta$ are the coefficients of the transformation $EW_{Lick} = \alpha\,EW_{raw} + \beta$, where $EW_{raw}$ and 
$EW_{Lick}$ respectively are the raw and the calibrated indices; $rms$ is the root-mean-square deviation around the best linear fit. ![image](fig10.eps){width="\textwidth"} Near infrared Indices {#cat_section} --------------------- In the near infrared, the CaII triplet ($\lambda\lambda$8498, 8542, 8662 Å) is the strongest feature observed. Intermediate-strength atomic lines of Fe I ($\lambda\lambda$8514, 8675, 8689, 8824 Å), Mg I ($\lambda$8807 Å), and Ti I ($\lambda$8435 Å) are also present. The Paschen series is apparent in stars hotter than G3 and can significantly contaminate the measurement of the CaII triplet. In order to overcome this problem, @cenarro01a defined a new Ca II triplet index, named CaT\*, which is corrected for the contamination of the Paschen series. Adopting their definition, we computed the CaT\* index for our clusters in NGC 4449. In order to account for the resolution mismatch between our spectra (FWHM$\sim$5 Å in the near infrared) and the stellar library used by @cenarro01a (FWHM$\sim$1.5 Å), we used the prescriptions by @vaz03. We did not consider the Mg I and sTiO indices defined by @cenarro09, since these turned out to be highly affected by a large spectral noise due to the difficulty in subtracting the near-infrared sky background in our spectra (see Figs. \[spectra\_red\_a\] and  \[spectra\_red\_b\]). Emission contamination {#em_corr} ---------------------- As discussed in Section \[data\_reduction\], the majority of our spectra suffer contamination from the diffuse ionized gas present in NGC 4449, and the removal of the emission-line contribution through background subtraction is not always satisfactory. Even small amounts of contamination from ionized gas can have a significant impact on the derived integrated ages [@serven10; @concas17]: the effect of emission is to fill in and weaken the Balmer absorption lines, resulting in apparently older integrated ages. 
Figures \[spectra\_blue\_a\] to \[spectra\_red\_b\] show the presence of residual emission in clusters CL 20, CL 39, CL 67, CL 76, CL 8, CL 72, CL 58, and CL 27. The remaining cluster spectra (CL 75, CL 79, CL 52, CL 3, CL 77, CL 24) can reasonably be considered emission-free. We derived a correction to the Balmer absorption indices H$\beta$ and H$\gamma$ (the H$\delta$ indices were not used, as discussed in Section \[lick\_section\]) for clusters CL 20, CL 67, CL 76, CL 72, CL 58 and CL 27, while the emission was too high in clusters CL 39 and CL 8 to attempt a recovery of the true absorption lines. We used the [*deblend*]{} function in the [*splot*]{} IRAF task to measure the flux of the \[O III\]$\lambda$5007 emission line; then, we used the relation between the \[O III\] and the H$\beta$ fluxes to compute the correction to the Balmer indices. It is well known that the $F_{[O~III]}/F_{H\beta}$ ratio is subject to a large dispersion, since it strongly depends on the properties of the ionized gas. To overcome this difficulty and properly correct our data, we used the results from our study of the H II regions in NGC 4449 [@annibali17]: from the six H II regions analyzed, we obtain an average flux ratio of $$\label{eq1} \frac{F_{[O~III]}}{F_{H\beta}}=3.2\pm0.7,$$ from which the H$\beta$ emission can be derived. For H II regions with temperature $T_e=10,000$ K and density $n_e=100 \ cm^{-3}$, the H$\gamma$ emission is obtained from the theoretical relations of @sh95, once the H$\beta$ flux is known: $$\frac{F_{H\gamma}}{F_{H\beta}}\sim0.47.$$ In the end, we computed the corrections to the Balmer indices by normalizing the $F_{H\gamma}$ and $F_{H\beta}$ fluxes to the “pseudo-continua” defined in the Lick system. The uncertainties on the corrections were computed by propagating both the errors on the measured fluxes and the dispersion in Eq.(\[eq1\]). Our results are summarized in Table \[emission\_tab\]. 
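Schematically, the correction amounts to scaling the measured \[O III\] flux by the mean ratio of Eq. (\[eq1\]), applying the theoretical $H\gamma/H\beta$ ratio, and normalizing to the Lick pseudo-continua. A minimal sketch follows; the function name and the pseudo-continuum inputs are illustrative, not the code actually used for the paper:

```python
import numpy as np

def balmer_corrections(f_oiii, f_oiii_err, cont_hbeta, cont_hgamma,
                       ratio=3.2, ratio_err=0.7, hg_over_hb=0.47):
    """Balmer-index corrections from the measured [O III]5007 flux.

    cont_hbeta and cont_hgamma are the Lick pseudo-continuum flux densities
    at Hbeta and Hgamma (hypothetical inputs); fluxes in erg s^-1 cm^-2.
    """
    f_hbeta = f_oiii / ratio                           # Eq. (1)
    # propagate the flux error and the dispersion of the [O III]/Hbeta ratio
    f_hbeta_err = f_hbeta * np.hypot(f_oiii_err / f_oiii, ratio_err / ratio)
    f_hgamma = hg_over_hb * f_hbeta                    # Case B ratio ~ 0.47
    f_hgamma_err = hg_over_hb * f_hbeta_err
    # normalise the emission fluxes to the pseudo-continua -> Delta EW (A)
    return ((f_hbeta / cont_hbeta, f_hbeta_err / cont_hbeta),
            (f_hgamma / cont_hgamma, f_hgamma_err / cont_hgamma))
```

The returned $\Delta EW$ values are then added to the measured absorption indices, with their uncertainties folded into the index error budget in quadrature.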
We caution that the adopted emission line ratios may not be appropriate for cluster CL 58, where the emission source could be different from an H II region. For instance, if the emission lines were due to the presence of a planetary nebula, a more adequate ratio would be $F_{[OIII]5007}/F_{H\beta} \sim 10$ [@annibali17], resulting in lower Balmer corrections than those listed in Table \[emission\_tab\]. For this reason, we will consider the final corrected Balmer line strengths for cluster CL 58 as upper limits. For cluster CL 67, we could also directly derive the emission in H$\beta$ and H$\alpha$ by fitting the observed spectrum with combinations of Voigt profiles (in absorption) plus Gaussian profiles (in emission) [see @annibali17 for a similar case]. This allowed us to perform a consistency check of our procedure. From Table \[emission\_tab\] we get $F_{H\alpha}/F_{H\beta}=2.97\pm0.11$, in agreement with the theoretical value of $\sim$2.86 for H II regions [@oster89; @sh95]. Furthermore, Eq.(\[eq1\]) provides an emission in H$\beta$ of $(10\pm2) \times 10^{-17} erg \ s^{-1} cm^{-2}$, marginally consistent with the value of $(14.5\pm0.4) \times 10^{-17} erg \ s^{-1} cm^{-2}$ derived from the direct spectral fit. Notice that for cluster CL 67 the corrections provided in Table \[emission\_tab\] are those obtained from the direct spectral fit. The final corrected indices are provided in Table \[indices\_table\] in the Appendix.
  Cluster ID   F\[O III\]      FH$\beta$      FH$\alpha$   $\Delta EW_{H\gamma}$   $\Delta EW_{H\beta}$
  ------------ --------------- -------------- ------------ ----------------------- ----------------------
  CL 20        1.8$\pm$0.2     $-$            $-$          $0.15\pm0.04$           $0.19\pm0.05$
  CL 67        32.7$\pm$0.2    14.5$\pm$0.4   43$\pm$1     $0.78\pm0.02$           $1.67\pm0.04$
  CL 76        2.0$\pm$0.1     $-$            $-$          $0.08\pm0.02$           $0.11\pm0.02$
  CL 58        27.9$\pm$0.2    $-$            $-$          $0.4\pm0.1$             $0.7\pm0.2$
  CL 72        1.3$\pm$0.1     $-$            $-$          $0.13\pm0.03$           $0.18\pm0.04$
  CL 27        0.77$\pm$0.08   $-$            $-$          $0.04\pm0.01$           $0.06\pm0.01$
  ------------ --------------- -------------- ------------ ----------------------- ----------------------

: Emission fluxes and corrections to the Balmer absorption indices.[]{data-label="emission_tab"} Col. (1): cluster name; Col. (2)-(4): measured fluxes for \[O III\], H$\beta$, and H$\alpha$ in emission, in units of $10^{-17} \ erg \ s^{-1} cm^{-2}$; Col. (5)-(6): corrections (in Å) to be applied to the H$\gamma$ and H$\beta$ absorption indices.

Stellar population parameters for clusters in NGC 4449 {#stpop}
======================================================

In Figures \[mod1\] to \[mod4\] we compare the indices derived for the NGC 4449 clusters with our set of simple stellar population (SSP) models [@annibali07; @dE]. The models are based on the Padova SSPs [@bressan94 and references therein], and on the fitting functions (FFs) of @worthey94 and @wo97. The SSPs were computed for different element abundance ratios, where the departure from the solar-scaled composition is based on the index responses of @korn05. In the models displayed in Figs. \[mod1\] to \[mod3\], the elements O, Ne, Na, Mg, Si, S, Ca, and Ti are assigned to the $\alpha$-element group, and Cr, Mn, Fe, Co, Ni, Cu, and Zn to the Fe-peak group.
We assume that the elements within one group are enhanced/depressed by the same factor; the $\alpha$ and Fe-peak elements are respectively enhanced and depressed in the \[$\alpha$/Fe\]$>$0 models, while the opposite holds for the \[$\alpha$/Fe\]$<$0 models. Other elements, such as N and C, are left untouched and scale with the solar composition. We caution that all these assumptions are an over-simplification: in fact, recent studies suggest that Mg possibly behaves differently from the other $\alpha$-elements [see @pancino17, their Section 4.3], and it is well known that Galactic globular clusters exhibit an anti-correlation between Na and O, and between C and N [e.g. @carretta06]; an anti-correlation between Mg and Al has also been observed for the most metal-poor and/or most massive GCs [@pancino17]. Furthermore, notice that the stellar evolutionary tracks adopted in our models have solar-scaled chemical compositions [@fagottoa; @fagottob] and that only the effect of element abundance variations in the model atmospheres, as quantified by the @korn05 response functions, is included.

![image](fig11.eps){width="\textwidth"}

![image](fig12.eps){width="\textwidth"}

In Figures \[mod1\] to \[mod3\], we plot the Lick indices for our clusters against the metallicity-sensitive \[MgFe\]$^{'}$ index[^5], which presents the advantage of being mostly insensitive to Mg/Fe variations [@gonza93; @tmb03]. For comparison, we also show the indices derived by @puzia02 for Galactic clusters in the halo and in the bulge. In Fig. \[mod4\], NGC 4449’s clusters are instead plotted in optical-near infrared planes, with a Lick index versus CaT\*; here the models have solar-scaled composition since, to our knowledge, specific responses of the CaT\* index to individual element abundance variations have not been computed yet.
![image](fig13.eps){width="\textwidth"}

![image](fig14.eps){width="\textwidth"}

Discussion on individual indices {#ind_disc}
--------------------------------

The [**CN$_1$**]{} and [**CN$_2$**]{} indices measure the strength of the CN absorption band at $\sim$4150 Å, and exhibit a strong positive response to the abundances of C and N. On the other hand, they are almost insensitive to \[$\alpha$/Fe\] variations. In Fig. \[mod1\], the bulk of clusters in NGC 4449 nicely overlaps the SSP models. The MW clusters, instead, exhibit a significant offset from the models, a behaviour that has been explained as due to a significant nitrogen enhancement [e.g. @origlia02; @puzia02; @tmb03]. Three clusters in our sample, namely CL 72, CL 75, and CL 58, occupy the same high-CN-strength region as the Galactic globular clusters. In order to match the @puzia02 Galactic globular cluster data with their SSP models, @tmb03 needed to assume \[N/$\alpha$\]$=0.5$ (i.e. a factor of 3 enhancement in nitrogen with respect to the $\alpha$-elements). Notice that the CN features of the Galactic bulge are instead perfectly reproduced by models without N enhancement [@tmb03]. We will come back to the problem of the N abundance in NGC 4449’s clusters later in this paper. The [**Ca 4227**]{} index is dominated by the Ca $\lambda$4227 absorption line, strongly correlates with metallicity and, interestingly, is the only index besides CN$_1$ and CN$_2$ to be affected by changes in the C and N abundances (i.e., it anti-correlates with CN). While the MW GCs are shifted to lower Ca 4227 values compared to the models (a discrepancy that according to @tmb03 can be reconciled assuming \[N/$\alpha$\]$=0.5$ models), the NGC 4449 clusters agree rather well with the SSPs. This behaviour is consistent with the good match between data and models for the [**CN$_1$**]{} and [**CN$_2$**]{} indices. The [**Ca 4455**]{} index, on the other hand, is not a useful abundance indicator.
In fact, despite its name, it is a blend of many elements, and it has been shown to be actually insensitive to Ca [@tb95; @korn05]. It exhibits a very small dependence on the \[$\alpha$/Fe\] ratio. Our data are highly scattered around the models, with quite large errors on the index measurements. Hereafter, we will not consider this index for our cluster stellar population analysis. The [**G 4300**]{} index measures the strength of the G band and is highly sensitive to the CH abundance. It depends on both age and metallicity, with no significant sensitivity to the \[$\alpha$/Fe\] ratio, so that the G4300 vs. \[MgFe\]$^{'}$ plane can be used to separate age and metallicity to some extent. According to Fig. \[mod1\], our clusters have metallicities Z$\lesssim$0.004 and ages in the $\sim3 - 13$ Gyr range. However, @tmb03 noticed that the calibration of this index against Galactic globular cluster data is not convincing, so it is not clear whether G4300 can provide reliable results on the stellar population properties. The [**C$_2$4668**]{} index, formerly called Fe4668, is slightly sensitive to Fe and most sensitive to the C abundance. It exhibits a weak negative correlation with the \[$\alpha$/Fe\] ratio. The NGC 4449 cluster data are highly scattered in the C$_2$4668 vs \[MgFe\]$^{'}$ plane, possibly as a consequence of the large error in the C$_2$4668 measurement. As discussed by @tmb03, this index is not well calibrated against Galactic globular cluster data and is therefore not well suited for element abundance studies. Among the indices that quantify the strength of Fe lines, [**Fe 5270**]{} and [**Fe 5335**]{} are the most sensitive to \[$\alpha$/Fe\] variations. The other indices, [**Fe 4383**]{}, [**Fe 4531**]{}, and [**Fe 5015**]{}, contain blends of many metallic lines other than Fe (e.g. Ti I, Ti II, Ni I) and may be less straightforward than Fe 5270 and Fe 5335 to interpret. Fig.
\[mod3\] shows that models of different \[$\alpha$/Fe\] ratios are very well separated in the Fe 5270, Fe 5335 vs. \[MgFe\]$^{'}$ planes, with the NGC 4449 clusters preferentially located in the region of \[$\alpha$/Fe\]$<$0. The same behaviour is observed in the [**Mg$b$**]{} vs. [**\[MgFe\]$^{'}$**]{} plane. The Mg$b$ index quantifies the strength of the Mg I b triplet at 5167, 5173, and 5184 Å, and is the most sensitive to \[$\alpha$/Fe\] variations among the Mg indices. Indeed, the [**Mg$_1$**]{} and [**Mg$_2$**]{} dependence on \[$\alpha$/Fe\] is quite modest, in particular at low metallicities. The Mg$_2$ index is centered on the Mg I b feature, like Mg$b$, while Mg$_1$ samples MgH, Fe I, and Ni I lines at $\sim$4930 Å. Both Mg$_1$ and Mg$_2$ pseudo-continua are defined on a $\sim$400 Å baseline, much larger than the typical $\sim$100 Å baseline of the other Lick indices, implying that Mg$_1$ and Mg$_2$ are potentially affected by uncertainties in the relative flux calibration. This could explain the mismatch observed between our cluster data and the models for both indices: in fact, as discussed in Section \[data\_reduction\], our observations were strongly affected by differential flux losses due to atmospheric differential refraction, and it is possible that our correction, based on a few photometric bands, was not able to fully remove this effect.

![image](fig15.eps){width="\textwidth"}

Finally, the [**H$\beta$**]{}, [**H$\gamma$A**]{}, and [**H$\gamma$F**]{} vs. \[MgFe\]$^{'}$ planes provide the best separation between age and total metallicity. The H$\beta$ index is only weakly affected by \[$\alpha$/Fe\] variations, but suffers possible contamination from Balmer emission. On the other hand, the H$\gamma$ indices are less affected by emission than H$\beta$, but present a marked dependence on \[$\alpha$/Fe\] due to the presence of several Fe lines in their pseudo-continua.
We recall that our Balmer indices have been corrected for the presence of residual emission lines in the cluster spectra; the only two exceptions are clusters CL 39 and CL 8, not shown here, whose emission was too strong to attempt a recovery of the true absorption strengths. Cluster CL 24 was one of those with no visible emission lines in the final subtracted spectra; nevertheless, we notice that it falls slightly below the oldest models in all the H$\beta$, H$\gamma$A, H$\gamma$F vs. \[MgFe\]$^{'}$ planes. All the other clusters fall within the model grid and are consistent with Z$\lesssim$0.004 (or Z$\lesssim$0.008, if we consider the models with \[$\alpha$/Fe\]$=-0.8$) and 4 Gyr $\lesssim$ age $\lesssim$ 13 Gyr. A more quantitative analysis of the stellar population parameters will be performed in the next section. Cluster CL 67, with Balmer indices as high as $H\beta=6.4\pm0.6$, $H\gamma A=10.1\pm0.7$, and $H\gamma F=7.3\pm0.4$, falls outside the displayed index ranges and is a few hundred Myr old.

  Cluster   Age \[Gyr\]   $\log(Z/Z_{\odot})$   \[$\alpha$/Fe\]   \[Fe/H\]         \[N/$\alpha$\]
  --------- ------------- --------------------- ----------------- ---------------- ----------------
  CL 20     11$\pm$5      $-1.3\pm0.5$          $-0.4\pm0.3$      $-1.0 \pm 0.6$   $<-0.5$
  CL 76     11$\pm$2      $-1.0\pm0.1$          $-0.4\pm0.1$      $-0.7\pm0.2$     $-0.4\pm0.2$
  CL 72     11$\pm$4      $-1.0\pm0.3$          $-0.2\pm0.2$      $-0.8\pm0.4$     $>0.5$
  CL 75     10$\pm$4      $-1.3\pm0.5$          $-0.8\pm0.4$      $-0.7\pm0.6$     $>0.5$
  CL 79     11$\pm$2      $-1.2\pm0.1$          $-0.4\pm0.2$      $-0.9\pm0.2$     $-0.2\pm0.2$
  CL 58     $\ge$9        $\le-1.1$             0.1$\pm$0.4       $\le-1.2$        $0.5\pm0.4$
  CL 52     11$\pm$4      $-1.2\pm0.3$          $-0.4\pm0.3$      $-0.9\pm0.4$     $-0.1\pm0.1$
  CL 3      9$\pm$2       $-1.0\pm0.2$          0.2$\pm$0.2       $-1.2\pm0.2$     $-0.3\pm0.2$
  CL 77     12$\pm$2      $-1.2\pm0.1$          $-0.5\pm0.2$      $-0.8\pm0.2$     $<-0.5$
  CL 27     11$\pm$4      $-1.1\pm0.3$          $-0.3\pm0.2$      $-0.9\pm0.3$     $-0.5\pm0.2$
  CL 24     12$\pm$4      $-1.1\pm0.6$          $0.1\pm0.4$       $-1.2\pm0.8$     $<-0.5$
  --------- ------------- --------------------- ----------------- ---------------- ----------------

: Ages, metallicities, and abundance ratios derived for the clusters in NGC 4449.[]{data-label="agezafe_tab"}

In Fig.
\[mod4\] we plotted some key Lick indices versus the near infrared [**CaT\***]{} (Ca II triplet) index. We did not consider cluster CL 75, whose near-infrared spectrum was too noisy due to a non-optimal sky background subtraction. As previously noticed, no models with variable \[$\alpha$/Fe\] ratios have been computed for the near-infrared indices, and therefore the displayed SSPs refer to the base model [^6]. Qualitatively, the CaT\* index provides results that are consistent with the Lick indices, with cluster total metallicities of Z$\lesssim$0.008. The H$\beta$ vs CaT\* plane seems to be a viable alternative to the H$\beta$ vs \[MgFe\]$^{'}$ diagram to disentangle age and metallicity effects. Although a discussion of the cluster element abundance ratios is not possible due to the lack of models other than the base one, we immediately notice the significant mismatch between data and models in the Mg$b$ vs. CaT\* plane, confirming the low \[$\alpha$/Fe\] ratios of our clusters.

Ages, metallicities and \[$\alpha$/Fe\] ratios {#agezafe}
----------------------------------------------

If we exclude the clusters with very strong emission contamination (i.e., clusters CL 39, CL 67, and CL 8) from our sample, we are left with a sub-sample of 11 clusters, namely CL 20, CL 76, CL 72, CL 75, CL 79, CL 58, CL 52, CL 3, CL 77, CL 27 and CL 24. For these clusters, we derived ages, metallicities, and \[$\alpha$/Fe\] ratios using the algorithm described in @annibali07. In brief, each SSP model of given age (t), metallicity (Z), and \[$\alpha$/Fe\] ratio ($\alpha$) uniquely corresponds to a point in a three-dimensional space defined by an index triplet. For each measured index triplet, we compute the [*likelihood*]{} that the generic (t,Z,$\alpha$) model is the solution to that data point. This procedure provides a [*likelihood*]{} map in the three-dimensional (t,Z,$\alpha$) space, and allows us to derive the “most probable” solution with its associated uncertainty.
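A minimal sketch of such a grid-based likelihood search is given below. It is illustrative only: the actual implementation of the @annibali07 algorithm may differ, e.g. in its likelihood definition, grid interpolation, and the way uncertainties are extracted from the map.

```python
import numpy as np

def most_probable_ssp(obs, err, model_grid, params):
    """Grid-based likelihood search over (t, Z, [alpha/Fe]).

    obs, err   : measured index triplet and its 1-sigma errors (length 3)
    model_grid : (N, 3) array of model index triplets
    params     : (N, 3) array of the corresponding (age, Z, [alpha/Fe]) values
    """
    chi2 = np.sum(((model_grid - obs) / err) ** 2, axis=1)
    like = np.exp(-0.5 * chi2)       # Gaussian likelihood of each model
    like = like / like.sum()         # normalise over the grid
    best = params[np.argmax(like)]   # "most probable" (t, Z, alpha) solution
    # likelihood-weighted spread as a rough uncertainty on each parameter
    mean = like @ params
    sigma = np.sqrt(like @ (params - mean) ** 2)
    return best, sigma
```

Each measured triplet thus maps to a point (with uncertainty) in the three-dimensional parameter space, and different triplets can be compared directly.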
Following our discussion in Section \[ind\_disc\], we first selected the following indices for our stellar population study: H$\beta$, H$\gamma$A, H$\gamma$F, Mg$b$, Fe 5270, and Fe 5335. We then considered the following index triplet combinations, each composed of an age-sensitive index and two metallicity-sensitive indices: (H$\beta$, Mg$b$, $<Fe>$), (H$\gamma$A, Mg$b$, $<Fe>$), (H$\gamma$F, Mg$b$, $<Fe>$), where $<Fe>=0.5\times(Fe~5270+Fe~5335)$. The CN$_1$, CN$_2$, and Ca 4227 indices, potentially affected by CN variations, were analyzed in a second step. For each triplet, we derived ages, total metallicities Z, and \[$\alpha$/Fe\] ratios using our SSPs [@annibali07; @dE] and the algorithm described at the beginning of this section. The results are displayed in Fig. \[triplets\], where we plot the stellar population parameters obtained with the three different triplets. The left panel of Fig. \[triplets\] shows that the typical errors on the derived ages are quite large; the results obtained adopting different Balmer indices are highly scattered, although typically consistent with each other within the large age errors. This large age uncertainty is the natural consequence of the progressively reduced age sensitivity of the Balmer absorption lines with increasing age: for instance, a $\sim$0.2 Å difference in the H$\beta$ index (which is the typical error for our clusters) corresponds to a difference of several Gyr at old ages. On the other hand, the metallicity and \[$\alpha$/Fe\] values obtained with the different triplets are in quite good agreement, despite the age-metallicity degeneracy; this is due to the high capability of the Balmer $+$ Mg$b$ $+$ $<$Fe$>$ diagnostic in separating age, metallicity and $\alpha$/Fe effects [e.g. @tmb03].
In the end, we derived final ages, metallicities and \[$\alpha$/Fe\] ratios by averaging the results from the three different triplets, and computed the errors by propagating the uncertainties on the individual determinations. Table \[agezafe\_tab\], where we summarize our results, shows that the majority of the analyzed clusters are old ($\gtrsim$9 Gyr). The total metallicities are highly sub-solar, with $-1.3 \lesssim \log(Z/Z_{\odot}) \lesssim -1.0$; the majority of clusters have sub-solar \[$\alpha$/Fe\] ratios (in the range $-0.2$ to $-0.8$), and only three clusters exhibit slightly super-solar \[$\alpha$/Fe\] ratios. \[Fe/H\] values were computed from the total metallicity Z and the \[$\alpha$/Fe\] ratio through the formula: $$[Fe/H] = \log(Z/Z_{\odot}) + \log(f_{Fe})$$ where we have assumed $Z_{\odot}=0.018$, and $f_{Fe}$ is the enhancement/depression factor of the Fe abundance in the models [see @annibali07 for details]. NGC 4449’s clusters are displayed in the \[$\alpha$/Fe\] versus \[Fe/H\] plane in Fig. \[afe\_feh\], together with Milky Way halo and bulge clusters from @puzia02. For a self-consistent comparison, the Lick indices provided by @puzia02 were re-processed through our algorithm and models. The difference between clusters in NGC 4449 and in the Milky Way is striking in this plane: while Galactic globular clusters exhibit a flat distribution with solar or super-solar \[$\alpha$/Fe\] ratios at all metallicities, the clusters in NGC 4449 display a trend, with slightly super-solar \[$\alpha$/Fe\] ratios at the lowest cluster metallicities, and highly sub-solar \[$\alpha$/Fe\] values at $[Fe/H]>-1$. Furthermore, the $[Fe/H]$ range of NGC 4449’s clusters is between $\sim-1.2$ and $-0.7$, higher than for MW halo GCs and comparable to MW bulge clusters.
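In code, the \[Fe/H\] conversion above is a one-liner (with $Z_{\odot}=0.018$ already folded into the input metallicity):

```python
import math

def fe_h(log_z_zsun, f_fe):
    """[Fe/H] from log(Z/Z_sun) and the Fe enhancement/depression factor
    f_Fe of the model; [Fe/H] = log(Z/Z_sun) + log10(f_Fe)."""
    return log_z_zsun + math.log10(f_fe)
```

For example, a model with $\log(Z/Z_{\odot})=-1.0$ in which Fe is depressed by a factor of 2 ($f_{Fe}=0.5$) gives $[Fe/H] \approx -1.30$.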
![image](fig16.eps){width="\textwidth"}

Nitrogen and Carbon {#nitrogen_carbon}
-------------------

The CN$_1$, CN$_2$, and Ca 4227 indices, which are affected by CN abundance variations, can be used as diagnostics to study the nitrogen and carbon composition of our clusters. Fig. \[mod1\], left panels, shows a remarkable difference in CN$_1$, CN$_2$, and Ca 4227 between the bulk of clusters in NGC 4449 and Galactic globular clusters. In order to quantify this behaviour in terms of N and C abundance differences, we computed additional SSP models in which we enhanced or depressed the abundance of C or N with respect to the solar-scaled composition. This resulted in models with different combinations of (\[$\alpha$/Fe\], \[N/$\alpha$\]) or (\[$\alpha$/Fe\], \[C/$\alpha$\]), shown in Fig. \[fig\_nvar\] together with the NGC 4449 and MW cluster data. The plotted models correspond to an age of 11 Gyr and \[$\alpha$/Fe\]$=-0.5$, to reflect the typical stellar population parameters derived for the NGC 4449 clusters (see Table \[agezafe\_tab\]), and have \[N/$\alpha$\] (or \[C/$\alpha$\]) values from $-0.5$ to $+0.5$. We also plotted an 11 Gyr model with \[$\alpha$/Fe\]$=+0.5$ and \[N/$\alpha$\]$=+0.5$ to account for the typical abundance ratios derived in Galactic globular clusters from integrated-light absorption features [e.g., @tmb03]. We notice that while the CN indices of Galactic globular clusters are well reproduced by N-enhanced models, the majority of clusters in NGC 4449, with the exception of clusters CL 72, CL 75, and CL 58 that are more compatible with the Galactic ones, require solar or sub-solar \[N/$\alpha$\] (or \[C/$\alpha$\]) ratios to match the models. The Ca 4227 index is less affected than CN$_1$ and CN$_2$ by CN abundance variations and is therefore less useful to characterize the chemical path of C and N.
The Ca 4227 indices measured by @puzia02 for Galactic globular clusters are not well reproduced by the \[$\alpha$/Fe\]$=+0.5$, \[N/$\alpha$\]$=+0.5$ models and show an offset as large as $\sim-0.3$ Å. The CN$_1$, CN$_2$, and Ca 4227 indices alone do not allow us to break the degeneracy between C and N abundance variations. The C$_2$4668 and Mg$_1$ indices potentially offer a way out, since they are much more sensitive to C than to N, but for our clusters they exhibit large errors or are not well calibrated, which makes them of little use for our study. @tmb03 showed that C-enhanced models could reproduce the large CN$_1$ and CN$_2$ indices observed in MW globular clusters, but failed to reproduce C$_2$4668 and Mg$_1$; they therefore concluded that a nitrogen rather than a carbon enhancement was more likely in Galactic globular clusters. Building on these results, we computed \[N/$\alpha$\] values for the NGC 4449 clusters by comparing the CN$_1$ and CN$_2$ indices against models where N is enhanced/depressed and C is kept at the solar-scaled composition. We input into our algorithm, described in Section \[agezafe\], the age, metallicity, and \[$\alpha$/Fe\] values listed in Table \[agezafe\_tab\], derived two different \[N/$\alpha$\] ratios from the (CN$_1$, Mg$b$, $<$Fe$>$) and (CN$_2$, Mg$b$, $<$Fe$>$) triplets, respectively, and then averaged the results. The mean \[N/$\alpha$\] values, listed in Col. 6 of Table \[agezafe\_tab\], are between $-0.5$ and $-0.1$ for all clusters but CL 72, CL 75, and CL 58, which exhibit highly super-solar \[N/$\alpha$\] ratios.

![image](fig17.eps){width="\textwidth"}

A planetary nebula within cluster CL 58?
{#cl58_section} ======================================== ![image](fig18.eps){width="\textwidth"} As discussed in Section \[data\_reduction\], the emission lines observed in the MODS spectrum of cluster CL 58 are not due to the extended ionized gas present in NGC 4449, but instead appear to originate within the cluster itself. In order to investigate the origin of this centrally-concentrated emission, we inspected HST images in different bands, with a spatial resolution $\sim$10 times better than the typical seeing of our observations. In particular, we show in Figure \[cl58\_pn\] images of cluster CL 58 in F275W (NUV) from GO program 13364 (PI Calzetti), and in F502N (\[O III\]), F550M (continuum near \[OIII\]), and in F658N (H$\alpha$) from GO program 10522 (PI Calzetti). These images, and in particular the continuum-subtracted \[OIII\] image (F502N$-$F550M), reveal the presence of a source with strong emission in \[O III\] at a projected distance of $\sim$0.25” from the cluster center; this source is also visible in the NUV image and in H$\alpha$ (with a lower contrast than in \[O III\]), while it is not visible in the F550M continuum image. All these properties suggest that it may be a PN belonging to the cluster, although we can not exclude a chance superposition of a “field” PN at the cluster position. The \[O III\] emission inferred from our MODS spectrum of CL 58 is $\sim3\times10^{-16} erg/s/cm^2$ (see Table \[emission\_tab\]), compatible with the brightest PNe detected in NGC 4449 [@annibali17]. Planetary nebulae in globular clusters are rare. Only four PNe are known in Galactic GCs so far [@pease28; @gillett89; @jacoby97]. @jacoby13 identified 3 PN candidates in a sample of 274 M 31 GCs. @bond15 analyzed HST data of 75 extragalactic GCs in different Local Group galaxies (LMC, M 31, M 33, NGC 147, NGC 6822) to search for PNe, and found only two (doubtful) candidates in the vicinity of the M31 globular cluster B086. 
A PN was discovered in the NGC 5128 (Cen A) globular cluster G169 by @minniti02, and one in the Fornax GC H5 by @larsen08. We visually inspected the continuum-subtracted \[O III\] image of NGC 4449 to search for additional PN candidates associated with “old” ($>$1 Gyr, from integrated colors) clusters in the @anni11 catalog (only 20 of them fall within the F502N field of view), but could not find any. We are therefore left with one PN detection out of a sample of 20 old clusters in NGC 4449. To our knowledge, CL 58 is the only globular cluster in a dwarf irregular galaxy known to host a candidate PN. According to stellar evolution, globular clusters as old as the Galactic ones should be unable to host PNe, because in such old populations the masses of the stars that are today transiting between the AGB and the white dwarf (WD) phase are smaller than $\sim0.55 M_{\odot}$ [see @jacoby17 and references therein]. Some sort of binary interaction has therefore been invoked to explain the presence of the (few) PNe detected in old Galactic globular clusters [@jacoby97]. On the other hand, globular clusters with intermediate rather than extremely old ages are expected to be richer in PNe than Milky Way GCs. For the NGC 4449 globular clusters, ages are derived with very large uncertainties; nevertheless, as we will discuss in Section \[discussion\], the derived chemical paths tend to exclude a rapid and early cluster formation as in the case of the Milky Way. The idea that NGC 4449 lacks, or has very few, old clusters is reinforced by our detection of one PN out of 20 analysed clusters, a frequency higher than that derived for the MW or M 31.

Dynamical properties of NGC 4449 {#dynamics}
================================

We can use the cluster velocities to probe the properties of the cluster population, and of NGC4449 as a whole.
Systemic velocity and cluster velocity dispersion {#meandispersion}
-------------------------------------------------

Using the central coordinates and distance of NGC4449 given in Section \[intro\], along with the cluster coordinates in Table \[rvel\], we calculate the projected distance of each cluster from the centre of NGC4449. The furthest cluster in our sample lies at a projected distance ${R_\mathrm{max}}$=2.88 kpc from the centre of NGC4449. We augment our data with the cluster sample from @strader12, removing any duplicates. This provides an additional 23 clusters. These clusters come partly from the same field as our sample, with some additional clusters detected in SDSS data. There are very few clusters outside the range of our field, and the sample is likely to be incomplete. As such, we elect to include only the 7 additional clusters inside ${R_\mathrm{max}}$; this leaves us with a sample of 19 clusters in total. Another reason for this choice is that, in the next section, we will use the cluster velocities to estimate the mass of NGC4449; including the clusters outside ${R_\mathrm{max}}$ could significantly bias our mass estimates, as they would hinge on just one or two distant clusters. We use a simple maximum-likelihood estimator to evaluate the mean velocity $v = 203 \pm 11$ [kms$^{-1}$]{} and velocity dispersion $\sigma = 45 \pm 8$ [kms$^{-1}$]{} of the cluster sample. We adopt the mean as the systemic velocity of NGC4449 and subtract it from the individual cluster velocities in the subsequent analysis. The systemic velocity is in good agreement with the estimates of $205 \pm 1$ [kms$^{-1}$]{} from @hunter05 and of $204 \pm 2$ [kms$^{-1}$]{} from @strader12, though smaller than earlier estimates of $\sim$214 [kms$^{-1}$]{} from @bajaja94 and @hunter02.
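For a Gaussian velocity distribution with a roughly uniform measurement error, the maximum-likelihood estimates have a simple closed form (a simplification for illustration; the estimator actually used here may weight individual velocity errors differently):

```python
import numpy as np

def ml_mean_dispersion(v, v_err):
    """Closed-form maximum-likelihood systemic velocity and intrinsic
    dispersion, assuming Gaussian velocities and a common measurement
    error (a simplified stand-in for the estimator used in the text)."""
    v = np.asarray(v, dtype=float)
    mu = np.mean(v)                    # ML estimate of the mean velocity
    e2 = np.mean(np.asarray(v_err, dtype=float) ** 2)
    # intrinsic variance = total scatter minus measurement errors in quadrature
    sigma2 = np.mean((v - mu) ** 2) - e2
    return mu, np.sqrt(max(sigma2, 0.0))
```

Uncertainties on $v$ and $\sigma$ (quoted as $\pm 11$ and $\pm 8$ kms$^{-1}$ above) would then come from the curvature of the likelihood or from bootstrap resampling of the cluster sample.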
Velocity measurements of HI gas find that the velocity dispersion of NGC4449 varies from 15-35 [kms$^{-1}$]{} through the galaxy [@hunter05], slightly lower than we find here for the central globular cluster population. H$\alpha$ measurements find even higher dispersions, with a global average dispersion of $31.5 \pm 10$ [kms$^{-1}$]{} measured by @valdez02, though they note that the dispersion increases to $\sim 40$ [kms$^{-1}$]{} in the bar region near the centre, in good agreement with our cluster measurement. These dispersions are generally higher than would be expected, which is indicative of a system that is highly perturbed by intense star formation [@hunter98].

Dynamical mass estimate {#dynamicalmass}
-----------------------

We can also use the radial velocities, along with the positions of the clusters, to estimate the total mass of NGC4449 within the region spanned by the clusters, that is, inside ${R_\mathrm{max}}$. To do this, we use the tracer mass estimators introduced by @watkins10, which are simple and yet remarkably effective. There are a number of estimators depending on the type of distances and velocities available; given our data, we use the estimator requiring projected distances and line-of-sight velocities. The tracer mass estimators assume that, over the region of interest, the tracer sample has a number density distribution that is a power-law with index $-\gamma$, sits in a potential that is a power-law with index $-\alpha$, and has a constant anisotropy $\beta$. We assume that the sample is isotropic and thus that $\beta = 0$. The other two parameters require further consideration. To estimate the power-law index $\gamma$ of the number density distribution, we combined the globular cluster samples of @anni11 and @strader12. As we are only concerned with the number density distribution here, we can use all clusters, regardless of whether or not they also have velocity measurements.
With duplicates removed, this leaves a sample of 157 clusters. Again using the central coordinates and distance of NGC4449, we calculate the projected distance of each cluster from the centre and then compute the projected cumulative number profile. To estimate $\gamma$, we fit this cumulative profile only in the region where we have velocity data ($0.17<R<2.88$ kpc), assuming an underlying power-law density, and obtain a best-fitting power-law index $\gamma = 2.18 \pm 0.01$, where the uncertainty indicates the uncertainty in the fit for the particular model we have assumed and does not account for any additional uncertainties. To get a better estimate of the uncertainty in $\gamma$, we also fit a double power-law to the full cluster position sample, where we assume that the power-law index is $\gamma_\mathrm{in}$ for $R \le R_\mathrm{break}$ and $\gamma_\mathrm{out}$ for $R > R_\mathrm{break}$, for some break radius $R_\mathrm{break}$. The best fit gives $\gamma_\mathrm{in} = 1.78 \pm 0.01$ and $\gamma_\mathrm{out} = 4.41 \pm 0.03$ with $R_\mathrm{break} = 1.59 \pm 0.01$ kpc. We will therefore use $\gamma = 2.18$ to obtain our best mass estimate, and use the range $1.78 \le \gamma \le 4.41$ to provide uncertainties on the mass estimate. ![The cumulative number density profile of globular clusters in NGC4449. The black line shows the profile measured from the combined catalogues of @anni11 and @strader12. The orange line shows the best-fitting profile to the region of interest (marked by vertical dotted lines) assuming that the underlying density profile is a power-law. The index of the power-law $\gamma$ is given in orange in the top-left corner. The blue line shows the best-fitting profile to the whole sample assuming that the underlying density profile is a double power-law with an inner index $\gamma_\mathrm{in}$ and an outer index $\gamma_\mathrm{out}$ (shown in blue in the top-left corner).
The radius at which the power-law index changes from inner to outer is marked by the blue vertical dashed line. We use the single fit as the best estimate of the power-law in this region and use the indices from the double fit to provide uncertainties.[]{data-label="cumulative_number"}](fig19.eps){width="0.9\linewidth"} Figure \[cumulative\_number\] shows the results of these fits. The black line shows the cumulative number density profile of the data, the orange line shows the best-fitting single power-law in the region of interest (the boundaries of which are marked by the vertical dotted lines), the solid blue line shows the best-fitting double power-law to the whole sample, and the dashed blue line marks the break radius for the double power-law fit. The best-fitting power-law indices for the fits are also given. To estimate the power-law index $\alpha$ of the potential, we assume that NGC4449 has a baryonic component embedded in a large dark matter (DM) halo. The power-law slope of the potential will depend on which of these components dominates inside ${R_\mathrm{max}}$, or indeed, if both make significant contributions to the potential. If the baryonic component is dominant so that mass follows light, then $\alpha = \gamma - 2$ for $\gamma \le 3$ and $\alpha = 1$ for $\gamma > 3$. For the range of $\gamma$ we found above, this would imply a range of $-0.22 \le \alpha \le 1$ with a best estimate of $\alpha = 0.18$. To estimate $\alpha$ for the case where the DM component is dominant, we assume that the DM halo of NGC4449 follows an @nfw (hereafter NFW) profile, with unknown virial radius and concentration. However, although we do not know the exact halo parameters, we can still place some constraints on these values. The virial radius is likely to be smaller than the Milky Way’s value of $r_\mathrm{vir} \sim 258$ kpc, and the concentration is likely to be between the Milky Way’s value of $c \sim 12$ and the typical concentration of dwarf spheroidals $c \sim 20$. 
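The fit of a power-law to the halo potential can be sketched as follows: for $\Phi \propto r^{-\alpha}$, $\alpha$ is minus the log-log slope of $|\Phi|$, so a straight line fitted to $\log|\Phi_\mathrm{NFW}|$ versus $\log r$ over the region of interest gives an effective $\alpha$. The $(r_\mathrm{vir}, c)$ pairs below are illustrative choices, not the constrained set used in the paper; the normalisation of the potential drops out of the slope.

```python
import math

def nfw_potential(r, r_vir, c):
    """|Phi| of an NFW halo up to an overall constant
    (the normalisation cancels in a log-log slope fit)."""
    x = r * c / r_vir  # r / r_s, with scale radius r_s = r_vir / c
    return math.log(1 + x) / r

def potential_index(r_vir, c, r_min=0.17, r_max=2.88, n=200):
    """Best-fitting alpha in |Phi| ~ r^(-alpha) over [r_min, r_max] kpc,
    via a least-squares line in log-log space on log-spaced radii."""
    rs = [r_min * (r_max / r_min) ** (i / (n - 1)) for i in range(n)]
    xs = [math.log(r) for r in rs]
    ys = [-math.log(nfw_potential(r, r_vir, c)) for r in rs]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((u - xbar) * (w - ybar) for u, w in zip(xs, ys))
            / sum((u - xbar) ** 2 for u in xs))

for r_vir, c in [(200, 12), (140, 15), (80, 20)]:
    print(f"r_vir={r_vir:3d} kpc, c={c:2d}: alpha ~ {potential_index(r_vir, c):.3f}")
```

Because the fitting region sits well inside the scale radius for all plausible halos, the recovered $\alpha$ stays small and varies only mildly from halo to halo, which is why the NFW-based range is so much narrower than the mass-follows-light one.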
Guided by these constraints, we try a grid of virial radii ($80 \le r_\mathrm{vir} \le 200$ kpc) and concentrations ($10 \le c \le 20$) and find the power-law that best fits the slope of the potential in the region of interest for each halo. ![The variation in potential slope $\alpha$ for a grid of DM halos assumed to be NFW in shape and defined by a concentration $c$ and a virial radius $r_\mathrm{vir}$. To each halo, we fit a power-law over the range of interest for our mass estimation – the colour bar shows how the index $\alpha$ of the best-fitting power-law changes from halo-to-halo. We only show halos with circular velocities at 15.7 kpc of $84 \pm 10$ [kms$^{-1}$]{} to force consistency with HI observations. The white regions are halos with circular velocities outside of this range.[]{data-label="halogrid_alpha"}](fig20.eps){width="0.9\linewidth"} To further narrow down the choice of likely halos, we turn to HI measurements, which find a rotation speed, corrected for inclination, of 84 [kms$^{-1}$]{} at 15.7 kpc [@bajaja94; @martin98]. If we insist that the circular velocity at 15.7 kpc must be within 10 [kms$^{-1}$]{} of the HI measurements, then this restricts the virial radius and concentration combinations allowed. Figure \[halogrid\_alpha\] shows how $\alpha$ varies over the halo grid for the “allowed” halos only. These remaining halos follow a curve in the $r_\mathrm{vir} - c$ plane, and predict $0.05 \lesssim \alpha \lesssim 0.13$ with a mean $\alpha \sim 0.09$, which, we note, is much narrower than the mass-follows-light case. However, we must still consider whether the DM is the dominant mass component inside ${R_\mathrm{max}}$ or if the baryonic mass dominates the form of the potential. Using $\gamma = 2.18$ as previously determined, the total mass enclosed within ${R_\mathrm{max}}$ for this range of $\alpha$ values shows little variation (only $\sim$1%). 
However, the total DM mass inside ${R_\mathrm{max}}$ changes significantly from halo-to-halo, such that the DM is approximately half of the total mass at its lowest and apparently exceeds the total mass at its highest (we choose not to rule out these halos as a larger $\gamma$ would alleviate this problem). Overall, we conclude that the DM makes a significant or dominant contribution to the mass within ${R_\mathrm{max}}$ and so it would be incorrect to assume that mass-follows-light. So we will use the mean value of $\alpha = 0.09$ found from the NFW fitting to obtain our best mass estimate, but, to be conservative, use the broader range $-0.22 \le \alpha \le 1$ to provide uncertainties on the mass estimate. ![The variation in estimated mass inside 2.88 kpc as a function of potential slope $\alpha$ and density slope $\gamma$. The range of $\gamma$ was set by fitting to existing globular cluster data. The range of $\alpha$ was set by assuming that mass follows light, in which case $\alpha = \mathrm{min}(1, \gamma - 2)$ (marked by the white line), as this range was larger than that estimated from considering DM halo models (highlighted by vertical dotted lines). The solid black lines show our best estimates for $\alpha$ and $\gamma$, and their intersection shows the adopted best mass estimate. The minimum and maximum masses in this grid set our uncertainties.[]{data-label="massgrid"}](fig21.eps){width="0.9\linewidth"} Now that we have estimates for $\alpha$ and $\gamma$, we can proceed with the mass estimates. As discussed, we will use our best estimates of $\alpha = 0.09$ and $\gamma = 2.18$, to get a “best” mass estimate; and to estimate the uncertainty in the mass, we will calculate the mass estimate across $-0.22 \le \alpha \le 1$ and $1.78 \le \gamma \le 4.41$. Figure \[massgrid\] shows the results of these calculations. For each $\alpha$-$\gamma$ combination across the grid, we estimate the mass inside ${R_\mathrm{max}}$. 
The solid black lines show our adopted “best” $\alpha$ and $\gamma$ values. The dotted black lines show the range of $\alpha$ predicted from the NFW halo fitting. The white line traces the results if we were to assume that mass-follows-light. We see that the mass estimate is dominated by the assumed value of $\gamma$ and is much less sensitive to the value of $\alpha$. Overall, we estimate the mass of NGC4449 inside 2.88 kpc to be $M(<2.88\,\mathrm{kpc}) = 3.15^{+3.16}_{-0.75} \times 10^9$ M$_\odot$. The implied circular velocity at ${R_\mathrm{max}}= 2.88$ kpc is $69^{+29}_{-9}$ [kms$^{-1}$]{}. By comparison, @martin98 estimated a rotation of $\sim$40 [kms$^{-1}$]{} at $\sim$3 kpc using HI observations, which is somewhat lower than our estimate here. However, the significant velocity dispersion of the HI (see Section \[meandispersion\]) implies that the HI rotation speed should indeed be below the circular velocity to maintain hydrostatic equilibrium. @hunter02 found a minimum in the HI distribution at the centre of NGC4449, and estimated that the HI mass inside 2 arcmin ($\sim$2.2 kpc) is just $3 \times 10^8$ M$_\odot$. Assuming that the mass increases linearly with radius, the HI mass inside ${R_\mathrm{max}}$ would be $\sim 4 \times 10^8$ M$_\odot$, so the HI makes up a little more than 10% of the total mass near the centre. The total mass was estimated from HI observations to be $\sim 7.8 \times 10^{10}$ M$_\odot$ inside 30$^{\prime}$ ($\sim$33 kpc). If we assume that NGC4449 has a flat rotation curve, and thus that the mass increases linearly with radius, our mass estimate would imply a total mass $\sim 3.6^{+3.6}_{-0.9} \times 10^{10}$ M$_\odot$ within 30$^{\prime}$, which is inconsistent with the HI measurements, even given our rather generous upper mass limit. 
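The implied circular velocity and the linear extrapolation quoted above follow from $v_c = \sqrt{GM/R}$ and $M \propto R$; a minimal arithmetic check using the numbers in the text:

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

M_best = 3.15e9   # best mass estimate inside R_max [Msun]
R_max = 2.88      # kpc

# Circular velocity implied at R_max
v_circ = math.sqrt(G * M_best / R_max)
print(f"v_circ(R_max) ~ {v_circ:.0f} km/s")

# Flat rotation curve => mass grows linearly with radius out to 33 kpc
M_30arcmin = M_best * (33.0 / R_max)
print(f"M(<33 kpc)   ~ {M_30arcmin:.2e} Msun")
```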
Furthermore, our estimate is likely to be an overestimate as the assumption of a flat rotation curve is likely reasonable near the centre but may break down towards larger radii, thus creating more tension with the mass implied by HI data. NGC4449 is often considered an analog of the Large Magellanic Cloud (LMC). By comparison, @marel14 find a mass $M(<8.7\,\mathrm{kpc}) = 17 \pm 7 \times 10^{9}$ M$_\odot$ for the LMC. Again assuming that the mass increases linearly with radius, our mass measurement would imply a mass of $\sim 9.5 \times 10^{9}$ M$_\odot$ at 8.7 kpc, with the uncertainties encompassing $7-19 \times 10^{9}$ M$_\odot$. This estimate is lower than the LMC estimate, but still consistent at the upper end of the uncertainty range. Discussion ========== From our stellar population study of old clusters in NGC 4449, we find that the large majority of them have solar or sub-solar \[$\alpha$/Fe\] and \[N/$\alpha$\] ratios, making them clearly different from Galactic GCs. These chemical properties have strong implications for our understanding of the formation of globular clusters in this irregular galaxy, which we discuss in the following. \[$\alpha$/Fe\] ratios ---------------------- It is well known that the \[$\alpha$/Fe\] ratio quantifies the relative importance of high mass stars versus intermediate/low mass stars to the enrichment of the interstellar medium (ISM) and it is therefore tightly linked to both the stellar initial mass function (IMF) and to the SF timescale [e.g. @greggio83; @tornambe86]. In fact, $\alpha$-elements (O, Ne, Mg, Si, S, Ar, Ca, Ti) are mainly synthesized by massive stars and restored into the ISM on short SF timescales ($\lesssim$50 Myr), while Fe, mostly provided by SNIa, is injected into the ISM on longer timescales. 
As soon as SNIa start to contribute, they dominate the iron enrichment and \[$\alpha$/Fe\] inevitably decreases: @mr01 estimated that the time of maximum enrichment by SNIa varies from $\sim$40-50 Myr for an instantaneous starburst to $\sim$0.3 Gyr for a typical elliptical galaxy to $\sim$4-5 Gyr for a spiral galaxy. Therefore, under the assumption of a universal IMF, super-solar \[$\alpha$/Fe\] ratios can be interpreted as the signature of an early and rapid star formation (e.g., in giant elliptical galaxies, or in the halo’s field stars and globular clusters of spiral galaxies), while sub-solar \[$\alpha$/Fe\] ratios indicate a prolonged star formation, more typical of dwarf systems [e.g. @mt85; @lanfranchi03]. In dwarf galaxies, galactic winds powered by massive stars and SN II explosions may also play an important role in lowering the \[$\alpha$/Fe\] value through expulsion of the newly produced $\alpha$-elements: @marconi94 showed that sub-solar \[$\alpha$/Fe\] ratios can be obtained in dwarf galaxies by assuming differential galactic winds that are more efficient in removing the $\alpha$-elements than Fe, since they are activated only during the bursts where most of the $\alpha$-elements are formed. Alternatively, different \[$\alpha$/Fe\] ratios could be due to a non-universal IMF: for instance, it has been suggested that the IMF could have been skewed towards massive stars in elliptical galaxies and in the Bulge, producing the observed super-solar \[$\alpha$/Fe\] ratios, and towards low-mass stars in dwarf galaxies [e.g. @weidner13; @yan17]. From an observational point of view, chemical abundances of individual red giant branch stars have been extensively derived in the nearest dwarf spheroidal galaxies [dSphs; e.g., @shetrone98; @shetrone01; @tolstoy03; @kirby11; @lemasle14; @norris17; @apogee17]. 
These studies have shown that each dSph starts, at low \[Fe/H\], with super-solar \[$\alpha$/Fe\] ratios, similar to those in the Milky Way halo at low metallicities, and then the \[$\alpha$/Fe\] ratio evolves down to lower values than are seen in the Milky Way at high metallicities [see @tht09 for a review]. The “knee”, i.e. the position where \[$\alpha$/Fe\] starts to decrease, depends on the particular SFH of the galaxy and occurs at larger \[Fe/H\] for a more rapid star formation and a more efficient chemical enrichment. The general finding in dSphs that relatively metal rich (\[Fe/H\]$>-1$) stars are deficient in $\alpha$-elements compared to iron suggests that the most recent generations of stars were formed from an ISM relatively poor in the ejecta from SN II. The observational picture is far less complete for dwarf irregular galaxies because of their larger distance from the Milky Way compared to dSphs (with the obvious exceptions of the LMC and the SMC). Chemical abundance determinations in these systems are mainly limited to H II regions [e.g., @it99; @kniazev05; @magrini05; @vanzee06; @guseva11; @haurberg13; @berg12] and to a few supergiant stars [@venn01; @venn03; @kaufer04; @leaman09; @leaman13], which probe a look-back time of at most $\sim$10 Myr. Chemical abundance studies of planetary nebulae [e.g., @magrini05; @pena07; @rojas16; @flores17; @annibali17] and unresolved star clusters [e.g., @strader03; @puzia08; @sharina10; @hwang14] provide a valuable tool to explore more ancient epochs and have been attempted in a few dwarf irregulars. The H II region and young supergiant tracers indicate typically low \[$\alpha$/Fe\] ratios, as expected in galaxies that have formed stars at a low rate over a long period of time and where galactic winds may have possibly contributed to the expulsion of the $\alpha$-elements. 
Interestingly, integrated-light studies of globular clusters in dwarf irregulars also provide solar or slightly sub-solar \[$\alpha$/Fe\] values, in agreement with our results. In NGC 4449 we find only a minority of clusters (3 in our sample) with super-solar \[$\alpha$/Fe\] ratios at \[Fe/H\]$\lesssim-1.2$, while the majority of clusters (8 in our sample) display intermediate metallicities ($-1\lesssim$\[Fe/H\]$\lesssim-0.5$) and sub-solar \[$\alpha$/Fe\] values. This indicates that NGC 4449’s clusters have typically formed from a medium already enriched in the products of SNe type Ia and relatively poor, compared to the solar neighborhood, in the products of massive stars and SN II (either because of the slower star formation history and/or because of preferential loss of $\alpha$-elements due to galactic winds). From our data there are hints that the [*knee*]{} occurs at \[Fe/H\] between $-1.2$ and $-1.0$, although a larger cluster sample would be needed to reinforce this result. In particular, we do not observe a very metal poor (i.e., \[Fe/H\]$<-1.5$), $\alpha$-enhanced cluster population in our sample. Whether such a population is present at larger galactocentric distances than those sampled by our data [e.g., @strader12 identified possible cluster candidates in the outer halo of NGC 4449], or if such a component is absent due to the lack of a major star formation event at early times in NGC 4449, cannot be established from our data. Indeed, resolved-star color-magnitude diagrams indicate a low activity at ancient epochs in dIrrs as opposed to an early peak in the star formation history of dSphs [e.g. @monelli10; @weisz14; @skillman17]; this behaviour could naturally explain the different chemical paths observed in dSphs and dIrrs. The idea that the typical globular cluster population in NGC 4449 is younger than in the MW is reinforced by our serendipitous detection of a PN in cluster CL 58 (see Section \[cl58\_section\]). 
A useful comparison is that between NGC 4449 and the LMC, the irregular whose star cluster system is the best studied. The LMC globular clusters span a wide age/metallicity range, with both old, metal-poor and young, metal-rich objects, due to its complex star formation history. Chemical abundances of individual RGB stars in LMC clusters have been derived by several authors [e.g. @hill00; @j06; @mucciarelli10]. These studies show that old LMC clusters display a behavior of \[$\alpha$/Fe\] as a function of \[Fe/H\] similar to the one observed in the Milky Way stars, with old LMC clusters having \[$\alpha$/Fe\]$\sim0.3$; these clusters should therefore have formed during a rapid star formation event that occurred at early times, when the ISM was not yet significantly enriched by the products of SNIa. However, the majority of clusters in the LMC exhibit relatively “young” ages (1-3 Gyr) and intermediate metallicities (\[Fe/H\]$\sim-0.5$), coupled with low \[$\alpha$/Fe\] values, more chemically similar to the clusters in NGC 4449. A possible explanation is that the LMC has formed the majority of its GCs from a medium already enriched by SNIa. Two of the most luminous clusters in our sample, namely clusters CL 77 and CL 79, were also studied by @strader12 through spectroscopy (their clusters B15 and B13, respectively). These clusters appear peculiar in that they are quite massive ($M_{\star}\sim2\times10^6 M_{\odot}$, comparable to the mass of $\omega$Cen in the MW, which is thought to be the remnant nucleus of an accreted dwarf galaxy) and elongated ($1-b/a\sim0.2-0.3$); cluster CL 77 is furthermore associated in projection with two tails of blue stars whose shape is reminiscent of tidal tails, and has therefore been suggested to be the nucleus of a former gas-rich satellite galaxy undergoing tidal disruption by NGC 4449 [@annibali12]. 
For clusters CL 77 and CL 79, @strader12 derived ages of 11.6$\pm$1.8 Gyr and 7.1$\pm$0.5 Gyr, a common metallicity of \[Fe/H\] $=-1.12\pm0.06$ dex, and \[$\alpha$/Fe\] values of $-0.2\pm0.1$ and $-0.1\pm0.1$, respectively; these results are marginally consistent with our age estimates of 12$\pm$2 Gyr and 11$\pm$2 Gyr, \[Fe/H\] metallicities of $-0.8\pm0.2$ dex and $-0.9\pm0.2$ dex, and \[$\alpha$/Fe\] ratios of $-0.5\pm0.2$ and $-0.4\pm0.2$, respectively. While the relatively poor signal-to-noise of the @strader12 spectra did not allow them to derive the stellar population parameters for clusters in NGC 4449 other than CL 77 and CL 79, our study shows that the presence of sub-solar \[$\alpha$/Fe\] values is a typical characteristic of NGC 4449’s clusters and not just a peculiarity of clusters CL 77 and CL 79. \[N/$\alpha$\] ratios --------------------- Besides displaying sub-solar \[$\alpha$/Fe\], the majority of clusters in NGC 4449 appear peculiar also in their CN content, as quantified by the CN$_1$ and CN$_2$ indices: compared to Galactic globular clusters (or to M31 globular clusters), they show lower CN indices at the same metallicities. The CN absorption strength is sensitive to the abundances of both carbon and nitrogen; however, the CN lines observed in Galactic globular clusters are well reproduced by models in which N, rather than C, is enhanced with respect to the solar composition [e.g. @tmb03; @puzia05]: the reason is that a C enhancement would also result in a significant increment of the C$_2$4668 and Mg$_1$ indices, in disagreement with the observations. Using Lick indices, @tmb03 derived \[N/$\alpha$\]$\sim$0.5 for Galactic globular clusters. 
Recent progress in studies of globular clusters has shown that they contain multiple stellar populations [e.g., @piotto15]: at least two populations of stars are present, one with the same chemical pattern as halo-field stars, and a second one enhanced in helium, nitrogen and sodium and depleted in carbon and oxygen [e.g., @gratton12]. A possible scenario to explain this multiplicity is that the second population has formed from material enriched by the ejecta of intermediate-mass asymptotic-giant branch stars [e.g. @renzini15; @dantona16]. While the presence of multiple populations seems so far to be ubiquitous in globular clusters, the fraction of first over second-generation stars varies from cluster to cluster, with the first generation of stars typically being the minority; therefore, we would expect the second, N-enriched generation of stars to dominate the integrated cluster light and to drive the large N-enrichment observed in the integrated spectra of Galactic globular clusters. The scenarios to interpret the cluster multiple populations are still very uncertain though, and none of them yet provides a good explanation for all their observed properties [@bastian17]. With the exception of CL 72, CL 75, and CL 58, the CN$_1$ and CN$_2$ indices for the clusters in NGC 4449 lie slightly below the models with \[N/$\alpha$\], \[C/$\alpha$\]$=$0. Unfortunately, our data do not allow us to establish if the low CN absorption is due to either N or C depletion: in fact, as already discussed in Section \[nitrogen\_carbon\], the C$_2$4668 and Mg$_1$ indices, which are highly sensitive to C and could in principle allow for a discrimination between the two effects, show a very large scatter in our sample and are not well calibrated. 
Under the assumption that the low observed CN strength is due to a N depletion, we computed N/$\alpha$ ratios for the clusters and found $-0.5\lesssim[N/\alpha]\lesssim-0.1$ (while clusters CL 72, CL 75, and CL 58 show highly super-solar ratios of \[N/$\alpha$\]$>0.5$). We stress that, although our data do not permit discrimination between N or C depletion, we can definitively exclude for the majority of clusters in NGC 4449 the presence of a major N-enriched component similar to that observed for old Galactic globular clusters. However, we cannot exclude the presence of intrinsic light-element variations, although to a different extent with respect to MW GCs [see e.g. the case of cluster NGC 419 in the SMC, @martocchia17]. Comparison between gas and star metallicities --------------------------------------------- From our spectroscopic study, we derived cluster total metallicities in the range $-1.3 \lesssim \log Z/Z_{\odot} \lesssim -1.0$ (see Table \[agezafe\_tab\]), for an adopted solar metallicity of $Z_{\odot}=0.018$. Since the total metal fraction is mainly driven by oxygen, it is sensible to compare these values with oxygen abundance determinations in NGC 4449’s H II regions, which trace the present-day ISM composition. @berg12 and @annibali17 derived metallicities in the range $8.26\lesssim 12 + \log(O/H) \lesssim8.37$; this translates, for an assumed solar oxygen abundance of $12+\log(O/H)_{\odot}=8.83\pm0.06$ [@grevesse98][^7], into $-0.57\lesssim\log((O/H)/(O/H)_{\odot})\lesssim-0.46$. This result indicates a $\gtrsim$0.5 dex metal enrichment in NGC 4449 within the last $\sim$10 Gyr, which is the typical age of the clusters in our sample. 
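The quoted logarithmic offsets are simple differences between the measured and solar $12+\log(O/H)$ values; a one-line check using the numbers in the text:

```python
# Converting 12+log(O/H) to abundance relative to solar, log((O/H)/(O/H)_sun):
log_OH_solar = 8.83          # solar oxygen abundance, Grevesse & Sauval (1998)
for log_OH in (8.26, 8.37):  # H II region range quoted in the text
    print(f"12+log(O/H) = {log_OH} -> [O/H] = {log_OH - log_OH_solar:+.2f}")
```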
A spectroscopic estimate of NGC 4449’s stellar metallicity was also derived by @kar13: they inferred an average $\sim$1/5 solar metallicity for the stellar population older than 1 Gyr, which is “qualitatively” consistent with the $\sim$1/10 solar metallicity derived in this paper for our $\sim$10 Gyr old stellar clusters. In a future paper (Romano et al. in preparation) we will use all the available information on the chemical properties of the stellar and gaseous components in NGC 4449 to run chemical evolution models that will allow us to produce a self-consistent scenario for the evolution of this galaxy. Conclusions =========== We acquired intermediate-resolution (R$\sim$1000) spectra in the range $\sim$3500$-$10,000 Å with the MODS instrument on the LBT for a sample of 14 star clusters in the irregular galaxy NGC 4449. The clusters were selected from the sample of 81 young and old clusters of @anni11. With the purpose of studying the integrated-light stellar population properties, we derived for our clusters Lick indices in the optical and the CaII triplet index in the near-infrared. The indices were then compared with simple stellar population models to derive ages, metallicities, \[$\alpha$/Fe\] and \[N$/\alpha$\] ratios. Of the 14 clusters observed with MODS, 3 are affected by a major contamination from the diffuse ionized gas in NGC 4449 (in particular, one of these is a few hundred Myr old cluster). Therefore, we are left with a sub-sample of 11 clusters for which we could perform a reliable stellar population analysis. For this sub-sample, the main results are: - The clusters have intermediate metallicities, in the range $-1.2\lesssim$\[Fe/H\]$\lesssim-0.7$; the ages are typically older than $\sim$9 Gyr, although determined with large uncertainties. No cluster with iron metallicity as low ($-2\lesssim$\[Fe/H\]$\lesssim-1.2$) as in Milky Way globular clusters is found in our sample. 
- The majority of clusters exhibit sub-solar \[$\alpha$/Fe\] ratios (with a peak at \[$\alpha$/Fe\]$\sim-0.4$), suggesting that they formed from a medium already enriched in the products of Type Ia supernovae. Sub-solar \[$\alpha$/Fe\] values are expected in galaxies that formed stars inefficiently and at a low rate and/or where galactic winds possibly contributed to the expulsion of the $\alpha$-elements. - Besides the low \[$\alpha$/Fe\], the majority of clusters in NGC 4449 appear also to be under-abundant in CN compared to Milky Way halo globular clusters. A possible explanation is the lack of a major contribution from N-enriched, second-generation stars such as those detected in the old, metal-poor galactic globular clusters. Intrinsic light-element variations may still be present within NGC 4449’s GCs, but to a different extent with respect to MW clusters. - We report the serendipitous detection of a PN within cluster CL 58 out of a sample of 20 “old” ($>$1 Gyr, from integrated colors) clusters in NGC 4449 covered by ACS/F502N and ACS/F555M images. PNe in old MW and M31 globular clusters are extremely rare, and our result reinforces the idea that the cluster population in NGC 4449 is typically younger than in these two giant spirals. - We use the cluster velocities to infer the dynamical mass of NGC 4449. We estimate the mass of NGC4449 inside 2.88 kpc to be M($<$2.88 kpc)=$3.15^{+3.16}_{-0.75} \times 10^9~M_\odot$; the upper mass limit within 30$^{\prime}$ is $\sim 3.6^{+3.6}_{-0.9} \times 10^{10}$ M$_\odot$, significantly lower than the mass derived in the literature from HI data. Acknowledgements {#acknowledgements .unnumbered} ================ This work was based on LBT/MODS data. The LBT is an international collaboration among institutions in the United States, Italy and Germany. 
LBT Corporation partners are: the University of Arizona on behalf of the Arizona Board of Regents; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Leibniz Institute for Astrophysics Potsdam, and Heidelberg University; the Ohio State University, and the Research Corporation, on behalf of the University of Notre Dame, University of Minnesota and University of Virginia. We acknowledge the support from the LBT-Italian Coordination Facility for the execution of observations, data distribution and reduction. F. A. and M. T. acknowledge funding from INAF PRIN-SKA-2017 program 1.05.01.88.04. We thank the anonymous referee for his/her very nice report.

Lick and Ca II triplet indices derived for globular clusters in NGC 4449
========================================================================

| Name  | CN$_1$ | CN$_2$ | Ca4227 | G4300 | Fe4383 | Ca4455 | Fe4531 | C$_2$4668 | H$\beta$ | Fe5015 | Mg1 | Mg2 | Mgb | Fe5270 | Fe5335 | Hga | Hgf | CaT\* |
|-------|--------|--------|--------|-------|--------|--------|--------|-----------|----------|--------|-----|-----|-----|--------|--------|-----|-----|-------|
| CL 20 | -0.10±0.01 | -0.06±0.02 | 0.3±0.2 | 1.4±0.5 | -0.5±0.7 | 0.4±0.3 | 0.0±0.5 | -0.9±0.9 | 2.0±0.3 | 1.7±0.7 | 0.042±0.009 | 0.078±0.009 | 1.1±0.3 | 1.3±0.4 | 1.1±0.4 | -0.2±0.5 | 0.6±0.3 | 7.0±0.8 |
| CL 67 | -0.13±0.02 | -0.07±0.03 | 0.0±0.4 | -2.8±0.9 | -1.1±1.3 | -0.2±0.6 | 0.1±1.0 | -6±2 | 6.4±0.6 | 0.0±2.0 | 0.04±0.02 | 0.08±0.02 | 0.7±0.8 | 0.4±0.9 | 0.1±1.0 | 10.1±0.7 | 7.3±0.4 | 7.0±3.0 |
| CL 76 | -0.081±0.005 | -0.07±0.01 | 0.34±0.08 | 3.1±0.2 | 0.2±0.2 | 1.0±0.2 | 2.6±0.1 | -0.05±0.34 | 2.0±0.1 | 2.1±0.3 | 0.058±0.006 | 0.125±0.004 | 1.4±0.1 | 1.7±0.1 | 1.5±0.1 | -1.6±0.2 | 0.1±0.1 | 6.5±0.3 |
| CL 72 | 0.02±0.01 | 0.07±0.02 | 0.8±0.2 | 3.4±0.4 | 1.4±0.5 | 0.0±0.3 | 2.0±0.4 | -0.2±0.6 | 1.6±0.2 | 3.9±0.6 | 0.054±0.008 | 0.111±0.007 | 1.6±0.2 | 1.9±0.3 | 1.3±0.3 | -1.7±0.4 | 0.4±0.2 | 6.9±0.6 |
| CL 75 | -0.02±0.01 | 0.02±0.02 | 0.6±0.2 | 3.2±0.4 | 1.0±0.6 | 0.1±0.3 | 1.4±0.5 | 1.5±0.8 | 1.9±0.3 | 2.1±0.7 | 0.044±0.009 | 0.078±0.009 | 0.8±0.3 | 1.5±0.3 | 1.0±0.4 | 0.2±0.5 | 0.4±0.3 | 3.5±0.8 |
| CL 79 | -0.076±0.005 | -0.05±0.01 | 0.55±0.08 | 2.8±0.2 | 1.2±0.2 | 0.7±0.2 | 2.2±0.2 | 1.9±0.3 | 2.0±0.1 | 2.8±0.4 | 0.058±0.006 | 0.107±0.004 | 1.2±0.1 | 1.5±0.1 | 1.1±0.1 | -1.2±0.2 | 0.8±0.1 | 6.4±0.3 |
| CL 58 | -0.01±0.02 | 0.02±0.02 | 0.6±0.3 | 2.7±0.6 | 0.7±0.9 | 1.0±0.4 | 2.1±0.7 | 0.7±1.0 | 2.6±0.5 | 1.8±1.0 | 0.08±0.01 | 0.14±0.01 | 1.7±0.5 | 1.4±0.5 | 1.2±0.6 | -0.6±0.7 | 0.8±0.4 | 7±1 |
| CL 52 | -0.072±0.009 | -0.04±0.01 | 0.5±0.2 | 2.9±0.4 | 0.9±0.4 | 1.1±0.2 | 1.9±0.3 | 0.5±0.6 | 2.2±0.2 | 3.4±0.5 | 0.066±0.008 | 0.118±0.007 | 1.2±0.2 | 1.5±0.3 | 1.2±0.3 | -1.2±0.3 | 0.9±0.2 | 6.9±0.6 |
| CL 3  | -0.077±0.005 | -0.03±0.01 | 0.61±0.08 | 2.5±0.2 | 2.1±0.2 | 0.2±0.2 | 0.5±0.2 | -0.7±0.4 | 1.9±0.1 | 3.2±0.4 | 0.070±0.006 | 0.112±0.005 | 1.9±0.1 | 1.2±0.2 | 1.4±0.2 | -1.1±0.2 | 1.2±0.1 | 6.2±0.4 |
| CL 77 | -0.098±0.007 | -0.08±0.01 | 0.6±0.1 | 3.1±0.3 | 1.6±0.3 | 1.0±0.2 | 2.2±0.2 | 3.3±0.4 | 2.0±0.1 | 1.6±0.4 | 0.052±0.007 | 0.100±0.005 | 1.1±0.2 | 1.5±0.2 | 0.9±0.2 | -2.4±0.3 | 0.7±0.1 | 6.1±0.4 |
| CL 27 | -0.092±0.008 | -0.07±0.01 | 0.7±0.1 | 2.0±0.3 | 2.2±0.4 | 0.8±0.2 | 2.0±0.3 | 1.8±0.5 | 1.9±0.2 | 2.9±0.5 | 0.069±0.007 | 0.117±0.007 | 1.5±0.2 | 1.7±0.2 | 1.3±0.2 | -1.7±0.3 | 0.9±0.2 | 6.1±0.5 |
| CL 24 | -0.11±0.02 | -0.09±0.02 | 0.0±0.3 | 3.1±0.6 | 0.6±0.8 | -0.2±0.4 | 2.4±0.7 | 4.7±1.0 | 1.3±0.4 | 3±1 | 0.09±0.01 | 0.13±0.01 | 1.5±0.5 | 0.8±0.5 | 0.8±0.6 | -2.6±0.7 | 0.1±0.4 | 5±1 |

: Lick and Ca II triplet indices derived for globular clusters in NGC 4449; each entry is given as value±error.[]{data-label="indices_table"}

Adamo, A., [Ö]{}stlin, G., & Zackrisson, E. 2011a, , 417, 1904 Adamo, A., [Ö]{}stlin, G., Zackrisson, E., et al. 2011b, , 415, 2388 Adamo, A., Kruijssen, J. M. D., Bastian, N., Silva-Villa, E., & Ryon, J. 2015, , 452, 246 Annibali, F., Bressan, A., Rampazzo, R., Zeilinger, W. W., & Danese, L. 2007, , 463, 455 Annibali, F., Aloisi, A., Mack, J., et al. 
2008, , 135, 1900 Annibali, F., Tosi, M., Monelli, M., et al. 2009, , 138, 169 Annibali, F., Gr[ü]{}tzbauch, R., Rampazzo, R., Bressan, A., & Zeilinger, W. W. 2011a, , 528, A19 Annibali, F., Tosi, M., Aloisi, A., & van der Marel, R. P. 2011b, , 142, 129 Annibali, F., Tosi, M., Aloisi, A., van der Marel, R. P., & Martinez-Delgado, D. 2012, , 745, L1 Annibali, F., Cignoni, M., Tosi, M., et al. 2013, , 146, 144 Annibali, F., Tosi, M., Romano, D., et al. 2017, , 843, 20 Bajaja, E., Huchtmeier, W. K., & Klein, U. 1994, , 285, 385 Bastian, N., & Lardo, C. 2017, arXiv:1712.01286 Berg, D. A., Skillman, E. D., Marble, A. R., et al. 2012, , 754, 98 Bertelli, G., Mateo, M., Chiosi, C., & Bressan, A. 1992, , 388, 400 Billett, O. H., Hunter, D. A., & Elmegreen, B. G. 2002, , 123, 1454 Bond, H. E. 2015, , 149, 132 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2012, , 752, L5 Bressan, A., Chiosi, C., & Fagotto, F. 1994, , 94, 63 Caffau, E., Ludwig, H.-G., Steffen, M., et al. 2008, , 488, 1031 Caffau, E., Maiorca, E., Bonifacio, P., et al. 2009, , 498, 877 Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2006, , 450, 523 Cenarro, A. J., Cardiel, N., Gorgas, J., et al. 2001, , 326, 959 Cenarro, A. J., Gorgas, J., Cardiel, N., Vazdekis, A., & Peletier, R. F. 2002, , 329, 863 Cenarro, A. J., Cardiel, N., Vazdekis, A., & Gorgas, J. 2009, , 396, 1895 Cignoni, M., Sabbi, E., van der Marel, R. P., et al. 2016, , 833, 154 Concas, A., Pozzetti, L., Moresco, M., & Cimatti, A. 2017, , 468, 1747 Cook, D. O., Seth, A. C., Dale, D. A., et al. 2012, , 751, 100 Conselice, C. J. 2006, , 639, 120 Dalcanton, J. J., Williams, B. F., Seth, A. C., et al. 2009, , 183, 67 D’Antona, F., Vesperini, E., D’Ercole, A., et al. 2016, , 458, 2122 Dolphin, A. 1997, New Astronomy, 2, 397 Fagotto, F., Bressan, A., Bertelli, G., & Chiosi, C. 1994, , 105, 29 Fagotto, F., Bressan, A., Bertelli, G., & Chiosi, C. 1994, , 104, 365 Filippenko, A. V. 1982, , 94, 715 Flores-Dur[á]{}n, S. 
N., Pe[ñ]{}a, M., & Ruiz, M. T. 2017, , 601, A147 Gallart, C., Aparicio, A., Bertelli, G., & Chiosi, C. 1996, , 112, 1950 Garc[í]{}a-Rojas, J., Pe[ñ]{}a, M., Flores-Dur[á]{}n, S., & Hern[á]{}ndez-Mart[í]{}nez, L. 2016, , 586, A59 Gillett, F. C., Jacoby, G. H., Joyce, R. R., et al. 1989, , 338, 862 Goddard, Q. E., Bastian, N., & Kennicutt, R. C. 2010, , 405, 857 Gonz[á]{}lez, J. J. 1993, Ph.D. Thesis, Gratton, R. G., Carretta, E., & Bragaglia, A. 2012, , 20, 50 Greggio, L., & Renzini, A. 1983, , 54, 311 Grevesse, N., & Sauval, A. J. 1998, , 85, 161 Grocholski, A. J., van der Marel, R. P., Aloisi, A., et al. 2012, , 143, 117 Guseva, N. G., Izotov, Y. I., Stasi[ń]{}ska, G., et al. 2011, , 529, AA149 Hanuschik, R. W. 2003, , 407, 1157 Haurberg, N. C., Rosenberg, J., & Salzer, J. J. 2013, , 765, 66 Hasselquist, S., Shetrone, M., Smith, V., et al. 2017, , 845, 162 Hwang, N., Park, H. S., Lee, M. G., et al. 2014, , 783, 49 Hill, V., Fran[ç]{}ois, P., Spite, M., Primas, F., & Spite, F. 2000, , 364, L19 Hunter, D. A., Wilcots, E. M., van Woerden, H., Gallagher, J. S., & Kohle, S. 1998, , 495, L47 Hunter, D. A., O’Connell, R. W., Gallagher, J. S., & Smecker-Hane, T. A. 2000, , 120, 2383 Hunter, D. A. 2001, , 559, 225 Hunter, D. A., Rubin, V. C., Swaters, R. A., Sparke, L. S., & Levine, S. E. 2002, , 580, 194 Hunter, D. A., Rubin, V. C., Swaters, R. A., Sparke, L. S., & Levine, S. E. 2005, , 634, 281 Hunter, D. A., Elmegreen, B. G., & Gehret, E. 2016, , 151, 136 Izotov, Y. I., & Thuan, T. X. 1999, , 511, 639 Jacoby, G. H., Morse, J. A., Fullton, L. K., Kwitter, K. B., & Henry, R. B. C. 1997, , 114, 2611 Jacoby, G. H., Ciardullo, R., De Marco, O., et al. 2013, , 769, 10 Jacoby, G. H., De Marco, O., Davies, J., et al. 2017, , 836, 93 Johnson, J. A., Ivans, I. I., & Stetson, P. B. 2006, , 640, 801 Karczewski, O. [Ł]{}., Barlow, M. J., Page, M. J., et al. 2013, , 431, 2493 Kaufer, A., Venn, K. A., Tolstoy, E., Pinte, C., & Kudritzki, R.-P. 
2004, , 127, 2723 Kauffmann, G., & White, S. D. M. 1993, , 261, Kirby, E. N., Cohen, J. G., Smith, G. H., et al. 2011, , 727, 79 Kniazev, A. Y., Grebel, E. K., Pustilnik, S. A., Pramskij, A. G., & Zucker, D. B. 2005, , 130, 1558 Korn, A. J., Maraston, C., & Thomas, D. 2005, , 438, 685 Kumari, N., James, B. L., & Irwin, M. J. 2017, , 470, 4618 Lanfranchi, G. A., & Matteucci, F. 2003, , 345, 71 Larsen, S. S. 2008, , 477, L17 Larsen, S. S., & Richtler, T. 2000, , 354, 836 Larsen, S. S., Brodie, J. P., Forbes, D. A., & Strader, J. 2014, , 565, A98 Leaman, R., Cole, A. A., Venn, K. A., et al. 2009, , 699, 1 Leaman, R., Venn, K. A., Brooks, A. M., et al. 2013, , 767, 131 Leaman, R., Venn, K., Brooks, A., et al. 2014, , 85, 504 Lemasle, B., de Boer, T. J. L., Hill, V., et al. 2014, , 572, A88 Lilly, S. J., Tresse, L., Hammer, F., Crampton, D., & Le Fevre, O. 1995, , 455, 108 Magrini, L., Leisy, P., Corradi, R. L. M., et al. 2005, , 443, 115 Marconi, G., Matteucci, F., & Tosi, M. 1994, , 270, 35 Martin, C. L. 1998, , 506, 222 Mart[í]{}nez-Delgado, D., Romanowsky, A. J., Gabany, R. J., et al. 2012, , 748, L24 Martocchia, S., Bastian, N., Usher, C., et al. 2017, , 468, 3150 Mateluna, R., Geisler, D., Villanova, S., et al. 2012, , 548, A82 Matteucci, F., & Tosi, M. 1985, , 217, 391 Matteucci, F., & Recchi, S. 2001, , 558, 351 McQuinn, K. B. W., Skillman, E. D., Cannon, J. M., et al. 2010a, , 721, 297 McQuinn, K. B. W., Skillman, E. D., Cannon, J. M., et al. 2010b, , 724, 49 Minniti, D., & Rejkuba, M. 2002, , 575, L59 Monelli, M., Gallart, C., Hidalgo, S. L., et al. 2010, , 722, 1864 Mucciarelli, A., Origlia, L., & Ferraro, F. R. 2010, , 717, 277 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, , 490, 493 Norris, J. E., Yong, D., Venn, K. A., et al. 2017, , 230, 28 Origlia, L., Rich, R. M., & Castro, S. 2002, , 123, 1559 Osterbrock, D. E. 1989, Research supported by the University of California, John Simon Guggenheim Memorial Foundation, University of Minnesota, et al. 
Mill Valley, CA, University Science Books, 1989, 422 p., Pancino, E., Romano, D., Tang, B., et al. 2017, , 601, A112 Pease, F. G. 1928, , 40, 342 Pellerin, A., Meyer, M. M., Calzetti, D., & Harris, J. 2012, , 144, 182 Pe[ñ]{}a, M., Stasi[ń]{}ska, G., & Richer, M. G. 2007, , 476, 745 Pilyugin, L. S., Grebel, E. K., & Zinchenko, I. A. 2015, , 450, 3254 Piotto, G., Milone, A. P., Bedin, L. R., et al. 2015, , 149, 91 Puzia, T. H., Kissler-Patig, M., Brodie, J. P., & Schroder, L. L. 2000, , 120, 777 Puzia, T. H., Saglia, R. P., Kissler-Patig, M., et al. 2002, , 395, 45 Puzia, T. H., Perrett, K. M., & Bridges, T. J. 2005, , 434, 909 Puzia, T. H., & Sharina, M. E. 2008, , 674, 909-926 Rampazzo, R., Annibali, F., Bressan, A., et al. 2005, , 433, 497 Renzini, A., & Voli, M. 1981, , 94, 175 Renzini, A., D’Antona, F., Cassisi, S., et al. 2015, , 454, 4197 Rich, R. M., Collins, M. L. M., Black, C. M., et al. 2012, , 482, 192 Sacchi, E., Annibali, F., Cignoni, M., et al. 2016, , 830, 3 Sacchi, E. et al, ApJ submitted Serven, J., & Worthey, G. 2010, , 140, 152 Sharina, M. E., Puzia, T. H., & Krylatyh, A. S. 2007, Astrophysical Bulletin, 62, 209 Sharina, M. E., Chandar, R., Puzia, T. H., Goudfrooij, P., & Davoust, E. 2010, , 405, 839 Shetrone, M. D., Bolte, M., & Stetson, P. B. 1998, , 115, 1888 Shetrone, M. D., C[ô]{}t[é]{}, P., & Sargent, W. L. W. 2001, , 548, 592 Smith, J. D. T., Armus, L., Dale, D. A., et al. 2007, , 119, 1133 Storey, P. J., & Hummer, D. G. 1995, , 272, 41 Strader, J., Brodie, J. P., & Huchra, J. P. 2003, , 339, 707 Strader, J., Brodie, J. P., Cenarro, A. J., Beasley, M. A., & Forbes, D. A. 2005, , 130, 1315 Strader, J., Seth, A. C., & Caldwell, N. 2012, , 143, 52 Skillman, E. D., Monelli, M., Weisz, D. R., et al. 2017, , 837, 102 Thomas, D., Maraston, C., & Bender, R. 2003, , 339, 897 Tolstoy, E., Venn, K. A., Shetrone, M., et al. 2003, , 125, 707 Tolstoy, E., Hill, V., & Tosi, M. 2009, , 47, 371 Tornambe, A., & Matteucci, F. 
1986, , 223, 69 Tosi, M., Greggio, L., Marconi, G., & Focardi, P. 1991, , 102, 951 Trager, S. C., Worthey, G., Faber, S. M., Burstein, D., & Gonz[á]{}lez, J. J. 1998, , 116, 1 Tripicco, M. J., & Bell, R. A. 1995, , 110, 3035 van der Marel, R. P., & Kallivayalil, N. 2014, , 781, 121 van Zee, L., & Haynes, M. P. 2006, , 636, 214 Vazdekis, A., Cenarro, A. J., Gorgas, J., Cardiel, N., & Peletier, R. F. 2003, , 340, 1317 Venn, K. A., Tolstoy, E., Kaufer, A., & Kudritzki, R. P. 2004, Origin and Evolution of the Elements, 58 Watkins, L. L., Evans, N. W., & An, J. H. 2010, , 406, 264 Weidner, C., Kroupa, P., Pflamm-Altenburg, J., & Vazdekis, A. 2013, , 436, 3309 Weisz, D. R., Dolphin, A. E., Skillman, E. D., et al. 2014, , 789, 147 Worthey, G., Faber, S. M., Gonzalez, J. J., & Burstein, D. 1994, , 94, 687 Worthey, G., & Ottaviani, D. L. 1997, , 111, 377 Valdez-Guti[é]{}rrez, M., Rosado, M., Puerari, I., et al. 2002, , 124, 3157 Venn, K. A., Tolstoy, E., Kaufer, A., et al. 2003, , 126, 1326 Venn, K. A., Lennon, D. J., Kaufer, A., et al. 2001, , 547, 765 Yan, Z., Jerabkova, T., & Kroupa, P. 2017, arXiv:1707.04260 [^1]: E-mail: francesca.annibali@oabo.inaf.it [^2]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. [^3]: RBF=$\sqrt{(r/\epsilon)^2 + 1}$, where $r$ is the distance between any point and the centre of the basis function; see http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate. Rbf.html. [^4]: Tabulated index values for Lick standard stars can be found at http://astro.wsu.edu/worthey/html/system.html [^5]: Defined as $[MgFe]^{'}=\sqrt{Mgb \times ( 0.72\cdot Fe5270 + 0.28 \cdot Fe5335)}$. 
[^6]: The base model uses fitting functions [@cenarro02] calibrated on Galactic stars, and therefore reflects the MW chemical composition: \[$\alpha$/Fe\]$=$0 at solar metallicities, and \[$\alpha$/Fe\]$>0$ at sub-solar metallicities. [^7]: We adopt $12+\log(O/H)_{\odot}=8.83\pm0.06$ and $Z_{\odot}=0.018$ instead of the lower, more recent estimates of $12+\log(O/H)_{\odot}= 8.76\pm0.07$ and $Z_{\odot}=0.0156$ from @caffau08 [@caffau09] to be consistent with the solar abundance values adopted in our SSP models (see Section \[stpop\]).
--- abstract: 'We present a multiwavelength study of the central part of the Carina Nebula, including Trumpler 16 and part of Trumpler 14. Analysis of the [*Chandra X-ray Observatory*]{} archival data led to the identification of nearly 450 X-ray sources. These were then cross-identified with optical photometric and spectroscopic information available in the literature, and with deep near-infrared ($JHK_s$) imaging observations. A total of 38 known OB stars are found to be X-ray emitters. All the O stars and early B stars show the nominal relation between the X-ray and bolometric luminosities, $L_{\rm X} \sim 10^{-7} L_{\rm bol}$. A few mid- to late-type B stars are found to be associated with X-ray emission, likely attributable to T Tauri companions. We discovered 17 OB star candidates which suffer large extinction in the optical wavebands. Some 300 sources have X-ray and infrared characteristics of late-type pre-main sequence stars. Our sample presents the most comprehensive census of the young stellar population in the Carina Nebula and will be useful for the study of the star-formation history of this massive star-forming region. We also report the finding of a compact ($5\arcmin \times 4\arcmin$) group of 7 X-ray sources, all of which are highly reddened in the near-infrared and most of which are X-ray bright. The group is spatially coincident with the dark ’V’ shaped dust lane bisecting the Carina Nebula, and may be part of an embedded association. The distribution of the young stellar groups surrounding the region associated with Trumpler 16 is consistent with a triggering process of star formation by the collect-and-collapse scenario.' author: - 'Kaushar Sanchawala, Wen-Ping Chen, Hsu-Tai Lee' - 'Yasuhi Nakajima, Motohide Tamura' - 'Daisuke Baba, Shuji Sato' - 'You-Hua Chu' title: 'X-RAY EMITTING YOUNG STARS IN THE CARINA NEBULA' --- INTRODUCTION ============ Massive stars have a profound influence on neighboring molecular clouds.
On the one hand, the powerful stellar radiation and wind from even a single such star would sweep away nearby clouds and thereby prevent subsequent star formation. On the other hand, the massive star may provide “just the touch” to prompt the collapse of a molecular cloud which otherwise may not contract spontaneously. Whether massive stars play a destructive or a constructive role in cluster formation conceivably depends on the availability of cloud material within the range of action, though the details are not yet fully understood. If massive stars by and large suppress star formation, low-mass stars could exist in the immediate surroundings only if the low-mass stars predated massive star formation. In both the Orion and Lacerta OB associations, @lee05 and @leechen have found evidence of triggered star formation by massive stars. The UV photons from massive stars appear to have ionized adjacent molecular clouds and the implosive pressure then compresses the clouds to form next-generation stars of various masses, often in groups, with high star formation efficiencies. The process is self-sustaining and an entire OB association may be formed as a result. The Carina Nebula, also known as NGC3372, is a remarkable star-forming region where the most massive stars known in the Milky Way Galaxy co-exist. The Nebula, which occupies about 4 square degrees on the sky, contains at least a dozen known star clusters [@feinstein95]. The clusters with photometric and spectroscopic data are: Bochum (Bo) 10 and 11, Trumpler (Tr) 14, 15 and 16, Collinder (Cr) 228, NGC3293 and NGC3324. Tr14 and Tr16 are the most populous and youngest star clusters and are located in the central region of the Nebula. The distance modulus for Tr16, quoted from the literature, ranges from 11.8 [@levato] to 12.55 [@mj93 MJ93 hereafter] and for Tr14, from 12.20 [@feinstein83] to 12.99 [@morrell]. @walborn derived a distance of $2.5$ kpc for Tr16 using $R = 3.5$.
@crowther derived a distance of $2.6$ kpc for Tr14. @walborn73 and @morrell concluded that the two clusters are at slightly different distances, whereas @turner and @mj93 concluded that they are at the same distance. A distance of 2.5 kpc is adopted for our study. All the clusters listed above contain a total of 64 known O-type stars, the largest number for any region in the Milky Way [@feinstein95]. Tr14 and Tr16 include six exceedingly rare main-sequence O3 stars. The presence of these very young stars indicates that the two clusters are extremely young. The two clusters also contain two Wolf-Rayet stars, which are believed to have evolved from even more massive progenitors than the O3 stars (MJ93). Furthermore, Tr16 is the parent cluster of the famous luminous blue variable (LBV), $\eta$ Carinae, which is arguably the most massive star of our Galaxy (MJ93). With such a plethora of unusually massive stars, the Carina Nebula is a unique laboratory to study the massive star formation process, and the interplay among massive stars, the interstellar medium and low-mass star formation. In recent years, X-ray surveys have been very successful in defining the pre-main sequence population of young star clusters [@fei02]. X-ray emission has been detected from deeply embedded class I Young Stellar Objects (YSOs) to low-mass pre-main sequence (PMS) stars of T Tauri types, and from intermediate-mass pre-main sequence stars of Herbig Ae/Be types to zero-age main-sequence stars. For late-type main-sequence stars, from late A to K and M dwarfs, the X rays are produced in the very high temperature coronal gas, which is thought to be heated by dynamo-generated magnetic fields [@maggio]. Massive stars of O and early B types, on the other hand, emit X rays produced in shocks caused by hydrodynamic instabilities in their radiatively driven strong stellar winds [@lucy82].
The X-ray emission from classical T Tauri stars (CTTSs) or weak-lined T Tauri stars (WTTSs) is believed to be thermal emission from gas rapidly heated to temperatures of the order of $10^7$ K by magnetic reconnection events similar to solar magnetic flares, but elevated by a factor of $10^1$–$10^4$ [@fei99]. A recent work by @preibisch05 presents the correlation of the X-ray properties with different stellar parameters for a nearly complete sample of late-type PMS stars in the Orion Nebula Cluster. They concluded that the origin of X-ray emission in T Tauri stars seems to be either a turbulent dynamo working in the stellar convection zone, or a solar-like $\alpha$-$\Omega$ dynamo at the base of the convection zone if T Tauri stars are not fully convective. Among the existing methods to identify the young stellar populations in a young star cluster, the use of X-ray emission, which is nearly independent of the amount of circumstellar material around the young stars [@fei02], is the least biased, especially in the selection of weak-lined T Tauri stars which lack the standard signatures of pre-main sequence stars such as infrared excess or strong $H\alpha$ emission lines. In this paper we used the [*Chandra X-ray Observatory*]{} archival data of the Carina Nebula to select the young stellar populations of the region. We then made use of the optical photometric and spectroscopic information available in the literature to identify the counterparts of the X-ray sources. We found that more than 2/3 of the X-ray sources do not have any optical counterparts. To characterize these sources further, we used the Simultaneous InfraRed Imager for Unbiased Survey (SIRIUS) camera, mounted on the Infrared Survey Facility, South Africa, to carry out $J$, $H$, and $K_{s}$ band imaging observations.
Figure 1[^1] shows the optical image of the Nebula from the Digitized Sky Survey ($\sim 25\arcmin \times 25\arcmin$), with the $Chandra$ field marked by a square centered on Tr 16 and covering part of Tr 14, which lies to the northwest of Tr 16. The field observed in the near infrared is about the same as the field of the optical DSS image. We discuss the X-ray and NIR properties of the known OB stars of the region. We discovered 17 massive star candidates on the basis of NIR and X-ray properties similar to those of the known OB stars in the region. These candidate OB stars probably escaped previous detection because of their large extinction in the optical wavelengths. Furthermore, we identified some 300 CTTS and WTTS candidates, again on the basis of their X-ray and NIR properties. Our study therefore produces the most comprehensive young star sample in the Carina Nebula, which allows us to delineate the star formation history in this seemingly devastating environment. In particular we report the discovery of an embedded ($A_{\rm V} \sim 15$ mag) young stellar group located to the south-east of Tr16, and sandwiched between two dense molecular clouds. Similar patterns of newly formed stars in between clouds seem to encompass the Carina Nebula, a manifestation of triggered star formation by the collect-and-collapse process [@deh05]. The paper is organized as follows. §2 describes the $Chandra$ and the NIR observations and the data analysis. In §3, we present the cross-identification of $Chandra$ sources with the optical spectroscopic information (available in the literature) and with our NIR sample. We discuss the results and implications of our study in §4. §5 summarizes our results. OBSERVATIONS AND DATA REDUCTION =============================== X-ray data — $Chandra$ ----------------------- The Carina Nebula was observed by the ACIS$-$I detector of the [*Chandra X-ray Observatory*]{}. There were two observations on 1999 September 6, with observation IDs 50 and 1249 (Table 1).
We began our data analysis with the Level 1 processed event lists and applied filters for cosmic-ray afterglows, hot pixels, $ASCA$ grades (0, 2, 3, 4, 6) and status bits. Charge transfer inefficiency (CTI) and time-dependent gain corrections were not applied, because the focal plane temperature of these two observations was not $-120\arcdeg$C. Because of background flaring at the beginning of the observation for obs ID 50, we used a reduced exposure time of 8.5 ks. Therefore, the total exposure time of the two combined observations is 18120 s. The filtering process was done using the $Chandra$ Interactive Analysis of Observations (CIAO) package and following the Science Threads from the $Chandra$ X-Ray Center. We also restricted the energy range from 0.4 to 6.0 keV. This optimizes the detection of the PMS stars and reduces spurious detections. Finally, we merged the two observations into one image (Figure 2), which was used for source detection. The WAVDETECT program within CIAO was utilized to detect sources in the merged image. We ran wavelet scales ranging from 1 to 16 pixels in steps of $\sqrt{2}$ with a source significance threshold of 3$\times$10$^{-6}$. After removing some spurious detections, e.g., some sources around the partial shell of X-ray emission surrounding $\eta$ Carinae [@sew01] and along the readout trail of $\eta$ Carinae itself, we eventually obtained 454 sources. By using the merged image for source detection, we detected more than double the number of sources reported by @evans. We extracted the count of each source from the circular region centered on the WAVDETECT source position within the 95% encircled energy radius ($R$(95%EE)) [@fei02]. For the background determination, an annulus around each source between 1.2 and 1.5 $R$(95%EE) was used. Before extracting the source counts from each observation, exposure and background maps were created.
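As a minimal sketch (not taken from the paper), the wavelet scale grid described above can be generated directly, and the quoted per-pixel significance threshold implies of order ten chance detections over the detector; the ACIS-I pixel count used below is our own rough assumption, not a number quoted in the text.

```python
# Nine wavelet scales from 1 to 16 pixels in steps of sqrt(2),
# as used for the WAVDETECT run described in the text.
scales = [2 ** (i / 2) for i in range(9)]  # 2^0, 2^0.5, ..., 2^4

# Rough expected number of spurious detections for the quoted
# per-pixel threshold of 3e-6; the full-resolution ACIS-I pixel
# count (4 chips of 1024x1024) is an assumption for illustration.
N_PIXELS = 4 * 1024 * 1024
expected_spurious = 3e-6 * N_PIXELS  # about a dozen chance sources
```

A handful of expected false positives is consistent with the manual removal of spurious detections mentioned in the text.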
An exposure map was computed to take into account vignetting and chip gaps, and a monochromatic energy of 1.2 keV was used for generating it. To avoid including other sources within the background annulus of a given source, a background map was created excluding the sources within $R$(95%EE). This background map was used to obtain the source counts. We utilized the DMEXTRACT tool of CIAO to extract source counts for each of the two observations. The total count of each source was then computed by combining the two observations. Finally, the count rates were calculated for a total exposure time of 18120 s. The typical background count across the $Chandra$ field had a 3-$\sigma$ error of $\sim 1$ count.
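The background subtraction implied by the extraction regions above can be sketched as follows; this is our own illustration of the area-scaled subtraction, assuming the annulus and exposure quoted in the text.

```python
import math

EXPOSURE = 18120.0  # s, combined exposure of the two observations

def net_counts(total_src, total_bkg, r95):
    """Background-subtracted source counts.

    total_src: counts inside the circle of radius r95 (the 95%
               encircled-energy radius quoted in the text).
    total_bkg: counts inside the background annulus between
               1.2 and 1.5 * r95 around the same source.
    """
    area_src = math.pi * r95 ** 2
    area_bkg = math.pi * ((1.5 * r95) ** 2 - (1.2 * r95) ** 2)
    return total_src - total_bkg * (area_src / area_bkg)

def count_rate(total_src, total_bkg, r95):
    """Net count rate over the combined 18120 s exposure."""
    return net_counts(total_src, total_bkg, r95) / EXPOSURE
```

The source-to-annulus area ratio is fixed at $1/0.81 \approx 1.23$ regardless of $R$(95%EE), since both regions scale with the same radius.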
Two pointings (\#5 and \#6) of the April 2003 data which suffered weather fluctuations were re-observed in January 2005, for which 45 dithered frames were observed with an integration time of 20 s, yielding a total integration time of 900 s for each pointing. The typical seeing during our observations ranged from $1.\arcsec0$–$1.\arcsec4$ and the airmass from 1.2 to 1.5. The standard stars No. 9144 and 9146 from @persson were observed for photometric calibration. We used the IRAF (NOAO’s Image Reduction and Analysis Facility) package to reduce the SIRIUS data. The standard procedures for image reduction, including dark current subtraction, sky subtraction and flat field correction were applied. The images in each band were then average-combined for each pointing to achieve a higher signal-to-noise ratio. We performed photometry on the reduced images using IRAF’s DAOPHOT package [@stetson]. Since the field is crowded, we performed PSF (point spread function) photometry in order to avoid source confusion. To construct the PSF for a given image, we chose about 15 bright stars, well isolated from neighboring stars, located away from the nebulosity and not on the edge of the image. The ALLSTAR task of DAOPHOT was then used to apply the average PSF of the 15 PSF stars to all the stars in the image, from which the instrumental magnitude of each star was derived. The instrumental magnitudes were then calibrated against the standard stars observed on each night. X-RAY SOURCES AND STELLAR COUNTERPARTS ====================================== The optical spectroscopy of the stars in Tr14 and Tr16 has been done by several groups, eg., @walborn73 [@walborn82; @levato; @fitzgerald]. The latest work by MJ93 lists the brightest and the bluest stars of the two clusters. We have used this list (Table 4 in MJ93) to find the counterparts of our X-ray sources. 
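The calibration of instrumental magnitudes against the Persson standards can be sketched in its simplest form; this hedged example assumes a pure zero-point offset per night and band, ignoring color and airmass terms the full reduction may include.

```python
def zero_point(std_catalog_mag, std_instrumental_mag):
    """Zero point such that m_cal = m_inst + ZP recovers the
    catalog magnitude of the observed standard star."""
    return std_catalog_mag - std_instrumental_mag

def calibrate(instrumental_mags, zp):
    """Apply the nightly zero point to a list of instrumental magnitudes."""
    return [m + zp for m in instrumental_mags]
```

In practice one zero point would be derived per band ($J$, $H$, $K_s$) and per night from the standards No. 9144 and 9146.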
Within a $3\arcsec$ search radius, our cross-identification resulted in 30 OB stars from MJ93 as counterparts of our X-ray sources. Apart from MJ93, we also checked for any possible counterparts using SIMBAD[^2]. This resulted in another 8 OB stars of the region [@tapia], whose spectral types were determined by the photometric Q method [@json]. We also used our NIR data to search for the counterparts of the X-ray sources. Again with a $3\arcsec$ search radius, we found counterparts for 432 sources. Thus, more than 95% of the X-ray sources have NIR counterparts. For 51 of the 432 sources, the NIR photometric errors are larger than 0.1 mag in one or more bands. Most of these large photometric error cases are for stars located in pointing 5, which is the Tr16 region. The NIR photometry in this pointing is affected by the large number of bright stars and the nebulosity around $\eta$ Carinae. Since we use the NIR colors of the sources to delineate their young stellar nature, an uncertainty larger than 0.1 mag is not acceptable for this purpose. Hence, in our analysis we consider only those cases for which the photometric uncertainties are smaller than 0.1 mag in all three bands, which leaves us with 384 sources. For our analysis, we have converted the NIR photometry into the California Institute of Technology (CIT) system using the color transformations between the SIRIUS and CIT systems as given in @nakajima. RESULTS AND DISCUSSION ====================== Known OB stars -------------- Table 2 lists the X-ray sources cross-identified with known OB stars. The coordinates of each X-ray source are listed in columns (1) and (2), followed by the identifier of the optical counterpart of the X-ray source, listed in column (3). The optical $B$, and $V$ magnitudes, and the spectral type, listed in columns (4)–(6), were adopted in most cases from MJ93 and in others from @tapia.
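The $3\arcsec$ cross-matching can be sketched with a small-angle separation and a nearest-neighbor search; this is an illustrative stand-in for whatever matching tool was actually used, assuming coordinates in decimal degrees.

```python
import math

MATCH_RADIUS = 3.0  # arcsec, the search radius used in the text

def separation_arcsec(ra1, dec1, ra2, dec2):
    """Approximate angular separation in arcsec for small offsets.

    ra/dec in degrees; the RA offset is scaled by cos(dec) so that
    separations near dec ~ -60 deg are not overestimated.
    """
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return 3600.0 * math.hypot(dra, ddec)

def nearest_counterpart(src, catalog):
    """Return (index, separation) of the closest catalog entry within
    the search radius, or None if no entry matches."""
    best = None
    for i, (ra, dec) in enumerate(catalog):
        sep = separation_arcsec(src[0], src[1], ra, dec)
        if sep <= MATCH_RADIUS and (best is None or sep < best[1]):
            best = (i, sep)
    return best
```

Taking the nearest candidate inside the radius resolves cases where two catalog stars fall within $3\arcsec$ of the same X-ray source.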
The color excess of each source, $E(B-V)$, given in column (7), was also taken from MJ93, in which photometry and spectroscopy were used to estimate the intrinsic stellar $(B-V)_0$ [@fitzgerald70]. The bolometric magnitude, $M_{\rm bol}$, in column (8), was taken from @massey01. For a small number of cases, where the spectral types were taken from [@tapia], the color excesses as well as the bolometric magnitudes were estimated using their spectral types. Columns (9)–(11) list the IRSF NIR $J$, $H$ and $K_s$ magnitudes of the counterpart. Column (12) lists the X-ray counts of the sources derived by the DMEXTRACT tool of the CIAO software, as described in §2. We used WebPIMMS[^3] to derive the unabsorbed X-ray flux of the sources. To convert X-ray counts to fluxes, the Raymond-Smith plasma model with temperature $\log~T=6.65$, equivalent to $kT = 0.384 {\rm~keV}$, was adopted. For the extinction correction, the color excess $E(B-V)$ of each source was used to estimate the neutral hydrogen column density, $N_H$. The X-ray flux is given in column (13), and the X-ray luminosity, computed by adopting a distance of 2.5 kpc, is in column (14). The last column (15) contains the logarithmic ratio of the X-ray luminosity to the stellar bolometric luminosity, where the latter was derived from the bolometric magnitude, i.e., column (8). Figure 3 shows the distribution of X-ray luminosities of the known OB stars in our field. Most OB stars have $\log L_{\rm X} \ga 31{\rm ~ergs~s^{-1}}$ with the distribution peaking at $\log L_{\rm X} \sim 31.7 {\rm~ergs~s^{-1}}$. The Wolf-Rayet star (HD93162) is the brightest X-ray source in the sample, with $\log L_{\rm X} = 34.12 {\rm ~ergs~s^{-1}}$. This star has been known to be unusually bright in X rays compared to other W-R stars in the region [@evans]. Though it has been thought to be a single star, a recent W-R catalog by @hucht lists it as a possible binary (see discussions in @evans).
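The flux-to-luminosity and $L_{\rm X}/L_{\rm bol}$ steps above can be sketched as follows. The gas-to-dust conversion factor is our assumption (a commonly used value of $5.8\times10^{21}\,$cm$^{-2}$mag$^{-1}$): the paper does not quote the factor it fed to WebPIMMS, and WebPIMMS itself handles the count-to-flux step.

```python
import math

KPC_CM = 3.086e21          # cm per kpc
DIST = 2.5 * KPC_CM        # adopted distance of 2.5 kpc
L_SUN = 3.85e33            # erg/s, solar bolometric luminosity (assumed)
MBOL_SUN = 4.74            # solar bolometric magnitude (assumed)

def column_density(ebv):
    """N_H from E(B-V); the 5.8e21 cm^-2 mag^-1 factor is an assumed
    standard conversion, not quoted in the paper."""
    return 5.8e21 * ebv

def log_lx(flux):
    """log10 of the X-ray luminosity (erg/s) from the unabsorbed
    flux (erg/s/cm^2) at the adopted 2.5 kpc distance."""
    return math.log10(4.0 * math.pi * DIST ** 2 * flux)

def log_lx_over_lbol(loglx, mbol):
    """Column (15): log(Lx/Lbol), with Lbol from the bolometric magnitude."""
    lbol = L_SUN * 10 ** (0.4 * (MBOL_SUN - mbol))
    return loglx - math.log10(lbol)
```

For a typical O star ($M_{\rm bol} \approx -10$) this reproduces ratios near the canonical $\log(L_{\rm X}/L_{\rm bol}) \approx -7$ when $\log L_{\rm X} \approx 32.5$.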
Among the 38 X-ray OB stars, there are 3 B3-type, 3 B5-type and 1 B7-type stars. Mid- to late-B type stars are expected to be X-ray quiet, as they have neither stellar winds as strong as those of O or early B stars, nor the convective zones that power chromospheric/coronal activity in late-type stars. However, mid- to late-B type stars have been found to be X-ray emitters in earlier studies, e.g., @cohen. The X-ray luminosities of the mid- to late-type B stars in our sample are comparable to those of T Tauri candidates in the same sample. Although this seems to provide circumstantial evidence of CTTS or Herbig Ae/Be companions to account for the X-ray emission, it does not rule out the possibility of a so far unknown emission mechanism intrinsic to mid- to late-B stars. The X-ray luminosities of OB stars are known to satisfy a relation with the stellar bolometric luminosities, namely, $L_{\rm X} \sim 10^{-7} L_{\rm bol}$. All but a few stars in our sample satisfy this relation (Fig. 4). Among the outliers, labeled on the figure by their spectral types, only HD93162 (a W-R star) and Tr16$-$22 (an O8.5V star) are early-type stars, and hence their high $L_{\rm X}/L_{\rm bol}$ ratios are unusual. Tr16$-$22 is among the brightest X-ray sources in our sample, with $\log L_\mathrm{X} = 32.83$ ergs s$^{-1}$. It is brighter in X rays by a factor of 5–20 than other O8.5V stars and even brighter than the two O3 stars in the sample. @evans present a list of known binaries among the massive stars and discuss their X-ray luminosities against their single or binary status. A massive companion may enhance the X-ray production by colliding winds. No binary companion is known to exist for either HD93162 or Tr16$-$22 [@evans] to account for their high X-ray luminosities and high $L_{\rm X}/L_{\rm bol}$ ratios. The rest of the X-ray sources which do not satisfy the correlation are mid-B or late-B type stars.
A study by @berghofer of the X-ray properties of OB stars using the $ROSAT$ database showed that the $L_{\rm X}/L_{\rm bol}$ relation extends only down to spectral type B1.5, suggesting a possibly different X-ray emission mechanism for mid- and late-B type stars compared to O and early-B stars. In our sample, there are 3 B3-type stars which seem to satisfy this relation, while all the stars later than B3 deviate significantly from the mean $L_{\rm X}/L_{\rm bol}$ ratio of O and early-B stars. Candidate OB stars ------------------ There are 17 anonymous stars in our sample which have NIR and X-ray properties similar to those of the known OB stars in the region. These stars appear to be massive stars of O or B types, but we could not find their spectral type information in the literature, e.g., MJ93 or SIMBAD. These candidate OB stars, with their optical and NIR magnitudes along with their X-ray counts and X-ray luminosities, are listed in Table 3. To determine their X-ray fluxes from counts, we made use of WebPIMMS. For the extinction correction, we used an average $E(B-V) = 0.52$ based on Table 4 of MJ93, as we did not have the spectral class information to determine their individual color excesses. Other parameters to obtain the X-ray fluxes from the X-ray counts remain the same as for the known OB stars. We found that the use of an average value of $E(B-V)$, rather than the individual $E(B-V)$ values, in the case of the known OB stars would make a difference of a factor of two or less in the derived X-ray luminosities. Likewise for the temperature, using a $\log T$ between 6.4 and 7.1 would also make a difference of a factor of two or less in the X-ray luminosities. Hence the use of an average color excess for the candidate OB stars should not significantly affect our results. Figure 5 shows the NIR color-color diagram of the known OB stars and the candidate OB stars.
The solid curve represents the dwarf and giant loci [@bb], and the parallel dashed lines represent the reddening vectors, with $A_J/A_V = 0.282$, $A_H/A_V = 0.175$, and $A_K/A_V = 0.112$ [@rieke]. The dotted line indicates the locus for dereddened classical T Tauri stars [@meyer]. It can be seen that the candidate OB stars are either intermixed with or redder than the known OB stars. Figure 6 shows the NIR color-magnitude diagram of the known and candidate OB stars. The solid line represents the unreddened main sequence [@koornneef] at 2.5 kpc. Some candidate OB stars are very bright in NIR, with a few even brighter than $K_s = 8$ mag. In contrast, the candidate OB stars are fainter and redder than the known OB stars in the optical wavelengths (Figure 7), indicative of the effect of dust extinction, while both samples show a comparable range in X-ray luminosities (compare Figure 8 with Figure 3). Thus, it appears that these candidate OB stars have escaped earlier optical spectroscopic studies because of their large optical extinction. Addition of these massive stars expands substantially the known list of luminous stars and thus contributes significantly to the stellar energy budget of the region. PMS candidates -------------- Figures 9 and 10 show the NIR color-color and color-magnitude diagrams of all the 380 X-ray sources with NIR photometric errors less than 0.1 mag. By using the criteria given in @meyer, we find about 180 stars as CTTS candidates. Apart from the CTTS candidates, the NIR colors suggest quite a few possible weak-lined T Tauri star (WTTS) candidates. The X-ray and NIR data together hence reveal a large population of low-mass pre-main sequence candidates. The T Tauri candidates in our sample (CTTS plus WTTS) should be a fairly secure T Tauri population, given their X-ray emission and their NIR color characteristics.
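The geometry of the color-color selection can be sketched with a reddening-free index; this is an illustration built from the extinction ratios quoted above, not the full @meyer criteria, which also involve the intrinsic dwarf/giant loci.

```python
# Rieke & Lebofsky (1985) extinction ratios quoted in the text.
AJ_AV, AH_AV, AK_AV = 0.282, 0.175, 0.112

# Slope of the reddening vector in the (H-K, J-H) plane, ~1.70:
# interstellar extinction moves a star along this direction.
SLOPE = (AJ_AV - AH_AV) / (AH_AV - AK_AV)

def reddening_free_q(jh, hk):
    """(J-H) minus the reddening track through the color origin.

    Q is invariant under extinction; stars whose Q falls well below
    the values spanned by reddened photospheres lie to the right of
    the reddening band, i.e., they show intrinsic NIR excess, which
    is the CTTS signature used in the text.
    """
    return jh - SLOPE * hk
```

Because extinction changes $(J-H)$ and $(H-K)$ along the same vector, any pure-reddening displacement leaves $Q$ unchanged.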
Although much work has been done on the massive stellar content in Tr14 and Tr16, a comprehensive sample of the T Tauri population has not been obtained so far. @tapia03 presented $UBVRIJHK$ photometry of Tr14, Tr16 and two other clusters in the region, Tr15 and Cr232, and noticed some stars with NIR excess in Tr14 and Tr16. They estimated the ages of Tr14 and Tr16 to be 1–6 million years. To our knowledge, our sample represents the most comprehensive sample of the young stellar population in Tr14 and Tr16. The distribution of X-ray luminosities of the CTTS candidates is shown in Figure 11. Comparison with Figure 3 shows that the X-ray luminosities of the T Tauri candidates are on average lower, consistent with the notion that late-type stars have weaker X-ray emission. @fei05 pointed out that the X-ray luminosity functions (XLFs) of young stellar clusters show two remarkable characteristics. First, the shapes of the XLFs of different young stellar clusters are very similar to each other after the tail of the high-luminosity O stars $(\log L_{\rm X} > 31.5 ~{\rm ergs~s^{-1}})$ is omitted. Secondly, the shape of this ’universal’ XLF in the 0.5–8.0 keV energy range resembles a lognormal distribution with mean $\log L_{\rm X} \approx 29.5 ~{\rm ergs~s^{-1}}$ and standard deviation $\sigma(\log L_{\rm X}) \approx 0.9$ (see Figure 2 in @fei05). The $Chandra$ observations we used in this work include only part of Tr14. For Tr16, we can make an estimate of the total stellar population in reference to the XLF of the Orion Nebula Cluster (ONC) derived from the $Chandra$ Orion Ultradeep Project [@getman05]. The limiting X-ray luminosity of our sample is $\log L_{\rm X} \sim 30.5~{\rm ergs~s^{-1}}$.
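A back-of-envelope consistency check (our own, not the authors' method, which compares XLF shapes with the ONC): the quoted 'universal' lognormal parameters give the fraction of cluster members expected above the sample's limiting luminosity.

```python
import math

MEAN, SIGMA = 29.5, 0.9   # 'universal' XLF parameters quoted in the text

def fraction_above(loglx_limit):
    """Fraction of a lognormal XLF lying above the limiting log Lx."""
    z = (loglx_limit - MEAN) / SIGMA
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

With the quoted limit of $\log L_{\rm X} \sim 30.5$, a pure lognormal puts roughly 13% of members above the limit, the same order as the $\sim$20% completeness the authors infer from the ONC comparison; dividing the detected count by this fraction yields a total population of the same magnitude as their 1000–1300 estimate.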
Excluding the high X-ray luminosity tail, i.e., $\log L_{\rm X} > 31.5~{\rm ergs~s^{-1}}$, which includes about 30 known OB stars and the candidate OB stars described earlier, the slope of the Tr16 XLF is consistent with that of the ONC in the X-ray luminosity range of our sample. This suggests that our sample represents about 20% of the X-ray members in the cluster. We hence estimate that the total stellar population of Tr16 should be $\sim$ 1000–1300. Furthermore, the X-ray luminosities are known to be correlated with stellar masses, as found in the $ROSAT$ data [@fei93] and also in the $Chandra$ studies of the ONC [@flaccomio03; @preibisch05]. Comparing the XLF of Tr16 with the ONC XLF versus stellar mass (Figure 5 in @fei05b), we infer that our sample is about 60% complete for stars with masses larger than 1 [${M_{\odot}}$]{}, and 40% complete between 0.3–1 [${M_{\odot}}$]{}. Our deep NIR data covering the clusters Tr14, Tr16 and Cr232 will probe the even lower-mass end of the stellar population. The analysis of the complete NIR results will be presented elsewhere. A compact embedded X-ray group ------------------------------ We notice a group of 7 X-ray sources concentrated in a field of $5\arcmin \times 4\arcmin$, located south-east of Tr16 and coincident with the prominent dark ‘V’-shaped dust lane which bisects the Carina Nebula. Adopting a distance of 2.5 kpc, the physical size of this star group is about 4 pc. Each of these 7 sources has an NIR counterpart, listed in Table 6 with their coordinates, $J$, $H$, and $K_s$ magnitudes and X-ray counts. We use the star identification number in column 1 of Table 6 in further discussion. The NIR colors have been used to estimate the neutral hydrogen column density. Stars in this group are bright and suffer large amounts of reddening, as seen in the NIR color-color and color-magnitude diagrams (Figures 12 and 13). 
The brightest sources (stars 4, 6 and 7) in NIR, with $K_s \sim$ 8.5–10.5 mag, are also X-ray bright, with $L_{\rm X} \sim 10^{32}$ $\mathrm{ergs~s^{-1}}$. Star 4 is a known O4 star [@rgsmith]. Our NIR magnitudes for this star match those reported by @rgsmith. Apart from star 4, we could not find any optical photometric or spectroscopic information in the literature for the other sources. The bright NIR and X-ray stars 4, 5, and 6 can be clearly seen in the optical Digitized Sky Survey (DSS) image (Figure 14), whereas the other sources, which are highly extincted even in the NIR, are not visible at all. Figure 15 shows the IRSF $K_s$ image with the sources marked. Stars 1 and 5 are peculiar because they are highly extincted ($A_V \sim$ 15–25 mag estimated from their NIR colors), yet both are X-ray bright with $L_{\rm X} \sim 10^{33}$ $\mathrm{ergs ~ s^{-1}}$. What could be the nature of these sources? Their NIR fluxes and colors, along with their non-detection in the DSS image, imply that they could be reddened T Tauri or class I objects. But their X-ray luminosities are much higher than observed in typical T Tauri stars ($ < 10^{32} \mathrm{ergs~s^{-1}}$). One possibility is that they are heavily embedded massive stars. The remaining two sources of the group, stars 2 and 3, are relatively faint in both NIR and X-rays, and thus appear to be reddened T Tauri stars. It is worth noting that the above-mentioned group is spatially close, $\sim 7\arcmin$, to the deeply embedded object, IRAS10430$-$5931. With $\mathrm{~^{12}CO(2-1)}$ and $\mathrm{~^{13}CO(1-0)}$ observations, @megeath found this $IRAS$ source to be associated with a bright-rimmed globule with a mass of $\sim 67$ [${M_{\odot}}$]{}. They also found sources with NIR excess around this IRAS object and provided the first indication of star-formation activity in the Carina region. 
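As a rough illustration of how such extinction values follow from NIR colors, the $(J-K_s)$ color excess can be converted to $A_V$ using the reddening law adopted earlier ($A_J/A_V = 0.282$, $A_K/A_V = 0.112$); the observed and intrinsic colors below are hypothetical placeholders, not entries from Table 6.

```python
AJ_AV, AK_AV = 0.282, 0.112       # reddening law adopted in the text

def av_from_j_ks(observed, intrinsic):
    """Visual extinction from the color excess E(J-Ks) = (A_J/A_V - A_K/A_V) * A_V."""
    return (observed - intrinsic) / (AJ_AV - AK_AV)

# hypothetical star: observed J - Ks = 3.6 mag, assumed intrinsic color 0.2 mag
print(f"A_V ~ {av_from_j_ks(3.6, 0.2):.1f} mag")
```

A color excess of a few magnitudes in $J-K_s$ thus translates into $A_V$ of a few tens of magnitudes, consistent with the 15–25 mag range quoted above.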
More recently, a mid-infrared study by @nsmith discovered several clumps along the edge of the dark cloud east of $\eta$ Carinae, including the clump associated with IRAS10430$-$5931. They noted that each of these clumps is a potential site of triggered star formation owing to its location at the periphery of the Nebula, behind the ionization fronts. We compared the spatial distribution of this group with the $\mathrm{~^{12}CO(1-0)}$ observations by @brooks in Figure 16. The group of young stars is ‘sandwiched’ between two cloud peaks. It is not clear whether the group is a continuation of Tr16 obscured by the dark dust lane, or a separate OB group/association still embedded in the cloud. Figure 17 shows all the $Chandra$ X-ray sources overlaid with the $\mathrm{~^{12}CO(1-0)}$ image [@brooks]. One immediately sees a general paucity of stars toward the molecular clouds. Tr16 is “sandwiched" between the north-west and south-east cloud complexes. The X-ray sources (i.e., young stars) associated with these clouds in turn are seen either intervening between clouds or situated near the cloud surfaces facing Tr16. The morphology of young stellar groups and molecular clouds peripheral to an H II region (i.e., Tr16) fits well with the description of the collect-and-collapse mechanism for massive star formation, first proposed by @elmegreen77 and recently demonstrated observationally by @deh05 [@zav06]. The expanding ionization fronts from an H II region compress the outer layer of a nearby cloud until the gas and dust accumulate to reach the critical density for gravitational collapse to form next-generation stars, which subsequently cast out their own cavities. This collect-collapse-clear process may continue as long as massive stars are produced in the sequence and there is sufficient cloud material in the vicinity. SUMMARY ======= We detected 454 X-ray sources in the $Chandra$ image of the Carina Nebula observed in September 1999. 
About 1/3 of the X-ray sources have optical counterparts in the literature, including 38 known OB stars in the region. In comparison, our NIR observations detect counterparts for more than 95% of the X-ray sources. The X-ray luminosities of the known OB stars range over $\sim 10^{31}$–$10^{34}~{\rm ergs~s^{-1}}$, with the Wolf-Rayet star, HD 93162, being the strongest X-ray source with $\log L_{\rm X} = 34.12 \mathrm{~ergs~s^{-1}}$. The W-R star also has a very high $L_{\rm X}/L_{\rm bol}$ ratio, $\log(L_{\rm X}/L_{\rm bol}) \sim -5.39$. The only other early-type star with a high $L_{\rm X}/L_{\rm bol}$ ratio is an O8.5V type star, Tr16$-$22, which also has a very high X-ray luminosity of $\log L_{\rm X} = 32.83$ for its spectral type. All other O and early B (up to B3 type) stars satisfy the canonical relation, $L_{\rm X} \sim ~10^{-7}~L_{\rm {bol}}$. There are several mid- to late-B type stars emitting X-rays with luminosities comparable to those typical of T Tauri stars. Hence, it is possible that the X-ray emission from these mid- and late-B stars arises from T Tauri companions. We discovered 17 candidate OB stars which have escaped detection in previous optical studies because of the large dust extinction they suffer. These candidate OB stars have the same characteristics as known OB stars in terms of X-ray luminosities and NIR fluxes and colors. If most of them turn out to be bona fide OB stars, they would already amount to half the number of known OB stars found as X-ray emitters in the region and would add significantly to its stellar energy budget. The NIR colors of the X-ray counterparts show a large population of low-mass pre-main sequence stars of the classical T Tauri type or the weak-lined T Tauri type. Some 180 classical T Tauri candidates are identified, whose X-ray luminosities range between $10^{30}$ and $10^{32}~{\rm ergs~s^{-1}}$, lower than those for OB stars. 
Comparison of the X-ray luminosity function of Tr16 (about 60% complete for stars with masses 1–3 [${M_{\odot}}$]{} and 40% complete for 0.3–1 [${M_{\odot}}$]{}) with that of typical young star clusters suggests a total stellar population of $\sim$ 1000–1300 in Tr16. A compact group of highly reddened, X-ray bright and NIR bright sources is found to the south-east of Tr16. The group is associated with an $IRAS$ source and coincident with the dust lane where many mid-IR sources have been predicted to be the potential sites of triggered star-formation. This star group is “sandwiched" between two peaks of the $\mathrm{~^{12}CO(1-0)}$ emission. Such star-cloud morphology is also seen in the peripheries of the H II complex in Tr16, a manifestation of the collect-and-collapse triggering process to account for the formation of massive stars. This publication makes use of the $Chandra$ observations of the Carina Nebula made in September 1999. We made use of the SIMBAD Astronomical Database to search the optical counterparts for the $Chandra$ X-ray sources. We thank Kate Brooks for providing us with the $\mathrm{~^{12}CO(1-0)}$ data of the Carina Nebula, which were obtained with the Mopra Antenna, operated by the Australia Telescope National Facility, CSIRO, during 1996–1997. KS, WPC and HTL acknowledge the financial support of the grant NSC94-2112-M-008-017 of the National Science Council of Taiwan. Bessell, M. S. & Brett, J. M. 1988, , 100, 1134 Berghofer, T. W., Schmitt, J. Brooks, K., Whiteoak, J. B., & Storey, J. W. V. 1998, , 15, 202 Cohen, D. et al. 1997, , 487, 867 Crowther, P. A., Smith, L. J., Hillier, D. J., & Schmutz, W. 1995, , 293, 427 Deharveng, L., Zavagno, A., & Caplan, J., 2005, , 433, 565 Elmegreen, B. G.,& Lada, C. J. 1977, , 214, 725 Evans, N. R., Seward, M. I., Isobe, T., Nichols, J., Schlegel, E. M., & Wolk, S. J. 2003, , 589, 509 Feigelson, E. D., Casanova, S., Montmerle, T., & Guibert, J. 1993, , 416, 623 Feigelson, E. D., & Montmerle, T. 
1999, , 37, 363 Feigelson, E. D., Broos, P., Gaffney, J. A., Garmire, G., Hillenbrand, L. A., Pravdo, S. H., Townsley, L., & Tsuboi, Y. 2002, , 574, 258 Feigelson, E. D., & Getman, K. V. 2005, in The Initial Mass Function: Fifty Years Later, ed. E. Corbelli et al. (Dordrecht: Kluwer) Feigelson, E. D., Getman, K., Townsley, L., Garmire, G., Preibisch, T., Grosso, N., Montmerle, T., Muench, A., & McCaughrean, M. 2005, , 160, 379 Feinstein, A. 1983, A&S, 96, 293 Feinstein, A. 1995, RevMexAA, 2, 57 FitzGerald, M. P. 1970, , 4, 234 FitzGerald, M. P. & Mehta, S. 1987, , 228, 545 Flaccomio, E., Damiani, F., Micela, G., Sciortino, S., Harnden, F. R., Murray, S. S., Wolk, S. J. 2003, , 582, 398 Getman, K. V., Feigelson, E. D., Townsley, L., Bally, J., Lada, C. J., & Reipurth, B. 2002, , 575, 354 Getman, K. V. et al. 2005, , 160, 319 Johnson, H. L. & Morgan, W. W. 1953, , 117, 313 Koornneef, J. 1983, , 128, 84 Lee, H.-T., Chen, W. P., Zhang, Z. W., Hu, J. Y. 2005, , 624, 808 Lee, H.-T. & Chen, W. P., astro-ph/0509315 Levato, H. & Malaroda, S. 1981, , 93, 714 Lucy, L. B. 1982, , 255, 286 Massey, P., & Johnson, J. 1993, , 105, 980 Massey, P., et al. 2001, , 121, 1050 Maggio, A., et al. 1987, , 315, 687 Megeath, S. T., Cox, P., Bronfman, L., Roelfsema, P. R. 1996, , 305, 296 Meyer, M., Calvet, N., & Hillenbrand, L. A. 1997, , 114, 288 Morrell, N., Garcia, B. & Levato, H. 1988, , 100, 1431 Nagayama, T. et al. 2003, Proc. SPIE, 4841, 459 Nakajima, Y. et al. 2005, , 129, 776 Persson, S. E., Murphy, D. C., Krzeminski, W., Roth, M. & Rieke, M. J. 1998, , 116, 2475 Preibisch, T., & Zinnecker, H. 2002, , 123, 161 Preibisch, T. et al. 2005, , 160, 401 Rieke, G. H., & Lebofsky, M. J., 1985, ApJ, 288, 618 Seward, F. D., Butt, Y. M., Karovska, M., Prestwich, A., Schlegel, E. M., & Corcoran, M. 2001, , 553, 832 Smith, N., Egan, M. P., Carey, S., Price, S. D., Morse, J. A., Price, P. A. 2000, , 532, L145 Smith, R. G. 1987, , 227, 943 Stetson, P. B. 
1987, , 99, 191 Tapia, M., Roth, M., Marraco, H., & Ruiz, M. T. 1988, , 232, 661 Tapia, M., Roth, M., Vazquez, R. A., Feinstein, A. 2003, , 339, 44 Turner, D. G., & Moffat, A. F. J. 1980, , 192, 283 van der Hucht, K. A. 2001, NewARev., 45, 135 Walborn, N. R. 1973, , 78, 1067 Walborn, N. R. 1982, , 87, 1300 Walborn, N. R. 1995, RevMexAA, 2, 51 Whitworth, A. P., Bhattal, A. S., Chapman, S. J., Disney, M. J., & Turner, J. A. 1994, , 268, 291 Zavagno, A., Deharveng, L., Comerón, F., Brand, J., Massi, F., Caplan, J., & Russeil, D. 2006, , 446, 171 [^1]: Figures with better resolution can be obtained from http://cepheus.astro.ncu.edu.tw/kaushar.html [^2]: http://simbad.u-strasbg.fr/sim-fid.pl [^3]: http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html
--- abstract: 'Microbiology is the science of microbes, particularly bacteria. Many bacteria are motile: they are capable of self-propulsion. Among these, a significant class execute so-called run-and-tumble motion: they follow a fairly straight path for a certain distance, then abruptly change direction before repeating the process. This dynamics has something in common with Brownian motion (it is diffusive at large scales), and also something in contrast. Specifically, motility parameters such as the run speed and tumble rate depend on the local environment and hence can vary in space. When they do so, even if a steady state is reached, this is not generally invariant under time-reversal: the principle of detailed balance, which restores the microscopic time-reversal symmetry of systems in thermal equilibrium, is mesoscopically absent in motile bacteria. This lack of detailed balance (allowed by the flux of chemical energy that drives motility) creates pitfalls for the unwary modeller. Here I review some statistical-mechanical models for bacterial motility, presenting them as a paradigm for exploring diffusion without detailed balance. I also discuss the extent to which statistical physics is useful in understanding real or potential microbiological experiments.' address: 'SUPA, School of Physics and Astronomy, University of Edinburgh, JCMB Kings Buildings, Mayfield Road, Edinburgh EH9 3JZ, UK' author: - 'M. E. Cates' title: | Diffusive transport without detailed balance in motile bacteria:\ Does microbiology need statistical physics? --- =1 Introduction ============ Bacteria are unicellular organisms, capable of self-reproduction and, in many cases, motility (the biologists’ term for self-propulsion). A range of biomechanical mechanisms for this motility are shown by different species of bacteria, or in some cases by the same species under different environmental conditions. 
Among the simplest of these mechanisms is the swimming motion of species such as [*Escherichia coli*]{} (the most studied bacterium of all) [@bergbook]. Individual [*E. coli*]{} have helical flagella – whiplike appendages – each of which is forced to rotate by a biochemically powered motor located where it meets the cell body. (The cell body of an [*E. coli*]{} is about 2$\mu$m long and 0.5 $\mu$m wide; its flagella about 10 $\mu$m long and 20 nm wide.) Because of the chirality of these flagella, their clockwise and anticlockwise motion is inequivalent. One sense of rotation causes the flagella to form a coherent bundle which acts like a (low Reynolds number analogue of a) ship’s propeller, resulting in a smooth swimming motion. Initiating the other sense of rotation – even in just a subset of the bundled flagella – causes the bundle to separate, with the result that the cell starts to rotate randomly. The canonical motion of [*E. coli*]{} then consists of periods of straight line swimming (called ‘runs’) interrupted by brief bursts of rotational motion (called ‘tumbles’), see Figure 1. Tumbles are controlled by a biochemical circuit that throws flagellar motors into reverse gear every so often. A typical run lasts about 1s; to a reasonable approximation the duration of runs is exponentially distributed, so that tumbles can be viewed as Poisson events occurring with a certain rate $\alpha$. Tumbles are usually much shorter, of duration $\tau\simeq 0.1$s and often treated as instantaneous ($\tau \to 0$). In practice, tumbles may not totally randomize the orientation of the cell body, but for simplicity that assumption is often made, and we adopt it here unless otherwise stated. ![Schematized run-and-tumble dynamics of [*E. coli*]{}.](figure1.pdf){width="55mm"} The swim speed $v$ of [*E. 
coli*]{} (and other similar species such as [*Salmonella typhimurium*]{}) is around 20$\mu$m/s, so that under the simplest conditions – in which the statistics of the run-and-tumble motion is unbiased by any environmental factors – each bacterium performs a random walk, with an average step length of about 20$\mu$m, and about one step per second. The diffusion constant of this stochastic process is readily calculated, and (setting $\tau\to 0$ for simplicity) obeys $$D = \frac{v^2}{\alpha d} \label{D}$$ where $d$ is the spatial dimensionality. This diffusivity is hundreds of times larger than would arise from the Brownian motion of colloidal particles of the same size – as can be experimentally checked using a deflagellated mutant (see [@Jana]). On the other hand, Brownian motion does still matter: it sets the longest duration possible for a straight run before its direction is randomized by rotational diffusion. The run duration chosen by evolution is comparable to, but shorter than, the rotational diffusion time $\tau_B$, so that runs are fairly straight. At time- and length-scales much larger than $1/\alpha$ and $v/\alpha$ respectively, the dynamics just described is equivalent to an unbiased random walk, which is in turn equivalent to force-free Brownian motion with constant diffusivity $D$. The run-and-tumble motion of bacteria requires metabolic energy (a food source) and hence represents a far-from-equilibrium process entirely dependent on internal fluxes. Nonetheless, in the absence of environmental factors causing parameters such as $v$ and $\alpha$ to vary in space, at mesoscopic length and time scales the process maps onto free Brownian diffusion, which exhibits detailed balance. The same does not apply when the motility parameters do vary in space. Such variation creates a biased diffusion process which, in the general case, does not have detailed balance. 
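The scaling (\[D\]) is easily checked by direct simulation. The minimal sketch below (illustrative parameter values, not fitted to [*E. coli*]{}) follows independent 1D run-and-tumble particles with instantaneous tumbles, each of which picks a fresh direction at random (so half are ineffective), and estimates $D$ from the mean-square displacement.

```python
import random

def estimate_D(v=1.0, alpha=1.0, T=30.0, n=1000, dt=0.02, seed=0):
    """Estimate the diffusivity of 1D run-and-tumble particles from <x^2> = 2 D T."""
    rng = random.Random(seed)
    msd, steps = 0.0, int(T / dt)
    for _ in range(n):
        x, direction = 0.0, rng.choice((-1.0, 1.0))
        for _ in range(steps):
            if rng.random() < alpha * dt:              # a tumble picks a new direction
                direction = rng.choice((-1.0, 1.0))    # at random: half are ineffective
            x += direction * v * dt
        msd += x * x
    return msd / (n * 2.0 * T)

print(f"D_sim ~ {estimate_D():.2f}   (theory: v^2/alpha = 1.00)")
```

With $v = \alpha = 1$ in $d = 1$ the estimate falls within a few percent of $D = 1$, up to statistical noise.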
This contrasts with Brownian motion of colloids in an external potential (such as gravity) or with conservative interparticle forces: in those cases, detailed balance is still present and thermal equilibrium (the Boltzmann distribution) is achieved in steady state. In thermal equilibrium there can be no circulating fluxes of any kind, whereas for motile bacteria it is easy to construct steady-state counterexamples to that rule: we will encounter several below. For this reason, bacterial motility is offered in this article as an interesting paradigm for diffusion without detailed balance, while already comprising an experimentally well-studied and scientifically important topic in its own right. For microbiological purposes, the most important example of spatially varying run-and-tumble parameters is probably chemotaxis [@chemobook]. This is the main mechanism whereby bacteria navigate their environment. As described in more detail below, to achieve chemotaxis bacteria use a biochemical circuit that, roughly speaking, computes the change $\Delta c$, on a fixed time scale $\tau_c$, in the local concentration $c$ of a chemoattractant such as food. If this is positive, meaning that the organism is swimming in a good direction, then the tumble rate $\alpha$ is decreased from its free-swimming value $\alpha_0$. Put differently (and anthropomorphically!), the organism dynamically creates an estimate of ${\bf v}.\nabla c \simeq \Delta c/\tau_c$, and uses this vectorial information to decide how long to keep moving in the direction of ${\bf v}$ [@segall; @block]. To maximize efficiency, $\tau_c$ and $1/\alpha$ must both be as large as possible. However, were the run time $1/\alpha$ to exceed the Brownian rotation time $\tau_B$, or were $\tau_c$ to exceed the run time, the swimmer’s attempt to translate temporal into spatial information – which depends on runs being straight – would fail. 
Thus one expects $\tau_c \simeq 1/\alpha_0 \simeq \tau_B$, and evolution has indeed arranged things this way. As well as navigating towards better environments (and away from less favourable ones), bacteria may use chemotaxis to detect each other’s presence, by sensing a molecule emitted by other individuals. Although the strategic origins of the chemotactic response mechanism have recently been explored from a statistical physics viewpoint [@strong; @PGG; @clark; @kafri], most of the literature on the macroscopic consequences of chemotaxis postulates or develops models at a more coarse-grained level, as we will mainly do from now on. Chemotaxis is not the only way bacterial motility can be altered by environmental factors; some of the alternatives are explored below. For instance, many bacteria modify their behaviour directly in response to the local concentration of signalling molecules (as opposed to their gradients), including those emitted by other individuals. An example is the ‘quorum sensing’ response, which causes changes of phenotype (such as a transition to a virulent state) once the local concentration of bacteria exceeds a pre-set threshold [@quorum; @biofilms]. One other important way in which bacteria respond locally to levels of food or signalling molecules is by cell division (when food is plentiful) or cell death (when it runs out). The resulting dynamics is often modelled by a logistic-type equation for $\rho$, the local bacterial density, as $$\dot\rho = A\rho(1-\rho/\rho_0)\label{logistic}$$ where the ‘target density’ $\rho_0$ depends on the environmental conditions, and represents the highest population level sustainable under those conditions. If the density is less than $\rho_0$ the population grows; if it is above $\rho_0$, it decays. In practice cells might not die, but merely cease to breed and/or enter a non-motile dormant state; we ignore such complications here. 
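Equation (\[logistic\]) has the closed-form solution $\rho(t) = \rho_0/[1 + (\rho_0/\rho(0) - 1)\,{\rm e}^{-At}]$, which makes it a convenient check on any numerical scheme; the minimal Euler sketch below (illustrative parameter values) relaxes to the target density $\rho_0$ from either side.

```python
import math

def logistic_euler(rho_init, A=1.0, rho0=1.0, T=20.0, dt=1e-3):
    """Euler-integrate the logistic growth law d(rho)/dt = A*rho*(1 - rho/rho0)."""
    rho = rho_init
    for _ in range(int(T / dt)):
        rho += dt * A * rho * (1.0 - rho / rho0)
    return rho

def logistic_exact(t, rho_init, A=1.0, rho0=1.0):
    """Closed-form solution of the same equation."""
    return rho0 / (1.0 + (rho0 / rho_init - 1.0) * math.exp(-A * t))

for rho_init in (0.1, 2.0):   # start below and above the target density
    print(rho_init, logistic_euler(rho_init), logistic_exact(20.0, rho_init))
```

Starting above $\rho_0$ the density decays monotonically; starting below, it follows the familiar sigmoidal growth curve.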
There is a distinction between studying the biochemical aspects of microbiology (addressing control circuits, signalling pathways etc.), and studying microbes from the viewpoint of collective behaviour, diffusion, pattern formation, and hydrodynamics. The latter domain is our primary concern in what follows. Within that domain, we can broadly distinguish between two approaches. One is taken traditionally by mathematical biologists, and generally involves setting up deterministic differential equations at population level. Such equations are often targeted at a quantitative description of specific datasets; they sometimes, though not always, involve numerous fitting parameters. The power of this approach, across vast swathes of biology (from embryology through cancer growth to ecology), is surveyed by Murray [@murray]. A complementary approach, which is the main focus here, is grounded in statistical physics [@newman]. It emphasises the role of stochasticity; the identification of phase transitions in parameter space; and the use of minimal models to explore universal, or at least generic, mechanisms – even when this seriously compromises a model’s ability to quantitatively fit the data. The relevance or otherwise of this approach to real microbiology is discussed at the end of the article. Run-and-tumble models for independent particles {#independent} =============================================== We start in 1D with an idealized model as formulated by Schnitzer and others [@schnitzerberg; @schnitzer]. As befits a physics-oriented approach, we solve this model first in idealized situations before, in later sections, exploring its relevance to some (real or potential) microbiological experiments. The idealized situations include sedimentation equilibrium (particles subject to a uniform external force) and trapping in a harmonic well. 
Consider a single particle confined to the $x$ axis, and let $R(x,t)$ and $L(x,t)$ be the probability densities for finding it at $x$ and moving rightward or leftward respectively. Allow both the swim-speeds $v_{L,R}$ and tumble rates $\alpha_{L,R}$ to be different (in general) for left- and right-moving particles, and assume tumbles to be of negligible duration. Note that in 1D, half of all tumbles are ineffective in changing the direction of motion (they convert $R$ into $R$ or $L$ into $L$). Then, since tumbles are independent random switching events one has (with overdot denoting $\partial/\partial t$ and prime $\partial/\partial x$) $$\begin{aligned} \dot R &=& -(v_RR)'-\alpha_RR/2 +\alpha_LL/2 \label{Rdot} \\ \dot L &=& (v_LL)'+\alpha_RR/2 -\alpha_LL/2 \label{Ldot}\end{aligned}$$ These equations can be exactly solved in steady state for any specified dependences of the four parameters $v_{L,R}$ and $\alpha_{L,R}$ on the spatial coordinate $x$. They can also be systematically coarse-grained to give a diffusion-drift equation for the one-particle probability density $p \equiv R+L$: $$\dot p = (Dp'-Vp)' \label{pdot}$$ where explicit forms relating $D(x)$ and $V(x)$ to $v_{L,R}(x)$ and $\alpha_{L,R}(x)$ are given in [@JTPRL]. These forms are chosen to ensure that all steady states of (\[Rdot\],\[Ldot\]) and of (\[pdot\]) are identical. (Transient behaviour will differ of course, since at short times there is less information in (\[pdot\]) than in (\[Rdot\],\[Ldot\]).) The resulting steady states have some notable features. For instance, in the symmetric case, where $v_L = v_R = v(x)$ and $\alpha_L = \alpha_R = \alpha(x)$, one finds the steady-state density [@schnitzer] $$p_{ss}(x) = p_{ss}(0)\frac{v(0)}{v(x)} \label{inverse}$$ where the origin $x=0$ has been chosen as an arbitrary reference point. Thus, the probability density for symmetric run-and-tumble particles is inversely proportional to their speed, but independent of their tumble rate. 
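The inverse-speed law (\[inverse\]) can be checked against a direct simulation of the dynamics behind (\[Rdot\],\[Ldot\]). The sketch below (illustrative parameters) places a particle on a unit ring whose speed is $v = 1$ on one half and $v = 1/2$ on the other, with uniform tumble rate; by (\[inverse\]) it should spend $2/3$ of its time in the slow half.

```python
import random

def occupancy_slow_half(alpha=20.0, T=2000.0, dt=0.002, seed=1):
    """Fraction of time a run-and-tumble particle on a unit ring spends in the
    slow half [0.5, 1), where v = 0.5 (v = 1.0 on the fast half)."""
    rng = random.Random(seed)
    x, direction, slow, steps = 0.25, 1.0, 0, int(T / dt)
    for _ in range(steps):
        v = 0.5 if x >= 0.5 else 1.0
        slow += x >= 0.5
        if rng.random() < alpha * dt:
            direction = rng.choice((-1.0, 1.0))
        x = (x + direction * v * dt) % 1.0
    return slow / steps

print(f"time fraction in slow half: {occupancy_slow_half():.3f}   (1/v law: 2/3)")
```

The measured occupancy is independent of the (uniform) tumble rate, as (\[inverse\]) asserts.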
(The independence of tumble rate holds for instantaneous tumbles only; at finite tumble duration $\tau$, increasing $\alpha$ is equivalent to decreasing $v$ [@schnitzer; @JTEPL].) To those statistical physicists whose intuition has been developed mainly in the context of equilibrium systems, the $v$-dependence in (\[inverse\]) is quite strange. There is no force on these particles, so they have no potential energy. The Boltzmann distribution for isothermal Brownian particles, even with a spatially varying diffusivity $D(x)$, would have $p_{ss}$ independent of $x$, in contradiction to (\[inverse\]). (Spatially varying diffusivity arises, for instance, when colloids move in a medium of nonuniform viscosity.) On the other hand, to shoppers on the high street the result (\[inverse\]) is intuitive: like pedestrians, bacteria are unconstrained by detailed balance, and accumulate wherever they move slowly (for instance, in front of an interesting shop window). A second intriguing result concerns sedimentation. In a sedimenting system, upwards and downwards swimmers have different speeds (obeying in 1D $v_{L,R} = v\pm v_s$, where $v_s$ is a sedimentation velocity). The exact result in this case is $$p_{ss}(x) = p_{ss}(0)\exp[-x/\lambda] \label{sed}$$ where the decay length obeys $\lambda = (v^2-v_s^2)/\alpha v_s$ [@JTEPL]. The exponential form is the same as found by Perrin for Brownian colloids under sedimentation [@Perrin]; this is no different from the isothermal atmosphere (a gas under gravity) whose exponential density profile features in most undergraduate physics courses. However, in both of those equilibrium cases, the decay length is inverse in the strength of gravity, $\lambda= D/v_s$ (with $v_s = Dmg/k_BT$, where $m$ is the buoyant mass), so the thickness of the sedimented layer smoothly goes to zero as gravity goes to infinity. 
In contrast, for independent run-and-tumble particles, the layer thickness goes to zero as $v_s\to v$: complete collapse occurs at a finite threshold of gravity. This result, like the previous one, is unsurprising on reflection: for $v_s>v$, even the upward-swimming particles are moving downwards, so that there can be no steady state unless all particles are in contact with the bottom wall. The same calculation can be done in 3D, with the same conclusion (but a different functional form for $\lambda$) [@JTEPL]. Some readers may complain that this result depends on our having a fixed propulsion speed $v$, while in practice of course there is some distribution of speeds [@wilson]. Were this to be a Maxwell-Boltzmann distribution (with each bacterium somehow sampling this ergodically) then a truly Boltzmann-like result might be recovered. But that requires the distribution of swim speeds to extend, albeit with very small probability, to unbounded values. As far as we know, this is not physiologically possible: bacteria, unlike colloids or gas molecules, have some nonrelativistic maximum swim speed $v_m$, and collapse of the one-particle probability density must then still occur at $v_s> v_m$. Residual Brownian motion will change this in principle, but the resulting $\lambda$ is then almost the same as for immotile microbes (and typically of order one particle diameter or less). Similar results arise when a run-and-tumble particle is confined to a harmonic potential. Such a particle cannot escape beyond an ‘event horizon’ at $r^*$ – the radius at which its propulsive speed, when oriented outward, is balanced by the speed at which it is being pulled inward by the confining force. For large tumble rates or weak trapping ( $\alpha r^*\gg v$) the particle rarely ventures out towards $r^*$ and a near-gaussian steady state density at the trap centre is attained. (The system then equates to a Brownian particle of matching diffusivity $D$, confined in the same potential.) 
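The collapse at finite gravity can be seen in a minimal 1D simulation: a particle above a wall at $x = 0$ moves up at speed $v - v_s$ or down at $v + v_s$, and its mean height shrinks with $v_s$, tracking (up to accumulation at the wall itself) the decay length $\lambda = (v^2 - v_s^2)/\alpha v_s$ of (\[sed\]) and collapsing as $v_s \to v$. Parameter values are illustrative.

```python
import random

def mean_height(v_s, v=1.0, alpha=1.0, T=2000.0, dt=0.005, seed=2):
    """Time-averaged height of a sedimenting 1D run-and-tumble particle above a
    wall at x = 0 (the particle is held at the wall while oriented downward)."""
    rng = random.Random(seed)
    x, up, total = 1.0, True, 0.0
    for _ in range(int(T / dt)):
        if rng.random() < alpha * dt:
            up = rng.random() < 0.5
        x = max(0.0, x + (v - v_s if up else -(v + v_s)) * dt)
        total += x * dt
    return total / T

for v_s in (0.2, 0.5, 0.9):
    lam = (1.0 - v_s**2) / v_s        # bulk decay length for v = alpha = 1
    print(f"v_s = {v_s}: <x> = {mean_height(v_s):.2f}   (lambda = {lam:.2f})")
```

The time spent waiting at the wall for an upward reorientation means the measured mean height sits below the bulk value $\lambda$, but the collapse as $v_s \to v$ is unmistakable.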
In the opposite regime of rare tumbling, a particle starting anywhere within the harmonic trap will soon arrive at $r^*$ and (essentially) wait there until its next tumble. In consequence, the steady state probability density has its maximum no longer at the centre of the trap, but at $r^*$ [@JTEPL]. (Such a state, in 3D, is visible as the opening frame of Figure 5 below.) In both the above examples, where the swim speed of a single run-and-tumble particle varies with its position and/or swimming direction, the behaviours predicted in 1D and in higher dimensions are qualitatively similar (though not identical). However, a strong, qualitative dependence on dimensionality arises when not the swim speed but the tumble rate $\alpha$ depends on position and swimming direction (represented in 3D by the unit vector ${\bf \hat v}$). Schnitzer [@schnitzer] assumed a low-order multipole expansion such that $\alpha = \alpha_0 +{\mbox{\boldmath{$\alpha$}}}_1({\bf r}).{\bf\hat v}$. He found that so long as the vector field ${\mbox{\boldmath{$\alpha$}}}_1$ is conservative (${\mbox{\boldmath{$\alpha$}}}_1 = \nabla \phi$), as always holds in 1D but does not hold generally in higher dimensions, then the scalar from which it derives serves as an effective potential in a Boltzmann-like description: $$p_{ss}({\bf r}) \propto\exp[-\phi({\bf r})/v] \label{alphass}$$ On the other hand, if $\nabla\times{\mbox{\boldmath{$\alpha$}}}_1 \neq 0$, as can arise for $d>1$, then the steady state contains circulating currents and no mapping onto a thermal equilibrium system is possible. Chemotaxis ========== An important application of the above ideas, and the main motivation for studying them in much of the modelling literature [@schnitzer; @othmer1; @othmer2; @rivero], is chemotaxis. As outlined previously, chemotactic bacteria can move themselves up a chemical gradient $\nabla c$, where $c$ is the concentration of a chemoattractant, by modulating their tumbling activity. 
For a chemorepellent, the sign of the effect is reversed and the arguments given below must be modified accordingly. The standard description is to work directly at a coarse-grained level, adopting the diffusion-drift equation for the one-particle probability density (\[pdot\]), and phenomenologically asserting that the drift velocity $V$ in that equation obeys $V=\chi\nabla c$, with $\chi$ some constant (that may depend on $D$). A theory that relates $\chi$ to the microscopic run-and-tumble parameters has had to await relatively recent advances in our understanding of how chemotaxis works microscopically [@segall; @block; @PGG]. A simplified picture of this understanding is as follows. A bacterium has an on-board biochemical circuit whose job is to modulate the tumble rate according to the following integral $$\alpha = \alpha_0-\int_{-\infty}^t K(t-t')c(t')dt' \label{int}$$ Here $\alpha_0$ is the value arising when $\nabla c = 0$. This baseline value can depend on environmental factors, but is broadly independent of overall shifts in the concentration of chemophore $c$ [@block; @leibler]. This implies that $\int K(t)dt=0$. Moreover, $K(t)$ is a bi-lobed function (Figure 2) which essentially computes the change in local concentration $\Delta c$ experienced by the bacterium over a certain time scale $\tau_c$. If this change is positive, the next tumble is delayed proportionately. For simplicity we have assumed in (\[int\]) that a negative change conversely promotes tumbles, although experiments in fact suggest a delay-only, one-sided response. (Correcting this would lead to factor 2 changes in some of the results below.) The characteristic time $\tau_c$ sets the temporal scale for the bi-lobed $K(t)$ response (Figure 2). 
If this time were much longer than $1/\alpha_0$ under normal conditions (in most species, “normal” means bacteria swimming in water), then the time integral in (\[int\]) would cease to be informative of whether the swimmer is pointing up or down the chemical gradient. If on the other hand $\tau_c$ were extremely short, then detection of weak concentration changes would become unnecessarily noisy and inefficient [@bergbook]. Accordingly, one expects evolution to have set $\tau_c\alpha_0\sim 1$, and this is indeed the case [@block]. ![Schematic depiction of the kernel $K(t)$ controlling the chemotactic response of [*E. coli*]{}.](figure2.pdf){width="75mm"} Expanding (\[int\]) in weak concentration gradients, and assuming straight runs, one finds from the above considerations that $\alpha(t) = \alpha_0-\beta{\bf v}.\nabla c$, where (for a more careful analysis see [@PGG; @Otti]) $$\beta = -\int_0^\infty \exp[-\alpha_0t]K(t)dt \label{beta}$$ The exponential term inside the integral uses the unperturbed tumble rate to approximate the probability that the swimmer has not yet tumbled; once it does so, there is no longer a correlation between the temporal change of $c$ and its spatial gradient. The result implies ${\mbox{\boldmath{$\alpha$}}}_1 = -\beta v\nabla c$ and hence $\phi/v = -\beta c$. It follows that steady-state chemotaxis (in which $c$ is by some external means maintained constant in time, but nonuniform in space) can be mapped onto a Boltzmann equilibrium problem via (\[alphass\]), with $-c$ playing the role of a potential and $\beta$, defined via (\[beta\]), playing the role of inverse temperature: $$p_{ss}({\bf r}) \propto \exp[\beta c({\bf r})]\label{chemoss}$$ To match this result for the probability density using the classical approach (comprising Eq.(\[pdot\]) with $V = \chi\nabla c$) we clearly must choose $\chi = \beta D$.
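The mapping (\[chemoss\]) is easy to check in a 1D numerical sketch. The simulation below is illustrative only: the parameter values and the quadratic attractant profile $c(x) = -kx^2/2$ are assumptions, not taken from any experiment. Each particle's tumble rate is modulated as $\alpha = \alpha_0 - \beta v s\,\partial_x c$ with swim direction $s=\pm 1$, and the resulting density is compared with the Boltzmann-like prediction, here a Gaussian of variance $1/(\beta k)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: swim speed, base tumble rate, response coefficient
# beta, and an attractant peak c(x) = -k x^2 / 2 (so dc/dx = -k x).
v, alpha0, beta, k = 1.0, 1.0, 0.2, 0.5

N, dt, T = 1000, 0.01, 200.0
x = np.zeros(N)                          # particle positions
s = rng.choice([-1.0, 1.0], size=N)      # swim directions

samples = []
nsteps = int(T / dt)
for step in range(nsteps):
    # tumble-rate modulation alpha = alpha0 - beta*v*s*dc/dx, kept positive
    alpha = np.clip(alpha0 - beta * v * s * (-k * x), 0.05, None)
    tumble = rng.random(N) < alpha * dt
    # a tumble draws a fresh random heading (half the tumbles change nothing)
    s = np.where(tumble, rng.choice([-1.0, 1.0], size=N), s)
    x += v * s * dt
    if step > nsteps // 2 and step % 100 == 0:
        samples.append(x.copy())

var = np.concatenate(samples).var()
print(var)   # p_ss ~ exp(beta*c) is Gaussian with variance 1/(beta*k) = 10
```

The measured variance agrees with $1/(\beta k)$ to within statistical error, confirming that $-c$ acts as a potential at inverse temperature $\beta$; in higher dimensions the same construction maps onto equilibrium only while $\nabla\times{\mbox{\boldmath{$\alpha$}}}_1 = 0$.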
Constancy of $\beta$ in space is sufficient to ensure the absence of steady state currents, but these are present whenever $\nabla\beta \times \nabla c\neq 0$. On the other hand, for bacteria with identical phenotypes and hence identical $K(t)$, the $\beta$ value found from (\[beta\]) can still vary between experiments performed in different media (through environmental influences on $\alpha_0$), even if it is spatially uniform in each experiment. This observation will prove crucial in Section \[Chemo2\]. It should be noted here that there are significant subtleties associated with chemotaxis which are not necessarily captured by the adoption of spatially varying tumble rates in Eqs.(\[Rdot\],\[Ldot\]). Indeed, the biochemical mechanism outlined above cannot literally translate into spatially varying tumble rates for left- and right-moving particles since the time integral in Eq.(\[int\]) continues to be calculated; half the time a tumble leads to no change of direction, so the future dynamics of a particle does depend on its history prior to the last tumble event. (The same applies, [*mutatis mutandis*]{}, in higher dimensions.) As lucidly discussed in [@Kafri11], this becomes particularly important when determining exactly how the choice of kernel $K(t)$ influences chemotactic efficiency, both dynamically and in steady state [@strong; @PGG; @clark; @kafri; @Kafri11; @vergassola]. This question is not our focus here, and will not be considered further below. Rectification ============= Before moving on to address many-body physics, we consider one further intriguing property of bacterial motion that appears even at the level of a single particle. This is the phenomenon of rectification [@Austin1; @Austin2; @Austin3]. Working with bacteria in a two-dimensional layer, Austin and collaborators introduced a perforated wall made of asymmetric barriers (Figure 3).
Partly for hydrodynamic reasons (see Section \[hydro\]), a swimming bacterium that encounters a straight wall at an oblique angle tends (at least in this quasi-2D geometry) to continue swimming along the wall until either the wall ends, or the swimmer experiences its next tumble event. (Let’s call this the ‘wall hugging’ tendency.) Therefore a barrier as depicted in Figure 3 will act asymmetrically: it will funnel bacteria that approach from one side through the barrier, while causing those approaching from the other side to bounce off. Thus, if such a ‘funnel barrier’ divides a container in two, the bacterial motion is on average rectified, causing an unequal steady state particle density on the two sides of the wall [@Austin1; @Austin2]. ![Left: schematic effect of a funnel barrier. Note that the time reversal of either trajectory is highly improbable (it requires the particle to accidentally be aligned with the wall while still distant from it) breaking time reversibility. Right: fluorescent bacteria in a rectifying chamber, showing a steady state inequality in number density (from [@Austin1]).](figure3_replacement.pdf){width="110mm"} A well known rectification theorem [@rect] states that such asymmetry can only emerge if spatial symmetry breaking (provided by the wall) is accompanied by time-reversal asymmetry in the trajectories. The latter is provided by the wall-hugging tendency (Figure 3) [@wan; @JTEPL]. The necessity of such a mesoscopic violation of detailed balance can be confirmed by replacing the wall-hugging rule by an elastic collision law in direct simulations of the run-and-tumble dynamics; when this is done, rectification disappears [@JTEPL]. (This is true for both specular and ‘bounce-back’ collisions [@wan2].) The simplest model capturing the rectification effect is to replace the wall by a strip on which the tumble rates for left- and right-moving particles are different. 
(This is reasonable, since particles approaching the wall from the repelling side end up getting turned back, which is like having an extra tumble, whereas those coming in the other direction can glide through with relatively minor angular deflection.) This model can then be addressed using the analysis of spatially varying $\alpha$ outlined in Section \[independent\] above. For a single wall we then find from (\[alphass\]) that the density ratio between the two sides of the wall obeys $p_1/p_2 = \exp(-\Delta\phi/v)$ where $\Delta\phi$ is, in a 1D version of the problem, simply the integrated difference in tumble rates as one passes across the strip [@JTEPL]. In three dimensions, the expression for $\Delta\phi$ is more complicated; nonetheless, the steady-state density ratio should depend only on this local property of the wall, not on the length of the wall nor on the shapes and sizes of the regions that it separates. A corollary of this picture is that funnel gates could be used to create geometries where $\nabla \times {\mbox{\boldmath{$\alpha$}}}_1$ is nonzero, so that the steady state contains macroscopic currents and no mapping onto a thermal equilibrium system exists (Figure 4). This was not explored experimentally in [@Austin1; @Austin2] (though see [@Austin3]), but a directly related phenomenon, with exactly the same microscopic origin (the combination of a spatially asymmetric wall and time-irreversible swimming trajectories) has been reported subsequently. This comprises the unidirectional rotation of an asymmetrically saw-toothed rotor when immersed in a bath of bacterial swimmers [@rotorpaper1; @rotorpaper2; @rotorpaper3], see Figure 4. A somewhat different interaction between swimming bacteria and funnel-like obstacles is reported in [@goldstein1], wherein individual [*B. subtilis*]{} bacteria are shown to reverse direction on encountering a narrow constriction without turning of the cell body (i.e., without tumbling as such).
Unlike the much larger funnel gates used in [@Austin1; @Austin2] these constrictions have dimensions of order the cell diameter. To achieve rectification, the rate for this flagella-flip process would have to be different for approaching a constriction from opposite sides; this seems perfectly possible for the case of an asymmetric, funnel-shaped obstacle. ![(i) Funnel gate arrangement that would give a circulating current in steady state. (ii,iii) Experimentally observed angular rotation of an asymmetric cog in a bacterial bath (from [@rotorpaper1]).](figure4.pdf){width="110mm"} Hydrodynamic interactions {#hydro} ========================= So far we have considered the motion of independent bacteria, focusing primarily on their steady-state probability density $p_{ss}$ under various environmental conditions. We now turn to interactions, starting with the hydrodynamic force that is exerted on one particle in proportion to the velocity of another swimming nearby. Such hydrodynamic interactions (HIs) are in general too complex to treat analytically; however we address them first because [*hydrodynamic interactions have no consequences whatever for steady state densities in thermal equilibrium systems*]{}. (Such densities obey the Boltzmann distribution, which is oblivious to any HIs that may be present.) Accordingly, any effect of HIs on the steady-state behaviour of bacteria is directly attributable to the lack of detailed balance in their mesoscale dynamics. Examples of this principle are already established at one-particle level where HIs can arise between a single bacterium and a wall; in conjunction with the chiral character of the flagellar propulsion system, this causes wall-bound [*E. coli*]{} to have persistently spiralling trajectories [@chiral; @loewen]. 
This chiral motion, which represents an average steady state current in violation of detailed balance, has been proposed to explain the observed tendency for bacteria to swim upstream against the flow of fluid down a pipe [@urethritis], although a simpler, non-chiral mechanism, involving only the rotation of a swimmer’s axis by the shear flow itself, appears equally plausible [@RupertPRL]. Such upstream swimming tendencies may help explain the invasive capabilities of pathogenic bacteria in infecting the urinary tract [@catheter]. The swimming motions of bacteria also produce [*interparticle*]{} HIs, but these are relatively weak at large separations (and easily swamped by noise [@goldstein1a]). This is because swimming exerts a force dipole on the surrounding fluid, not a monopole as would be the case for a particle being dragged through the fluid by an external force. (Such monopole contributions are, of course, still present when swimming bacteria are additionally subjected to external forces such as gravity.) Hence the HI between swimmers falls off at large distances like $1/r^2$ instead of the $1/r$ result for motion induced by body forces [@lauga]. For bacteria, the force dipole is oriented to pull fluid in around the waist of the particle and eject it in both the forward and backward directions; combined with the orientational tendency of rodlike particles in shear flow, this ‘extensile’ behaviour creates a negative contribution to the shear stress [@sriram]. Thus a dilute aqueous bacterial suspension can have a viscosity less than that of water [@dilute]; at high concentration the viscosity might in principle vanish altogether in laminar flow [@suzanne1; @suzanne2]. 
The latter result is among many that were recently found from a collective hydrodynamic description of aligned bacterial fluids [@sriram; @kruse] whose further description, with that of related experiments [@goldstein2; @cisneros] and simulation studies [@shelley1; @shelley2; @pedley; @wolgemuth], lies beyond the scope of this report. Hydrodynamic interactions between swimmers also include important near-field terms, arising at separations comparable to the particle size. These depend in detail on the self-propulsion mechanism, and can lead to many intriguing phenomena ranging from relatively simple flock formation [@Llopis] to the phase-locking of nearby particles to form ‘synchronized swimming’ teams [@synchswim]. (Near-field effects also control the swim speed $v$, which therefore need have no relation to the amplitude of the extensile force dipole.) Perhaps unsurprisingly, this level of complexity offers serious challenges to computational researchers [@lauga; @pedley; @graham]. However, one avenue is to set up a minimal (far-field) numerical model of run-and-tumble swimmers in a continuous fluid medium, and use this to address the idealized physics problems discussed previously: sedimentation, and confinement in a harmonic trap. For sedimentation, one finds by this route that HIs have only limited effects [@RupertPRL]. The particle density $\rho_{ss}$ in steady state still decays exponentially with height (to numerical accuracy); however, there is some softening of the singularity associated with gravitational collapse when $v_s \to v$. This is understandable since any layer of collapsed particles will hydrodynamically set up a random stirring of the fluid that can allow at least some upward-pointing swimmers to briefly exceed the escape velocity. A much more drastic effect of HIs is found for particles confined in harmonic traps.
Here the current-free steady state of inverted probability density (with the maximum at the outermost edge of the trap $r^*$ rather than the centre), as computed previously for the single-particle case at small $\alpha$, ceases to exist when the number of particles in the trap is high. This is because the shell of outward-swimming particles at $r^*$, now coupled together by hydrodynamics, is mechanically unstable to fluctuations in local density (Figure 5). The final result is collapse of the shell into a dense swarm, which resides at some distance $r_s<r^*$ from the centre of the trap. This swarm is almost stationary, and must therefore transmit the external force on all its members (provided by the trapping force) directly to the surrounding fluid. This fluid must accordingly be in motion: our system of self-propelled particles has spontaneously self-assembled into a pump [@RupertPRL]. More generally, it seems likely that HIs are more disruptive to a steady state in which swimmers are arranged in a coherent pattern (as is the case for the trap at small $\alpha$) than when they are locally disoriented and swimming in random directions (as holds at large $\alpha$, and in sedimentation). ![Time series showing hydrodynamic instability of a shell of trapped swimmers in the low tumble rate regime. Swimmers are colour-coded yellow-to-magenta by local density; fluid velocity vectors are colour-coded green-to-blue by magnitude (scales top left, arbitrary units). The circle is the event horizon $r^*$ in the absence of hydrodynamics; the hydrodynamic simulation is initiated (top left) from a member of the steady-state ensemble in the absence of interactions, where the shell of particles at $r^*$ is visible. Black lines in the bottom right figure show trajectories of representative swimmers when a tumble causes them to leave the self-assembled, pump-like structure. Image courtesy of R. 
Nash; see [@RupertPRL] for a similar figure and full simulation details.](figure5.pdf){width="90mm"} Before closing this discussion of many-body sedimentation and trapping, a brief comment is warranted on the feasibility of testing such predictions experimentally. In sedimentation, the interesting regime is $v_s\sim v$; this sedimentation velocity is higher than for bacteria in terrestrial gravity but the required range should easily be achieved by centrifugation. The literature is short of sedimentation studies, though one has been done for synthetic swimmers (whose dynamics is, however, not run-and-tumble) [@bocquet]. For particles in traps, the interesting regime is when the trapping radius $r^*$ is comparable to the run length $v/\alpha$. This requires a very soft trap compared to those generally achieved by optical methods, although suitable machinery may now be available [@cambridge]. From the viewpoint of fundamental physics it would be very worthwhile to see such predictions tested in detail – while recognizing that they do not represent the kinds of question that most microbiologists would consider important. Density-dependent motility ========================== From now on we neglect hydrodynamic interactions and focus on systems in which run-and-tumble parameters such as $v$ and $\alpha$ vary in response to the local density of bacteria. Unlike all cases where these parameters are known in advance as a function of position (so that bacteria can be treated independently), this is a true many-body problem. Another class of problems, in which self-propelled bacteria also interact by direct colloidal interaction forces (such as the depletion interaction) is similarly interesting but not addressed here [@JTPRL; @Jana2]. We also do not review in detail related studies of activity-induced phase separation in non-tumbling bacteria [@peruani1; @peruani2] or active subcellular networks [@kruse00; @marchetti]. 
The first goal in addressing the many-body case is to derive an equation for the collective density field $\rho({\bf r}) = \sum_ig({\bf r}-{\bf r}_i)$ where the sum is over $N$ particles and $g$ is, in principle, a delta function. (For practical purposes we coarse-grain this spiky density by introducing a finite range to $g$.) A widespread procedure in the literature on noninteracting particles is to merely assert that $\rho = Np$ where $p$ is the one-particle probability density obeying (\[pdot\]). This is incorrect, since $p$ is a probability density, not the actual density of one particle (which remains a delta-function); and while $p$ evolves deterministically, the collective density $\rho({\bf r})$ as defined above does not. One must therefore either define a probability density in the $N$-body configuration space and derive a Fokker-Planck equation at that level [@biroli], or work with the Langevin equations (which are stochastic ordinary differential equations) for $N$ particles. The latter is more amenable to the handling of interactions: we can set up the Langevin equations with spatially varying run-and-tumble parameters and then allow this variation to occur through a functional dependence on the density field [@JTPRL]. This $\rho$-dependence passes smoothly from the $v,\alpha$ parameters to the one-body diffusivity $D([\rho],x)$ and drift velocity $V([\rho],x)$, which remain as previously defined in connection with (\[pdot\]). These known functionals of density then enter a many-body, functional Langevin equation for the collective particle dynamics. Omitting an unimportant self-density term [@JTPRL] this reads in 1D: $$\dot\rho = \left(-\rho V + D\rho' + (2D\rho)^{1/2}\Lambda\right)' \label{J_C}$$ where $\Lambda$ is a unit white noise. Clearly the final (noise) term goes missing if one simply asserts that $\rho = Np$ and then uses (\[pdot\]) [@Dean; @FrenchGuy]. 
![Construction of the effective free energy density $f(\rho)$ in the mapping from 1D run-and-tumble particles with local motility interactions onto a fluid of interacting Brownian particles. If $v(\rho)$ decreases rapidly enough (left) the resulting $f(\rho)$ has a negative curvature (spinodal) region (right) with the global equilibrium state comprising a coexistence of the binodal densities $\rho_1,\rho_2$. The condition for instability ($f''<0$) translates into the geometric construction shown on $v(\rho)$: draw a line from the origin to any point on the curve and reflect this line in the vertical axis. If the slope of $v(\rho)$ is less than the reflected line, the system is unstable. See [@JTPRL].](figure6.pdf){width="110mm"} The upshot of this formal procedure is to establish that the interacting run-and-tumble system can be mapped at large scales onto a set of interacting Brownian particles, with detailed balance, if and only if a functional ${\cal F}_{ex}[\rho]$ exists such that $$V([\rho],x)/D([\rho],x) = - [\delta {\cal F}_{ex}(\rho)/\delta\rho(x)]' \label{functional}$$ in which case, the system behaves like a fluid in which ${\cal F}_{ex}$ is the excess free energy. In general no such functional exists, but an exception is the case where $v(\rho)$ and $\alpha(\rho)$ are the same for both right- and left-moving particles, and depend on density in a purely local way. In that case one finds that the system is equivalent to a fluid whose free energy density (with $k_BT = 1$) is $$f(\rho) = \rho(\ln\rho-1)+ \int_0^\rho \ln v(u)du \label{ff}$$ On this basis we can conclude that when $v(\rho)$ is a sufficiently rapidly decreasing function of $\rho$, the local free energy density $f(\rho)$ of the equivalent thermodynamic system has negative curvature in an intermediate range of densities [@JTPRL] (Figure 6). 
At these densities, such a system is predicted to show a spinodal instability, separating into domains of two coexisting binodal densities $\rho_1$ and $\rho_2$, corresponding to a common tangent construction on $f$ (Figure 6). In 1D these domains coarsen to a scale that, for systems with detailed balance, is finite and set by the interfacial tension, which in turn depends on gradient terms in the free energy functional. Such terms are neglected when one assumes a purely local dependence of $v,\alpha$ on density, and since the exact mapping onto a detailed-balance system is so far established only in that limit, they could allow coarsening to continue indefinitely even in 1D. Whether or not that applies, the initial spinodal instability and breakup into coexisting domains is an unmistakable physical effect, and is seen numerically both in 1D [@JTPRL] and 2D (Figure 7) [@Alasdair]. In these numerical experiments, domains also form by a different but equally well known mechanism (nucleation and growth) in the density range that lies between the binodal and the spinodal. (This lies between dashed and solid vertical lines in the plot of $f(\rho)$ presented in Figure 6.) In this range the system is stable to local perturbations but unstable globally; to capture the required nucleation events, the noise term in (\[J\_C\]) is essential. ![A 2D run-and-tumble system undergoing motility-induced phase separation. Simulated via a lattice model ($200\times 200$ sites) as detailed in [@Alasdair], with local density (particles per site) colour-coded on scale at right. Image courtesy of A. Thompson.](figure7.pdf){width="65mm"} The physics behind such motility-driven phase separation is clear, and generic. We have seen earlier in Eq.(\[inverse\]) that run-and-tumble particles accumulate (increase $\rho$) wherever they slow down (reduce $v$). 
But if $v$ is a decreasing function of density $\rho$, this means they also slow down on encountering a region of higher than average density. This creates a positive feedback loop which, so long as $v(\rho)$ decreases fast enough, runs away to phase separation. There is no attractive interaction between our diffusing run-and-tumble particles, yet they behave exactly as if there was one. In the microbiological literature, formation of dense clusters from a uniform initial population is often encountered (and usually called ‘aggregation’ rather than phase separation). Certainly there are many situations where bacteria down-regulate their swimming activity at high density: for example this is fundamental to the formation, from planktonic swimmers in dilute suspension, of a biofilm [@biofilms]. A biofilm comprises a region with a high local density of bacteria that are immobilized on a wall or similar support. (In most biofilms, bacteria not only stop swimming but actually lose their flagella apparatus after a period of time.) Biofilms are ubiquitous, and generally unwanted; they arise in contexts ranging in seriousness from malodorous breath, via bacterial fouling of water supply pipes [@fouling], to lethal infections in patients with cardiac valve implants [@valves]. Biofilm formation generally involves chemical communication between individuals, but the effect of this may still be representable, at least in crude terms and during the initial stages, as a density-dependent swim speed $v(\rho)$. (A density dependent tumble rate $\alpha(\rho)$ or duration $\tau(\rho)$ has similar effects so long as the latter is finite.) 
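The instability criterion can be made concrete with a toy motility law. Taking $v(\rho) = v_0 e^{-\lambda\rho}$ (an assumed functional form, chosen purely for illustration), the free energy (\[ff\]) gives $f''(\rho) = 1/\rho + {\rm d}\ln v/{\rm d}\rho$, so the spinodal condition $f''<0$ becomes the geometric statement ${\rm d}\ln v/{\rm d}\ln\rho < -1$, which here reads $\lambda\rho > 1$:

```python
import numpy as np

v0, lam = 1.0, 2.0                 # toy motility law v(rho) = v0 * exp(-lam*rho)

def v(rho):
    return v0 * np.exp(-lam * rho)

def f2(rho, h=1e-5):
    # f''(rho) = 1/rho + d(ln v)/d(rho), from f = rho*(ln rho - 1) + int ln v
    dlnv = (np.log(v(rho + h)) - np.log(v(rho - h))) / (2.0 * h)
    return 1.0 / rho + dlnv

rho = np.linspace(0.05, 3.0, 1000)
unstable = f2(rho) < 0.0
print(rho[unstable].min())   # spinodal onset, ~1/lam = 0.5 for this model
```

Densities with $f''<0$ are spinodally unstable; the coexisting densities $\rho_1,\rho_2$ then follow from the common-tangent construction on $f$, as in Figure 6.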
Equivalent phase-separation physics could equally well occur by non-biochemical means, such as a simple crowding effect causing a reduced propulsive efficiency, hydrodynamically driven accumulation of bacteria near surfaces [@RupertPRL], or by an intermediate mechanism such as secretion of a polysaccharide that increases the local fluid viscosity (which is something that biofilms also can do [@exopoly]). Population dynamics =================== So far we have considered a range of similarities and differences between run-and-tumble motion and the Brownian diffusion of colloidal particles. However, bacteria have a further trick up their collective sleeve that colloids do not, namely self-replication. Alongside the transport of particles by a diffusion-drift process (described in general by the evolution equation (\[J\_C\])), the birth and death of bacteria leads to changes in particle density that are (unlike (\[J\_C\])) not the divergence of a current. A simple model of the population dynamics is the logistic equation given in Eq.(\[logistic\]), and we stick to this model here while accepting its limitations. This equation describes a population that evolves, from above or below, towards a target density $\rho_0$ which for simplicity is presumed constant, and set by externally controlled environmental factors such as nutrient levels. This population dynamics can now be phenomenologically coupled to an equation for the phase separation dynamics in a system whose $v(\rho)$ decreases fast enough to make it unstable. (See [@Martin] for a similar example that instead involves sedimentation.) For simplicity we neglect dynamical noise (though keep it in the initial conditions) and assume that mildly nonlocal functionals $V([\rho],{\bf r})$ and $D([\rho],{\bf r})$ in (\[J\_C\]) (and in its higher dimensional analogues) generate a $\nabla^3\rho$ term in the current. (This is the leading order allowed by symmetry.)
The remaining local dependences $V(\rho)$ and $D(\rho)$ then together define a collective diffusion constant $D_c(\rho)$ (obeying $D_c(\rho)= \rho D \partial^2f/\partial\rho^2$ under the conditions where (\[functional\],\[ff\]) apply) in terms of which we have [@PNAS] $$\dot\rho = \nabla(D_c\nabla\rho) - \kappa \nabla^4\rho + A\rho(1-\rho/\rho_0)\label{PNAS}$$ Throughout the range of spinodal instability, $D_c(\rho)$ is negative: accordingly small fluctuations in the initial density are amplified by the first term, and damped at short length scales by the second. On their own, as previously described, these terms would lead to phase separation into binodal phases of density $\rho_1$ and $\rho_2$. (The term in $\kappa$ effectively defines an interfacial tension for these domains and this influences their growth rate in the late stages, once interfaces become sharp.) However, this process of indefinite coarsening is clearly impossible, in any dimension, if the logistic term has a target density $\rho_0$ obeying $\rho_1<\rho_0<\rho_2$. This admits only one uniform steady state ($\rho = \rho_0$), so the system cannot evolve into large uniform patches with $\rho = \rho_1$ and $\rho = \rho_2$ as it would otherwise do. Instead, the outcome is a ‘microphase separation’ whereby the spinodal pattern ceases to coarsen beyond a specific length scale. This length scale is fixed by the balance of the coarsening tendency against the fact that regions of low density are constantly producing new particles which must then diffuse into regions of high density, where they die off. A steady state domain pattern is then possible, which however does not map onto any equilibrium system as it contains mesoscopic particle currents from dilute to dense regions [@PNAS]. The model just presented is rather general, and broadly agnostic as to the mechanism whereby $D_c$ has become negative in the spinodal region. 
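The selection of a finite length scale can be seen from a linear-stability sketch of (\[PNAS\]). Perturbing the uniform state $\rho_0$ with a mode $\sim e^{ikx+\sigma t}$ (the logistic term linearises to $-A$) gives $\sigma(k) = -D_c k^2 - \kappa k^4 - A$, so with $D_c<0$ only a finite band of wavenumbers grows, and only if $D_c^2 > 4\kappa A$. The numerical values below are illustrative assumptions:

```python
import numpy as np

Dc, kappa, A = -1.0, 0.1, 1.0    # D_c < 0 inside the spinodal; illustrative values

def sigma(k):
    # growth rate of a perturbation rho0 + eps*exp(i k x + sigma t)
    return -Dc * k**2 - kappa * k**4 - A

k = np.linspace(0.0, 5.0, 100001)
s = sigma(k)
k_star = k[s.argmax()]                       # fastest-growing wavenumber
print(k_star, np.sqrt(-Dc / (2 * kappa)))    # agrees with sqrt(|Dc| / 2 kappa)
print(s.max(), Dc**2 / (4 * kappa) - A)      # peak growth rate; positive here
print(sigma(0.0))                            # = -A: birth/death stabilises k -> 0
```

The pattern wavelength $2\pi/k^*$ is then the scale at which coarsening is arrested; for $D_c^2 < 4\kappa A$ no mode grows and the uniform state survives.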
All that is needed is a sufficient tendency for bacteria to move towards regions where they are already more numerous. As already explained, a sufficiently decreasing $v(\rho)$ can achieve this, but so can other mechanisms including conventional colloidal attractions [@Jana2], or a quorum sensing response [@quorum]. Chemotaxis could also have the required effect, especially if the chemoattractant is produced by bacteria themselves, and has a short lifetime and/or high diffusivity. This combination of factors would create a nonlocal but near-instantaneous functional dependence of $V=\chi\nabla c$ on the bacterial density $\rho$, from which (\[PNAS\]) then follows in the weak gradient limit. Chemotactic patterns without chemotaxis ======================================= The above remarks are pertinent to a classic set of microbiology experiments on multi-ring and spotted pattern formation in bacterial colonies inoculated from a point source (Figure 8) [@rings]. These patterns have long been taken by various workers as evidence for chemotaxis, and can indeed be reproduced by a detailed multi-parameter model in which bacteria explicitly secrete a chemoattractant in response to food (or another ‘stimulant’, which itself diffuses) and then navigate up the gradient of the attractant [@murray; @budrene; @tyson]. One piece of evidence that these are indeed ‘chemotactic patterns’ is that if the ability to secrete ‘chemoattractant’ is disabled, the patterns go away [@confirm]. However, this finding does not rule out a chemically mediated but non-chemotactic response in which bacteria change their behaviour in response to the mere presence of the relevant chemical (quorum sensing [@quorum]) rather than swimming up its gradient (true chemotaxis). Moreover, whatever the actual mechanism in the organisms studied so far (primarily [*E. coli*]{} and [*S.
typhimurium*]{}) it is reasonable for a physicist to ask how general these patterns are, and whether their origin can be understood in simple mechanistic terms. ![Left frames: Simulation of the late stage behaviour of a model coupling population to phase separation, at two different sets of parameter values. (Images courtesy D. Marenduzzo; for a full description see [@PNAS].) Right: Experimental images of colony growth in [*S. Typhimurium*]{} (from [@budrene].)](figure8.pdf){width="85mm"} Given the preceding discussion of the coupling between spinodal decomposition and logistic population growth, the reader should not be surprised to learn that these two ingredients alone, by creating a mechanism for microphase separation on a definite length scale, are sufficient to explain the broad phenomenology of the multi-ring and spotted patterns seen in [*E. coli*]{} and [*S. typhimurium*]{} [@PNAS]. (On the other hand, a separate set of patterns, seen in [*Bacillus subtilis*]{} and involving intricate feathery whorls, may require a quite different explanation [@benjacob].) Some patterns found by solving (\[PNAS\]) numerically are compared with the experimental ones in Figure 8. Several refinements may be necessary before the links between such a simple generic model and experimental microbiology can be fully established [@Brenner]. Nonetheless, the basic mechanistic picture is an appealing one: specifically it suggests that such patterns can in principle arise without chemotaxis and therefore – if seen in future using some other organism – they should not be taken as diagnostic of its presence. Chemotaxis without chemotactic patterns {#Chemo2} ======================================= A somewhat different sort of ‘chemotactic pattern’ is often seen in bacterial colonies initiated from a point-source inoculum, under different growing conditions from the multi-ring and/or spotted patterns described above.
This pattern comprises a single pronounced ring of density which slowly progresses away from the origin leaving a lower density region in its wake [@wolfe]. The pattern is easily understood in organisms that perform chemotaxis primarily in response to a food gradient (not a chemoattractant or repellent emitted by other individuals). As food is depleted at the centre of the colony, individuals there move outwards towards the growth front where food remains plentiful; here they can freely reproduce. This effect is generally so robust that it can be used as a rough-and-ready assay of whether a particular bacterial strain is chemotactic or not [@adler]. Notably though, in most of these assay experiments, the bacterial inoculum, which is placed on the centre of an agar gel, first penetrates the gel before the colony spreads laterally through it in a quasi-2D fashion. Moreover, in microbiological assay work the concentration of agar in the gel is not usually chosen consistently but lies anywhere within a broad window defined as ‘soft agar’. When this range was explored systematically, however, it was recently found experimentally that the chemotactic ring disappears if the concentration of agar in the soft gel becomes too large. For [*E. coli*]{} this happens within, rather than beyond, the range of gel densities historically used in assay experiments [@Otti]. Hence unless care is taken to use a gel of sufficiently low density, there is a risk of false negative assay results, in which the organism does have a chemotactic phenotype, but no chemotactic pattern is seen. A semi-quantitative dynamical model of this effect is presented in [@Otti], but the main reason behind it is already indicated by the approximate results for steady-state chemotaxis, Eqs.(\[beta\],\[chemoss\]) above.
These equations show that the ability of an organism to develop a nonuniform population density in response to a chemical gradient depends crucially on the value of $\beta$: if this is too small, chemotaxis is ineffective. As seen from (\[beta\]), $\beta$ depends on $\alpha_0$ and $K(t)$. The properties of the response kernel $K(t)$ are genetically determined, and unlikely to change during the course of an inoculation experiment. (This lasts hours or days, long enough for reproduction but not evolution.) So the key element in determining $\beta$ is $\alpha_0$, the tumble rate arising in the absence of any chemical gradient. In an agar gel, one expects bacteria to change orientation not only by intrinsic tumbling, but by collisions with the gel matrix. To a reasonable approximation, the latter can be represented by an increased intrinsic tumble rate $\alpha_0(C)$, which is now a function of agar concentration $C$. It is then clear from (\[beta\]) that the chemotactic efficiency will collapse if $\alpha_0(C)$ becomes too much larger than $\alpha_0(0)$. This is because the bilobed kernel $K(t)$ in (\[int\]) involves a genetically fixed timescale, $\tau_c\sim1/\alpha_0(0)$, that maximizes the chemical gradient information extractable from a straight run in the organism’s normal environment. If the run length is too much reduced by gel collisions, the value of the integral is still calculated by the on-board biochemical circuit, but no longer delivers any useful information about the chemical gradient. (All contributions from the integral at time scales beyond $1/\alpha_0(C)$ become randomized by the collisional tumbles.) Although the quantitative form for $\beta$ that emerges in the high collision regime differs from (\[beta\]) by an extra factor of $\alpha_0(0)/\alpha_0(C)$ [@Otti], the result is qualitatively the same: a severe loss of chemotactic performance at high gel densities. 
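This washing-out of the kernel signal by collisional tumbling can be illustrated with a minimal 1D run-and-tumble simulation. This is a sketch only, not the model of [@Otti]: the bilobed kernel shape, the response amplitude `eps`, the gradient and all rates are illustrative assumptions. The kernel timescale `tau` stays fixed (it is genetically determined) while the baseline tumble rate `alpha0` is raised to mimic gel collisions.

```python
import numpy as np

def chemotactic_drift(alpha0, n=300, T=150.0, dt=0.05, v=1.0, g=1.0,
                      tau=1.0, eps=0.5, seed=0):
    """Mean drift speed of 1D run-and-tumble particles in a linear
    attractant profile c(x) = g*x.  The tumble rate is modulated through
    a bilobed, zero-integral response kernel K(s) of fixed timescale tau:
        alpha(t) = alpha0 * (1 - eps * int_0^inf K(s) c(x(t-s)) ds)."""
    rng = np.random.default_rng(seed)
    nk = int(6 * tau / dt)                       # kernel support ~ 6 tau
    s = (np.arange(nk) + 0.5) * dt
    K = (s / tau**2) * np.exp(-s / tau) * (1.0 - s / (2.0 * tau))
    K -= K.mean()                                # enforce zero integral exactly
    taps = np.arange(nk)
    x = np.zeros(n)
    dirn = rng.choice([-1.0, 1.0], size=n)
    hist = np.zeros((n, nk))                     # ring buffer of past c values
    ptr = 0
    for _ in range(int(T / dt)):
        x += v * dirn * dt
        hist[:, ptr] = g * x
        ptr = (ptr + 1) % nk
        idx = (ptr - 1 - taps) % nk              # most recent sample first
        signal = hist[:, idx] @ K * dt           # kernel-weighted memory of c
        alpha = np.clip(alpha0 * (1.0 - eps * signal), 0.01, None)
        tumble = rng.random(n) < alpha * dt
        dirn[tumble] = rng.choice([-1.0, 1.0], size=int(tumble.sum()))
    return x.mean() / T

# Runs last ~tau: the two kernel lobes sample genuinely different c values.
drift_free = chemotactic_drift(alpha0=1.0)
# Frequent collisional tumbles: the run decorrelates before the kernel can
# compare past and present, so the integral carries little gradient information.
drift_gel = chemotactic_drift(alpha0=5.0)
```

With these (assumed) parameters the up-gradient drift is substantial in the free case and collapses when the tumble rate is raised five-fold at fixed `tau`, in line with the qualitative argument above.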
Recall that in phenomenological models of chemotaxis, $\beta$ sets the value of $\chi/D$ for use when writing $V = \chi\nabla c$ in the diffusion-drift equation (\[pdot\]). Accordingly the loss of chemotactic efficiency described above in dense porous media is not captured in any model that assumes $\chi/D$ to be independent of environmental factors. Indeed, some such models are established by analogy with isothermal gas dynamics [@barton; @ford]. This inadvertently creates a mapping onto a detailed balance system; any possibility of circulating currents in steady state is thereby eliminated. Yet such currents can certainly be expected generically in nonuniform gels (whenever $\nabla\beta\times\nabla c \neq 0$). These concerns are relevant not only to agar gels but to many other situations where bacterial chemotaxis is performed within a porous matrix, such as sand-beds and other filtering elements in wastewater treatment plants [@barton; @ford]: in practice such porous media are often nonuniform. Even in the case of a uniform agar gel, as detailed above and in [@Otti], an implicit assumption that $\chi/D$ is independent of gel density can, by failing to account for the true statistical mechanics of bacterial motility in complex environments, give qualitatively misleading predictions.

Does microbiology need statistical physics?
===========================================

Emboldened by some interesting recent opinion pieces on the wider role of physicists in biology [@newman; @wolgemuth2], I will here give a personal answer to the above-stated question. Alongside the selection of bacterial motility problems outlined above, statistical physics has recently been used to address many other questions in microbiology ranging from molecular genetics to multispecies ecology. In many of these cases, a fully stochastic description is essential.
This applies especially when one is less interested in the ‘average’ behaviour of a system than in rare events (a genetic switch being flipped [@geneswitch]; fixation of a gene [@blythe]) or in dynamic phase transitions [@bartek]. Indeed the importance of stochasticity is fully accepted in most branches of population biology, and links between that field and statistical physics are relatively secure and well developed [@blythe2]. On the other hand, treatments of stochasticity are rarer in models addressing chemotaxis, pattern formation in colonies, and similar areas of microbiological modelling [@murray]. Above we have mentioned at least one instance where neglect of noise gives qualitatively wrong physics: it fails to predict the nucleation regime of motility-induced phase separation. However, there may be other such instances: omitting dynamical noise is tantamount to doing mean-field theory, and is thus often likely to be unreliable, for instance whenever one is close to a continuous phase transition. Stochasticity apart, fundamentally different insights are offered by statistical physics and by more traditional approaches to the modelling of microbiological dynamics. Statistical physics approaches usually emphasise the search for simplicity and universality of mechanism, deliberately disregarding details in the hope of finding these to be inessential. To an experimental biologist who has carefully acquired lots of data, the decision of a theoretical physicist to deliberately ignore most of it can be baffling. A more elaborate model that makes fuller use of what is known may appear preferable, even if that model also requires additional unknown parameters (whose presence might limit its falsifiability). Whatever modelling strategy one uses, approximations always remain necessary, and if unguided by statistical physics, these can have unintended consequences.
We have met one example: in modelling chemotaxis, setting the ratio $\chi/D = \beta$ to be a fixed constant may look like a harmless simplification, but by silently imposing detailed balance, this eliminates the possibility of steady-state circulating chemotactic currents, and all that might follow dynamically from them. It also precludes any explanation of the observed dependence of chemotactic assay behaviour on gel density [@Otti]. This last instance provides a clear example of how a statistical physics description is relevant to an established microbiological experiment – the chemotactic assay – and provides insight beyond models used previously (which assumed $\chi/D=\beta$). A second danger, especially for detailed models with many fit parameters, is to assume that a good fit to the data offers strong evidence for the chosen model. Bayes’ theorem, which provides a quantitative framework for the avoidance of over-fitting [@mackay], could be more widely used to guard against this danger. However, even this offers limited help in deciding between a complicated model and a simple one: Bayesian analysis normally assumes that the data represents a true model (that one is trying to find) plus some experimental noise. As data accumulates, simple models are the first to be falsified. Yet a model that clearly does not fit the data, but nearly does so despite its gross simplifications, can provide crucial mechanistic insights. Note that this applies even if a more complicated model already exists and fits the data better. Ultimately, choosing between complicated and simple models is a matter of individual preference, since they serve different goals. However, the understanding of generic mechanistic principles is a goal of equal importance to that of describing or even predicting experimental outcomes, and mechanistic insight can often be gleaned from the successes, and equally from the failures, of simplified models.
To serve this purpose such models should at least be consistent (both internally, and with physical law) and based on clear scientific precepts. The generation of such models, particularly where stochastic dynamics is required, is a major strength of the statistical physics approach. Some scientists would argue that microbiology can survive purely as an experimentally driven empirical science, without quantitative theory of any kind. If this is true, then clearly microbiology does not need statistical physics. On the other hand, if microbiology needs theory at all, as most would accept, then I believe it does need the statistical physics approach alongside the more traditional strategies that have more generally prevailed until now.

Acknowledgements {#acknowledgements .unnumbered}
================

The author thanks Ronojoy Adhikari, Rosalind Allen, Richard Blythe, Otti Croze, Martin Evans, Davide Marenduzzo, Ignacio Pagonabarraga, Wilson Poon, Julien Tailleur and Alasdair Thompson for many discussions and collaborations pertinent to the preparation of this article. He thanks Otti Croze and Julien Tailleur for a detailed reading of the manuscript. The author is funded by a Royal Society Research Professorship and EPSRC Grant EP/EO30173.

References {#references .unnumbered}
==========

Berg H C 2003 [*E. coli in Motion*]{} (Springer, New York)
Schwarz-Linek J [*et al*]{} 2010 Polymer-induced phase separation in Escherichia coli suspensions [*Soft Matter*]{} [**6**]{} 4540-4549
Eisenbach M 2004 [*Chemotaxis*]{} (Imperial College Press, London)
Segall J E, Block S M and Berg H C 1986 Temporal correlations in bacterial chemotaxis [*Proc. Nat. Acad. Sci. USA*]{} [**83**]{} 8987-8991
Block S M, Segall J E and Berg H C 1982 Impulse responses in bacterial chemotaxis [*Cell*]{} [**31**]{} 215-226
Strong S P, Freedman B, Bialek W and Koberle R 1998 Adaptation and optimal chemotactic strategy for E. coli [*Phys. Rev. E*]{} [**57**]{} 4604-4617
de Gennes P-G 2004 Chemotaxis and the role of internal delays [*Eur. Biophys. J.*]{} [**33**]{} 691-693
Clark D A and Grant L C 2005 The bacterial chemotactic response reflects a compromise between transient and steady state behavior [*Proc. Nat. Acad. Sci. USA*]{} [**102**]{} 9150-9155
Kafri Y and da Silveira R A 2008 Steady-state chemotactic response of E. coli [*Phys. Rev. Lett.*]{} [**100**]{} 238101
Miller M B and Bassler B L 2001 Quorum sensing in bacteria [*Ann. Rev. Microbiol.*]{} [**55**]{} 165-199
Hall-Stoodley L, Costerton J W and Stoodley P 2004 Bacterial biofilms: From the natural environment to infectious diseases [*Nat. Rev. Microbiol.*]{} [**2**]{} 95-108
Murray J D 2003 [*Mathematical Biology, vol. II: Spatial Models and Biomedical Applications*]{} Third Ed. (Springer, New York)
Newman T J 2011 Life and death in biophysics [*Phys. Biol.*]{} [**8**]{} 010201
Schnitzer M J, Block S M, Berg H C and Purcell E M 1990 Strategies for chemotaxis [*Symp. Soc. Gen. Microbiol.*]{} [**46**]{} 15-33
Schnitzer M J 1993 Theory of continuum random-walks and application to chemotaxis [*Phys. Rev. E*]{} [**48**]{} 2553-2568
Tailleur J and Cates M E 2008 Statistical mechanics of interacting run-and-tumble bacteria [*Phys. Rev. Lett.*]{} [**100**]{} 218103
Tailleur J and Cates M E 2009 Sedimentation, trapping and rectification of dilute bacteria [*EPL*]{} [**86**]{} 60002
Haw M D 2002 Colloidal suspensions, Brownian motion, molecular reality: a short history [*J. Phys. Cond. Mat.*]{} [**14**]{} 7769-7779
Wilson L G [*et al*]{} 2011 Differential dynamic microscopy of bacterial motility [*Phys. Rev. Lett.*]{} [**106**]{} 018101
Hillen T and Othmer H G 2000 The diffusion limit of transport equations derived from velocity-jump processes [*Siam J. App. Math.*]{} [**61**]{} 751-775
Erban R and Othmer H G 2005 From individual to collective behavior in bacterial chemotaxis [*Siam J. App. Math.*]{} [**65**]{} 361-391
Rivero M A, Tranquillo R T, Buettner H M and Lauffenburger D A 1989 Transport models for chemotactic cell populations based on individual cell behavior [*Chem. Eng. Sci.*]{} [**44**]{} 2881-2897
Alon U, Surette M G, Barkai N and Leibler S 1999 Robustness in bacterial chemotaxis [*Nature*]{} [**397**]{} 168-171
Croze O A, Ferguson G P, Cates M E and Poon W C K 2011 Migration of chemotactic bacteria in soft agar: role of gel concentration [*Biophys. J.*]{} [**101**]{} 525-534
Chatterjee S, da Silveira R A and Kafri Y 2011 Chemotaxis when bacteria remember: Drift versus diffusion, arXiv:1103.5355
Celani A and Vergassola M 2010 Bacterial strategies for chemotactic response [*Proc. Nat. Acad. Sci. USA*]{} [**107**]{} 1391-1396
Galajda P, Keymer J, Chaikin P and Austin R 2007 A wall of funnels concentrates swimming bacteria [*J. Bacteriol.*]{} [**189**]{} 1033
Galajda P [*et al*]{} 2008 Funnel ratchets in biology at low Reynolds number: choanotaxis [*J. Mod. Optics*]{} [**55**]{} 3413-3422
Lambert G, Liao D and Austin R H 2010 Collective escape of chemotactic swimmers through microscopic ratchets [*Phys. Rev. Lett.*]{} [**104**]{} 168102
Prost J, Chauwin J F, Peliti L and Ajdari A 1994 Asymmetric pumping of particles [*Phys. Rev. Lett.*]{} [**72**]{} 2652-2655
Wan M B [*et al*]{} 2008 Rectification of swimming bacteria and self-driven particle systems by arrays of asymmetric barriers [*Phys. Rev. Lett.*]{} [**101**]{} 181802
Olson Reichhardt C J [*et al*]{} 2011 Active matter on asymmetric substrates, arXiv:1107.4124
Di Leonardo R [*et al*]{} 2010 Bacterial ratchet motors [*Proc. Nat. Acad. Sci. USA*]{} [**107**]{} 9541-9545
Angelani L, Di Leonardo R and Ruocco G 2009 Self-starting micromotors in a bacterial bath [*Phys. Rev. Lett.*]{} [**102**]{} 048104
Sokolov A, Apodaca M M, Grzybowski B A and Aranson I S 2010 Swimming bacteria power microscopic gears [*Proc. Nat. Acad. Sci. USA*]{} [**107**]{} 969-974
Cisneros L, Dombrowski C, Goldstein R E and Kessler J O 2006 Reversal of bacterial locomotion at an obstacle [*Phys. Rev. E*]{} [**73**]{} 030901(R)
DiLuzio W R [*et al*]{} 2005 Escherichia coli swim on the right-hand side [*Nature*]{} [**435**]{} 1271-1274
van Teeffelen S and Loewen H 2008 Dynamics of a Brownian circle swimmer [*Phys. Rev. E*]{} [**78**]{} 020101
Hill J, Kalkanci O, McMurray J L and Koser H 2007 Hydrodynamic surface interactions enable Escherichia coli to seek efficient routes to swim upstream [*Phys. Rev. Lett.*]{} [**98**]{} 068101
Nash R W, Adhikari R, Tailleur J and Cates M E 2010 Run-and-tumble particles with hydrodynamics: Sedimentation, trapping and upstream swimming [*Phys. Rev. Lett.*]{} [**104**]{} 258101
Nicolle L E 2005 Catheter-related urinary tract infection [*Drugs & Aging*]{} [**22**]{} 627-639
Drescher K [*et al.*]{} 2011 Fluid dynamics and noise in bacterial cell-cell and cell-surface scattering [*Proc. Nat. Acad. Sci. USA*]{} [**108**]{} 10940-10945
Lauga E and Powers T R 2009 The hydrodynamics of swimming microorganisms [*Repts. Prog. Phys.*]{} [**72**]{} 096601
Hatwalne Y, Ramaswamy S, Rao M and Simha R A 2004 Rheology of active-particle suspensions [*Phys. Rev. Lett.*]{} [**92**]{} 118101
Sokolov A and Aranson I S 2009 Reduction of viscosity in suspension of swimming bacteria [*Phys. Rev. Lett.*]{} [**103**]{} 148101
Cates M E [*et al*]{} 2008 Shearing active gels close to the isotropic-nematic transition [*Phys. Rev. Lett.*]{} [**101**]{} 068102
Fielding S M, Marenduzzo D and Cates M E 2011 Nonlinear dynamics and rheology of active fluids: Simulations in two dimensions [*Phys. Rev. E*]{} [**83**]{} 041910
Kruse K [*et al*]{} 2004 Asters, vortices and rotating spirals in active gels of polar filaments [*Phys. Rev. Lett.*]{} [**92**]{} 078101
Sokolov A, Aranson I S, Kessler J O and Goldstein R E 2007 Concentration dependence of the collective dynamics of swimming bacteria [*Phys. Rev. Lett.*]{} [**98**]{} 158102
Cisneros L H [*et al*]{} 2007 Fluid dynamics of self-propelled microorganisms, from individuals to concentrated populations [*Expts. in Fluids*]{} [**43**]{} 737-753
Saintillan D and Shelley M J 2007 Orientational order and instabilities in suspensions of self-locomoting rods [*Phys. Rev. Lett.*]{} [**99**]{} 058102
Saintillan D and Shelley M J 2008 Instabilities and pattern formation in active particle suspensions: Kinetic theory and continuum simulations [*Phys. Rev. Lett.*]{} [**100**]{} 178103
Ishikawa T and Pedley T J 2008 Coherent structures in monolayers of swimming particles [*Phys. Rev. Lett.*]{} [**100**]{} 088103
Wolgemuth C W 2008 Collective swimming and the dynamics of bacterial turbulence [*Biophys. J.*]{} [**95**]{} 1564-1574
Llopis I and Pagonabarraga I 2006 Dynamic regimes of hydrodynamically coupled self-propelling particles [*EPL*]{} [**75**]{} 999-1005
Golestanian R, Yeomans J M and Uchida N 2011 Hydrodynamic synchronization at low Reynolds number [*Soft Matter*]{} [**7**]{} 3074-3082
Underhill P T, Hernandez-Ortiz J P and Graham M D 2008 Diffusion and spatial correlations in suspensions of swimming particles [*Phys. Rev. Lett.*]{} [**100**]{} 248101
Palacci J, Cottin-Bizonne C, Ybert C and Bocquet L 2010 Sedimentation and effective temperature of active colloids [*Phys. Rev. Lett.*]{} [**105**]{} 088304
Guck J [*et al*]{} 2001 The optical stretcher: A novel laser tool to micromanipulate cells [*Biophys. J.*]{} [**81**]{} 767-784
Schwarz-Linek J [*et al*]{} 2010 Polymer-induced phase separation in suspensions of bacteria [*EPL*]{} [**89**]{} 68003
Peruani F, Deutsch A and Baer M 2006 Nonequilibrium clustering of self-propelled particles [*Phys. Rev. E*]{} [**74**]{} 030904(R)
Peruani F, Klauss T, Deutsch A and Voss-Boehme A 2010 Traffic jams, gliders and bands in the quest for collective motion of self-propelled particles [*Phys. Rev. Lett.*]{} [**106**]{} 128101
Kruse K and Juelicher F 2000 Actively contracting bundles of polar filaments [*Phys. Rev. Lett.*]{} [**85**]{} 1778-1781
Ahmadi A, Liverpool T B and Marchetti M C 2005 Nematic and polar order in active filament solutions [*Phys. Rev. E*]{} [**72**]{} 060901(R)
Lefevre A and Biroli G 2007 Dynamics of interacting particle systems: stochastic process and field theory [*J. Stat. Mech.*]{} P07024
Dean D S 1996 Langevin equation for the density of a system of interacting Langevin processes [*J. Phys. A*]{} [**29**]{} L613-L617
Chavanis P-H 2010 A stochastic Keller-Segel model of chemotaxis [*Commun. Nonlin. Sci. Numer. Simul.*]{} [**15**]{} 60-70
Thompson A G, Tailleur J, Cates M E and Blythe R A 2011 Lattice models of nonequilibrium bacterial dynamics [*J. Stat. Mech.*]{} P02029
Flemming H C 2002 Biofouling in water systems – cases, causes, and countermeasures [*App. Microbiol. and Biotech.*]{} [**59**]{} 629-640
Costerton J W, Montanaro L and Arciola C R 2005 Biofilm in implant infections: its production and regulation [*Int. J. Artificial Organs*]{} [**28**]{} 1062-1068
Sutherland I W 2001 Biofilm exopolysaccharides: a strong and sticky framework [*Microbiol. UK*]{} [**147**]{} 3-9
Barrett-Freeman C, Evans M R, Marenduzzo D and Poon W C K [*Phys. Rev. Lett.*]{} [**101**]{} 100602
Cates M E, Marenduzzo D, Pagonabarraga I and Tailleur J 2010 Arrested phase separation in reproducing bacteria creates a generic route to pattern formation [*Proc. Nat. Acad. Sci. USA*]{} [**107**]{} 11715-11720
Budrene E O and Berg H C 1991 Complex patterns formed by motile cells of Escherichia coli [*Nature*]{} [**349**]{} 630-633
Woodward D E [*et al*]{} 1995 Spatio-temporal patterns generated by Salmonella typhimurium [*Biophys. J.*]{} [**68**]{} 2181-2189
Tyson R, Lubkin S R and Murray J D 1999 A minimal mechanism for bacterial pattern formation [*Proc. R. Soc. Lond. Ser. B*]{} [**266**]{} 299-304
Budrene E O and Berg H C 1995 Dynamics of formation of symmetrical patterns by chemotactic bacteria [*Nature*]{} [**376**]{} 49-53
Ben-Jacob E, Cohen I and Levine H 2000 Cooperative self-organization of microorganisms [*Adv. Phys.*]{} [**49**]{} 395-554
Brenner M P 2010 Chemotactic patterns without chemotaxis [*Proc. Nat. Acad. Sci. USA*]{} [**107**]{} 11653-11654
Wolfe A J and Berg H C 1989 Migration of bacteria in semisolid agar [*Proc. Nat. Acad. Sci. USA*]{} [**86**]{} 6973-6977
Adler J 1966 Chemotaxis in bacteria [*Science*]{} [**153**]{} 708-716
Barton J W and Ford R M 1997 Mathematical model for characterization of bacterial migration through sand cores [*Biotech. Bioeng.*]{} [**53**]{} 487-496
Ford R M and Harvey R W 2007 Role of chemotaxis in the transport of bacteria through saturated porous media [*Adv. Water Resour.*]{} [**30**]{} 1608-1617
Visco P, Allen R J and Evans M R 2008 Exact solution of a model DNA-inversion genetic switch with orientational control [*Phys. Rev. Lett.*]{} [**101**]{} 118104
Baxter G J, Blythe R A and McKane A J 2008 Fixation and consensus times on a network: A unified approach [*Phys. Rev. Lett.*]{} [**101**]{} 258701
Waclaw B, Allen R J and Evans M R 2010 Dynamical phase transition in a model for evolution with migration [*Phys. Rev. Lett.*]{} [**105**]{} 268101
Blythe R A and McKane A J 2007 Stochastic models of evolution in genetics, ecology and linguistics [*J. Stat. Mech.*]{} P07018
MacKay D J C 2003 [*Information Theory, Inference and Learning Algorithms*]{} (Cambridge University Press, Cambridge)
Wolgemuth C W 2011 Does cell biology need physicists? [*Physics*]{} [**4**]{} 4
--- abstract: | The ESO Nearby Abell Cluster Survey (the ENACS) has yielded 5634 redshifts for galaxies in the directions of 107 rich, Southern clusters selected from the ACO catalogue (Abell et al. 1989). By combining these data with another 1000 redshifts from the literature, of galaxies in 37 clusters, we construct a volume-limited sample of 128 $R_{\rm ACO} \geq 1$ clusters in a solid angle of 2.55 sr centered on the South Galactic Pole, out to a redshift $z=0.1$. For a subset of 80 of these clusters we can calculate a reliable velocity dispersion, based on at least 10 (but very often between 30 and 150) redshifts. We deal with the main observational problem that hampers an unambiguous interpretation of the distribution of cluster velocity dispersions, namely the contamination by fore- and background galaxies. We also discuss in detail the completeness of the cluster samples for which we derive the distribution of cluster velocity dispersions. We find that a cluster sample which is complete in terms of the field-corrected richness count given in the ACO catalogue gives a result that is essentially identical to that based on a smaller and more conservative sample which is complete in terms of an intrinsic richness count that has been corrected for superposition effects. We find that the large apparent spread in the relation between velocity dispersion and richness count (based either on visual inspection or on machine counts) must be largely intrinsic; i.e. this spread is not primarily due to measurement uncertainties. One of the consequences of the (very) broad relation between cluster richness and velocity dispersion is that all samples of clusters that are defined complete with respect to richness count are unavoidably biased against low-$\sigma_V$ clusters. For the richness limit of our sample this bias operates only for velocity dispersions less than $\approx$800 km/sec. 
We obtain a statistically reliable distribution of global velocity dispersions which, for velocity dispersions $\sigma_V \ga 800$ km/s, is free from systematic errors and biases. Above this value of $\sigma_V$ our distribution agrees very well with the most recent determination of the distribution of cluster X-ray temperatures, from which we conclude that $\beta = \sigma_V^2 \mu m_H/kT_X \approx 1$. The observed distribution $n(>\sigma_V)$, and especially its high-$\sigma_V$ tail above $\approx$800 km/s, provides a reliable and discriminative constraint on cosmological scenarios for the formation of structure. We stress the need for model predictions that produce exactly the same information as do the observations, namely dispersions of line-of-sight velocity of galaxies within the turn-around radius and inside a cylinder rather than a sphere, for a sample of model clusters with a richness limit that mimics that of the sample of observed clusters.
author:
- 'A. Mazure, P. Katgert, R. den Hartog, A. Biviano, P. Dubath, E. Escalera, P. Focardi, D. Gerbal, G. Giuricin, B. Jones, O. Le Fèvre, M. Moles, J. Perea,'
- 'G. Rhee'
date: 'Received date; accepted date'
subtitle: 'II. The Distribution of Velocity Dispersions of Rich Galaxy Clusters [^1]'
title: 'The ESO Nearby Abell Cluster Survey [^2]'
---

galaxies: clustering $-$ galaxies: kinematics and dynamics $-$ cosmology: observations $-$ dark matter

Introduction
============

The present-day distribution of cluster masses contains information about important details of the formation of large-scale structure in the Universe. In principle, the distribution of present cluster masses constrains the form and amplitude of the spectrum of initial fluctuations, via the tail of high-amplitude fluctuations from which the clusters have formed, as well as the cosmological parameters that influence the formation process.
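As a quick check of scales, the relation $\beta = \sigma_V^2 \mu m_H/kT_X \approx 1$ quoted in the abstract converts a cluster velocity dispersion into an X-ray gas temperature. A back-of-envelope sketch (the mean molecular weight $\mu \approx 0.6$ for ionized intracluster gas is an assumed typical value):

```python
# Physical constants (SI)
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_H = 1.6726e-27     # hydrogen mass, kg
keV = 1.602e-16      # 1 keV in J

def t_x_from_sigma(sigma_v_kms, beta=1.0, mu=0.6):
    """Gas temperature implied by beta = sigma_V^2 mu m_H / (k T_X)."""
    sigma = sigma_v_kms * 1.0e3              # km/s -> m/s
    T = mu * m_H * sigma**2 / (beta * k_B)
    return T, k_B * T / keV                  # (Kelvin, keV)

T, E = t_x_from_sigma(800.0)                 # sigma_V at the bias limit
# T ~ 4.7e7 K, i.e. ~4 keV: a typical rich-cluster X-ray temperature
```

With $\beta \approx 1$, the $\sigma_V \approx 800$ km/s limit of the sample thus corresponds to clusters of a few keV.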
Recently, several authors have attempted to use either the distribution of cluster mass estimates, or of gauges of the mass (such as the global velocity dispersion, or the temperature of the X-ray gas) to constrain parameters in cosmological scenarios. For example, Frenk et al. (1990, FWED hereafter) have attempted to constrain the bias parameter required by the CDM scenario through a comparison of their predictions from N-body simulations with the observed distribution of cluster velocity dispersions and X-ray temperatures. Subsequently, Henry & Arnaud (1991) have used the distribution of cluster X-ray temperatures to constrain the slope and the amplitude of the spectrum of fluctuations. More recently, Bahcall & Cen (1992), Biviano et al. (1993), and White et al. (1993) have used the distribution of estimated masses to constrain the cosmological density parameter, the power-spectrum index, as well as the bias parameter. For constraining the [*slope*]{} of the spectrum of initial fluctuations through the slope of the mass distribution, unbiased estimates of the mass (or of a relevant mass gauge) are required. The latter always require assumptions about either the shape of the galaxy orbits, the shape of the mass distribution, or about the distribution of the gas temperature. Therefore gauges of the total mass that are based on directly observable parameters, such as global velocity dispersions or central X-ray temperatures, are sometimes preferable. However, the use of such mass gauges also requires a lot of care. Global velocity dispersions, although fairly easily obtained, can be affected by projection effects and contamination by field galaxies, as discussed by e.g. FWED. In addition, velocity dispersions may depend on the size of the aperture within which they are determined, because the dispersion of the line-of-sight velocities often varies with distance from the cluster centre (e.g. den Hartog & Katgert 1995).
More fundamentally, the velocity dispersion of the galaxies may be a biased estimator of the cluster potential (or mass) as a result of dynamical friction and other relaxation processes. In principle, the determination of the X-ray temperatures is more straightforward. However, temperature estimates may be affected by cooling flows, small-scale inhomogeneities (Walsh & Miralda-Escudé 1995), bulk motions or galactic winds (Metzler & Evrard 1994). Also, temperature estimates of high accuracy require high spectral resolution and are therefore less easy to obtain. To obtain useful constraints on the [*amplitude*]{} of the fluctuation spectrum, it is essential that the completeness of the cluster sample in the chosen volume is accurately known. The completeness of cluster samples constructed from galaxy catalogues obtained with automatic scanning machines, such as the COSMOS and APM machines (see e.g. Lumsden et al. 1992, LNCG hereafter, and Dalton et al. 1992) is, in principle, easier to discuss than that of the ACO catalogue, which until recently was the only source of cluster samples. In theory, one is primarily interested in the completeness with respect to a well-defined limit in mass. In practice, cluster samples based on optical catalogues can be defined only with respect to richness, and the relation between richness and mass seems to be very broad. A further complication is that all optical cluster catalogues suffer from superposition effects, which can only be resolved through extensive spectroscopy. Cluster samples based on X-ray surveys do not suffer from superposition effects, but they are (of necessity) flux-limited, and the extraction of volume-limited samples with well-defined luminosity limits requires follow-up spectroscopy (e.g. Pierre et al. 1994). The large spread in the relation between X-ray luminosity and X-ray temperature (e.g. 
Edge & Stewart 1991) implies that, as with the optical samples, the construction of cluster samples with a well-defined mass limit from X-ray surveys is not at all trivial. In this paper we discuss the distribution of velocity dispersions, for a volume-limited sample of rich ACO clusters with known completeness. The discussion is based on the results of our ESO Nearby Abell Cluster Survey (ENACS, Katgert et al. 1995, hereafter Paper I), which has yielded 5634 reliable galaxy redshifts in the direction of 107 rich, nearby ACO clusters with redshifts out to about 0.1. We have supplemented our data with about 1000 redshifts from the literature for galaxies in 37 clusters. In Section 2 we describe the construction of a volume-limited sample of rich clusters. In Section 3 we discuss superposition effects, and introduce a [*3-dimensional*]{} richness (derived from Abell’s projected, 2-dimensional richness). In Section 4 we discuss the completeness of the cluster sample, and we estimate the spatial density of rich clusters. In Section 5 we summarize the procedure that we used for eliminating interlopers, which is essential for obtaining unbiased estimates of velocity dispersion. In Section 6 we derive the properly normalized distribution of velocity dispersions. In Section 7 we compare our distribution with earlier results from the literature, which include both distributions of velocity dispersions and of X-ray temperatures. Finally, we also compare our result with some published predictions from N-body simulations.

The Cluster Sample
==================

Requirements
------------

The observational determination of the distribution of cluster velocity dispersions, $n(\sigma_V)$, requires a cluster sample that is either not biased with respect to $\sigma_V$, or that has a bias which is sufficiently well-known that it can be corrected for in the observations or be accounted for in the predictions.
If this condition is fulfilled for a certain range of velocity dispersions, the [*shape*]{} of the distribution can be determined over that range. A determination of the [*amplitude*]{} of the distribution requires that the spatial completeness of the cluster sample is also known. Until complete galaxy redshift surveys over large solid angles and out to sufficiently high redshifts become available, it will not be possible to construct cluster samples that are complete to a well-defined limit in velocity dispersion. The only possible manner in which this ideal can at present be approached is by selecting cluster samples from catalogues that are based on overdensities in projected distributions of galaxies (or in X-ray surface brightness). By selecting only the clusters with an apparent richness (i.e. surface density in a well-defined range of absolute magnitudes) above a certain lower limit, one may hope to obtain an approximate lower limit in intrinsic richness (i.e. with fore- and background galaxies removed). By virtue of the general (but very broad) correlation between richness and velocity dispersion one can then expect to achieve completeness with respect to velocity dispersion above a lower limit in $\sigma_V$. Below that limit the cluster sample will be inevitably incomplete with respect to velocity dispersion, in a manner that is specific for the adopted richness limit. In other words: observed and predicted velocity dispersion distributions can be compared directly above the limiting $\sigma_V$ set by the richness limit. For smaller values of the velocity dispersion the prediction should take into account the bias introduced by the particular richness limit.
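The selection effect just described can be illustrated with a toy Monte Carlo. All numbers below are illustrative assumptions, not the paper's measured richness-$\sigma_V$ relation: the point is only that a richness cut applied to a broad relation leaves the sample nearly complete at high $\sigma_V$ and strongly incomplete at low $\sigma_V$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Toy population: lognormal velocity dispersions, and a richness count that
# tracks sigma_V on average but with a large intrinsic scatter (assumed).
sigma_v = rng.lognormal(mean=np.log(600.0), sigma=0.4, size=n)      # km/s
log_R = np.log10(60.0) + 1.5 * np.log10(sigma_v / 600.0) \
        + rng.normal(0.0, 0.25, size=n)                             # broad relation
selected = 10.0 ** log_R >= 50.0        # richness-limited sample, R >= 50

def completeness(lo, hi):
    """Fraction of clusters with lo <= sigma_V < hi surviving the cut."""
    band = (sigma_v >= lo) & (sigma_v < hi)
    return selected[band].mean()

high = completeness(1000.0, 2000.0)     # nearly complete at high sigma_V
low = completeness(200.0, 400.0)        # strongly incomplete at low sigma_V
```

Running the completeness estimate in bands of $\sigma_V$ reproduces the qualitative behaviour described in the text: above some limiting dispersion the richness-selected sample is essentially complete, below it the incompleteness depends entirely on the adopted richness cut and on the scatter of the relation.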
The Southern ACO $R\geq 1$ Cluster Sample ----------------------------------------- The ENACS was designed to establish, in combination with data already available in the literature, a database for a complete sample of $R\geq 1$ ACO clusters, out to a redshift of $z=0.1$, in a solid angle of 2.55 sr around the SGP, defined by $b\leq -30\degr$ and $-70\degr \le \delta \leq 0\degr$ (a volume we will refer to as the ‘cone’). For our sample we selected clusters which at the time either had a known spectroscopic redshift $z \leq 0.1$, or which had $m_{10}\leq 16.9$. Judging from the $m_{10}-z$ relation the clusters with $m_{10}\leq 16.9$ should include most of the $z\leq 0.1$ clusters. With this selection, the contamination from $z>0.1$ clusters would clearly be non-negligible due to the spread in the $m_{10}-z$ relation. [lrcrrrr]{} ACO & $N_{\rm mem}$ & z & $\sigma_V^c$ & $C_{\rm ACO}$ & $C_{\rm bck}$ & $C_{\rm 3D}$\ A0013 & 37 & 0.0943 & 886 & 96 & 20.7 & 98.2\ A0074$^\ddagger$ & 5 & 0.0654 & - & 50 & 15.0 & -\ A0085$^\ddagger$ & 116 & 0.0556 & 853 & 59 & 15.9 & 55.9\ & 17 & 0.0779 & 462 & 59 & 15.9 & 7.7\ A0087 & 27 & 0.0550 & 859 & 50 & 24.3 & 47.8\ A0119$^\dagger$ & 125 & 0.0442 & 740 & 69 & 17.5 & 76.1\ A0126$^\ddagger$ & 1 & 0.0850 & - & 51 & 24.3 & -\ A0133$^\ddagger$ & 9 & 0.0566 & - & 60 & 15.8 & -\ A0151$^\dagger$ & 63 & 0.0533 & 669 & 72 & 17.5 & 39.1\ & 40 & 0.0997 & 857 & 72 & 17.5 & 25.2\ & 29 & 0.0410 & 395 & 72 & 17.5 & 18.3\ A0168 & 76 & 0.0450 & 517 & 89 & 16.5 & 80.2\ A0261$^\ddagger$ & 1 & 0.0467 & - & 63 & 30.5 & -\ A0277$^\ddagger$ & 2 & 0.0927 & - & 50 & 16.1 & -\ A0295 & 30 & 0.0426 & 297 & 51 & 24.3 & 52.6\ A0303 & 4 & 0.0595 & - & 50 & 24.3 & 18.6\ A0367 & 27 & 0.0907 & 963 & 101 & 19.5 & 108.4\ A0415$^\ddagger$ & 1 & 0.0788 & - & 67 & 20.8 & -\ A0420 & 19 & 0.0858 & 514 & 55 & 26.3 & 46.8\ A0423$^\ddagger$ & 2 & 0.0795 & - & 89 & 24.3 & -\ A0484$^\ddagger$ & 4 & 0.0386 & - & 50 & 27.3 & -\ A0496$^\ddagger$ & 134 & 0.0328 & 682 & 50 & 16.8 & 
57.0\ A0500$^\ddagger$ & 1 & 0.0666 & - & 58 & 17.2 & -\ A0514 & 82 & 0.0713 & 874 & 78 & 14.5 & 68.4\ A0524 & 26 & 0.0779 & 822 & 74 & 34.4 & 65.6\ & 10 & 0.0561 & 211 & 74 & 34.4 & 25.3\ A2361 & 24 & 0.0608 & 329 & 69 & 27.3 & 70.0\ A2362 & 17 & 0.0608 & 340 & 50 & 25.3 & 47.4\ A2377$^\ddagger$ & 1 & 0.0808 & - & 94 & 27.3 & -\ A2382$^\ddagger$ & 1 & 0.0648 & - & 50 & 17.7 & -\ A2384$^\ddagger$ & 1 & 0.0943 & - & 72 & 24.8 & -\ A2399$^\ddagger$ & 1 & 0.0587 & - & 52 & 16.1 & -\ A2400$^\ddagger$ & 1 & 0.0881 & - & 56 & 23.1 & -\ A2401 & 23 & 0.0571 & 472 & 66 & 18.6 & 64.9\ A2410$^\ddagger$ & 1 & 0.0806 & - & 54 & 17.7 & -\ A2420$^\ddagger$ & 1 & 0.0838 & - & 88 & 26.3 & -\ A2426 & 15 & 0.0978 & 846 & 114 & 26.3 & 58.5\ & 11 & 0.0879 & 313 & 114 & 26.3 & 42.9\ A2436 & 14 & 0.0914 & 530 & 56 & 27.3 & 61.4\ A2480 & 11 & 0.0719 & 862 & 108 & 23.0 & 80.0\ A2492$^\ddagger$ & 2 & 0.0711 & - & 62 & 18.6 & -\ A2500 & 13 & 0.0895 & 477 & 71 & 82.2 & 55.3\ & 12 & 0.0783 & 283 & 71 & 82.2 & 51.0\ A2502$^\ddagger$ & 0 & 0.0972 & - & 58 & 24.3 & -\ A2528$^\ddagger$ & 1 & 0.0955 & - & 73 & 22.0 & -\ A2538$^\ddagger$ & 42 & 0.0832 & 861 & 83 & 19.1 & 95.3\ A2556$^\ddagger$ & 2 & 0.0865 & - & 67 & 21.2 & -\ A2559$^\ddagger$ & 1 & 0.0796 & - & 73 & 28.3 & -\ A2566$^\ddagger$ & 1 & 0.0821 & - & 67 & 28.4 & -\ A2569 & 36 & 0.0809 & 481 & 56 & 24.3 & 70.5\ A2599$^\ddagger$ & 1 & 0.0880 & - & 84 & 16.2 & -\ A2638$^\ddagger$ & 1 & 0.0825 & - & 123 & 30.5 & -\ A2644 & 12 & 0.0688 & 259 & 59 & 24.3 & 28.6\ [lrcrrrr]{} ACO & $N_{\rm mem}$ & z & $\sigma_V^c$ & $C_{\rm ACO}$ & $C_{\rm bck}$ & $C_{\rm 3D}$\ A2670$^\ddagger$ & 219 & 0.0762 & 908 & 142 & 15.9 & 114.1\ A2717$^\dagger$ & 56 & 0.0490 & 512 & 52 & 10.8 & 43.4\ A2734 & 77 & 0.0617 & 581 & 58 & 12.3 & 45.9\ A2755 & 22 & 0.0949 & 789 & 120 & 28.2 & 90.6\ A2764 & 19 & 0.0711 & 788 & 55 & 19.6 & 59.1\ A2765 & 16 & 0.0801 & 905 & 55 & 47.7 & 58.7\ A2799 & 36 & 0.0633 & 424 & 63 & 20.2 & 71.3\ A2800 & 34 & 0.0636 & 430 & 59 & 18.6 & 
57.4\ A2819 & 50 & 0.0747 & 406 & 90 & 23.8 & 45.2\ & 44 & 0.0867 & 359 & 90 & 23.8 & 39.7\ A2854 & 22 & 0.0613 & 369 & 64 & 28.1 & 58.0\ A2889$^\ddagger$ & 1 & 0.0667 & - & 65 & 22.0 & -\ A2911 & 31 & 0.0808 & 576 & 72 & 21.3 & 65.7\ A2915 & 4 & 0.0864 & - & 55 & 25.0 & -\ A2923 & 16 & 0.0715 & 339 & 50 & 42.3 & 44.8\ A2933 & 9 & 0.0925 & - & 77 & 28.2 & 86.1\ A2954 & 6 & 0.0566 & - & 121 & 32.2 & 38.3\ A2955$^\ddagger$ & 0 & 0.0989 & - & 56 & 34.4 & -\ A3009 & 12 & 0.0653 & 514 & 54 & 21.3 & 56.5\ A3040$^\ddagger$ & 1 & 0.0923 & - & 69 & 17.2 & -\ A3093 & 22 & 0.0830 & 435 & 93 & 22.5 & 63.5\ A3094 & 66 & 0.0672 & 653 & 80 & 18.6 & 65.8\ A3107$^\ddagger$ & 0 & 0.0875 & - & 61 & 20.4 & -\ A3108 & 7 & 0.0625 & - & 73 & 30.5 & 51.7\ & 5 & 0.0819 & - & 73 & 30.5 & 36.9\ A3111 & 35 & 0.0775 & 770 & 54 & 18.6 & 52.9\ A3112 & 67 & 0.0750 & 950 & 116 & 25.2 & 92.8\ A3122 & 87 & 0.0643 & 755 & 100 & 21.4 & 88.7\ A3126$^\ddagger$ & 38 & 0.0856 & 1041 & 75 & 29.3 & 88.1\ A3128$^\dagger$ & 180 & 0.0599 & 802 & 140 & 19.4 & 129.3\ & 12 & 0.0395 & 386 & 140 & 19.4 & 8.6\ & 12 & 0.0771 & 103 & 140 & 19.4 & 8.6\ A3135$^\ddagger$ & 1 & 0.0633 & - & 111 & 28.6 & -\ A3144 & 1 & 0.0423 & - & 54 & 16.3 & -\ A3151 & 34 & 0.0676 & 747 & 52 & 23.8 & 60.0\ A3152$^\ddagger$ & 0 & 0.0891 & - & 51 & 26.0 & -\ A3153$^\ddagger$ & 0 & 0.0958 & - & 64 & 26.0 & -\ A3158 & 105 & 0.0591 & 1005 & 85 & 10.8 & 82.5\ A3194 & 32 & 0.0974 & 790 & 83 & 13.3 & 93.5\ A3202 & 27 & 0.0693 & 433 & 65 & 28.1 & 61.3\ A3223 & 68 & 0.0601 & 636 & 100 & 14.2 & 69.6\ A3264 & 5 & 0.0978 & - & 53 & 37.6 & 41.2\ A3266$^\ddagger$ & 158 & 0.0589 & 1105 & 91 & 19.0 & 97.1\ A3301 & 5 & 0.0536 & - & 172 & 7.3 & -\ A3330$^\ddagger$ & 1 & 0.0910 & - & 52 & 18.0 & -\ A3334$^\ddagger$ & 32 & 0.0965 & 671 & 82 & 29.3 & 86.9\ A3341 & 63 & 0.0378 & 566 & 87 & 23.9 & 59.2\ & 15 & 0.0776 & 751 & 87 & 23.9 & 14.1\ A3351$^\ddagger$ & 0 & 0.0819 & - & 114 & 35.0 & -\ A3360$^\ddagger$ & 36 & 0.0848 & 801 & 85 & 34.6 & 107.7\ A3651 & 78 
& 0.0599 & 661 & 75 & 33.2 & 91.8\ A3667$^\dagger$ & 162 & 0.0556 & 1059 & 85 & 33.2 & 85.1\ A3677 & 8 & 0.0912 & - & 60 & 25.0 & 37.7\ A3682 & 10 & 0.0921 & 863 & 66 & 41.1 & 97.3\ A3691 & 33 & 0.0873 & 792 & 115 & 40.9 & 142.9\ [lrcrrrr]{} ACO & $N_{\rm mem}$ & z & $\sigma_V^c$ & $C_{\rm ACO}$ & $C_{\rm bck}$ & $C_{\rm 3D}$\ A3693 & 16 & 0.0910 & 585 & 77 & 25.0 & 49.5\ A3695 & 81 & 0.0893 & 845 & 123 & 39.5 & 137.2\ A3696 & 12 & 0.0882 & 428 & 58 & 30.6 & 88.6\ A3698$^\ddagger$ & 1 & 0.0198 & - & 71 & 7.7 & -\ A3703 & 18 & 0.0735 & 455 & 52 & 27.3 & 44.6\ & 13 & 0.0914 & 697 & 52 & 27.3 & 32.2\ A3705 & 29 & 0.0898 & 1057 & 100 & 32.0 & 93.3\ A3716$^\ddagger$ & 65 & 0.0448 & 781 & 66 & 11.6 & 61.6\ A3733 & 41 & 0.0389 & 696 & 59 & 4.7 & 59.4\ A3744 & 66 & 0.0381 & 559 & 70 & 10.5 & 62.8\ A3764 & 38 & 0.0757 & 671 & 53 & 24.0 & 68.1\ A3781 & 4 & 0.0571 & - & 79 & 16.2 & 25.4\ & 4 & 0.0729 & - & 79 & 16.2 & 25.4\ A3795 & 13 & 0.0890 & 336 & 51 & 31.9 & 77.0\ A3799 & 10 & 0.0453 & 428 & 50 & 24.9 & 50.0\ A3806 & 84 & 0.0765 & 813 & 115 & 11.7 & 89.4\ A3809 & 89 & 0.0620 & 499 & 73 & 20.8 & 55.5\ A3822 & 84 & 0.0759 & 969 & 113 & 22.5 & 112.8\ A3825 & 59 & 0.0751 & 698 & 77 & 12.0 & 58.4\ A3826$^\ddagger$ & 1 & 0.0754 & - & 62 & 13.2 & -\ A3827 & 20 & 0.0984 & 1114 & 100 & 28.4 & 116.7\ A3844$^\ddagger$ & 1 & 0.0730 & - & 52 & 23.0 & -\ A3879 & 46 & 0.0669 & 516 & 114 & 39.5 & 85.0\ A3897 & 10 & 0.0733 & 548 & 63 & 21.3 & 64.8\ A3911$^\ddagger$ & 1 & 0.0960 & - & 58 & 30.5 & -\ A3921 & 32 & 0.0936 & 585 & 93 & 25.0 & 99.3\ A3969$^\ddagger$ & 1 & 0.0699 & - & 55 & 40.5 & -\ A4008 & 27 & 0.0549 & 424 & 66 & 36.0 & 64.1\ A4010 & 30 & 0.0957 & 615 & 67 & 28.2 & 79.3\ A4038$^\ddagger$ & 51 & 0.0292 & 839 & 117 & 17.1 & 110.4\ A4053 & 17 & 0.0720 & 731 & 64 & 16.2 & 43.9\ & 9 & 0.0501 & - & 64 & 16.2 & 23.2\ A4059$^\ddagger$ & 10 & 0.0488 & 526 & 66 & 11.0 & 69.9\ A4067$^\ddagger$ & 30 & 0.0989 & 719 & 72 & 30.5 & 75.0\ [[**Notes:**]{} col.(1): A dagger indicates a 
combination of data from the ENACS and from the literature, a double dagger indicates that only data from the literature were used; col.(2): secondary systems are listed if they contain either $\geq$10 redshifts or $\geq$50% of the number of redshifts of the main system; col.(3): redshift values (or photometric estimates, indicated by $N_{\rm z}$=0) of clusters for which no ENACS data exist, were taken from Abell et al. (1989), Struble & Rood (1991), Dalton et al. (1994), Postman et al. (1992), West (private communication) and Quintana & Ramírez (1995)]{} At present, after completion of our project and with other new data in the literature, the region defined above contains 128 $R\geq 1$ ACO clusters with a measured or estimated redshift $z\leq 0.1$. A spectroscopically confirmed redshift $z\leq 0.1$ is available for 122 clusters, while for the remaining 6 a redshift $\leq 0.1$ has been estimated on the basis of photometry. The redshift values (or estimates), if not from the ENACS, were taken from Abell et al.(1989), Struble & Rood (1991), Peacock and West (1992, and private communication), Postman et al. (1992), Dalton et al. (1994) and Quintana & Ramírez (1995). We will show below that the 128 clusters form a sample that can be used for statistical analysis. Of the 122 redshift surveys of clusters with $z \leq 0.1$ in the specified region, 78 were contributed to by the ENACS, either exclusively or in large measure. In 80 of the 122 redshift surveys we find at least one system with 10 or more member galaxies. Of the latter 80 surveys, 68 were contributed to by our survey. In Tab. 1a we list several properties of the main systems in the direction of the 128 clusters (in the ‘cone’ and with $z < 0.1$) that constitute the sample on which we will base our discussion of the distribution of cluster velocity dispersions, as well as the properties of 14 subsystems with $z < 0.1$ that either have 10 redshifts or at least half the number of redshifts in the main system. 
In Tab. 1b we list the same type of data for the other systems described in Paper I, with at least 10 members so that a velocity dispersion could be determined. As the latter are outside the ‘cone’ defined above (or have $z > 0.1$), they have not been used in the present discussion but could be useful for other purposes. A description of the ways in which the data in Tabs. 1$a$ and 1$b$ have been obtained will be given in the next Sections. [lrcrrrr]{} ACO & $N_{\rm mem}$ & z & $\sigma_V^c$ & $C_{\rm ACO}$ & $C_{\rm bck}$ & $C_{\rm 3D}$\ A0118 & 30 & 0.1148 & 649 & 77 & 35.8 & 89.1\ A0229 & 32 & 0.1139 & 856 & 77 & 24.3 & 83.1\ A0380 & 25 & 0.1337 & 703 & 82 & 35.8 & 71.8\ A0543 & 10 & 0.0850 & 413 & 90 &244.2 &139.3\ A0548$^\dagger$ &323 & 0.0416 & 842 & 92 & 12.2 & 88.4\ & 21 & 0.1009 & 406 & & & 5.4\ & 15 & 0.0868 &1060 & & & 3.9\ A0754$^\dagger$ & 90 & 0.0543 & 749 & 92 & 17.0 &101.6\ A0957 & 34 & 0.0447 & 741 & 55 & 16.8 & 67.8\ A0978 & 56 & 0.0544 & 498 & 55 & 16.1 & 61.4\ A1069 & 35 & 0.0650 &1120 & 45 & 17.2 & 54.5\ A1809$^\dagger$ & 58 & 0.0791 & 702 & 78 & 16.0 & 81.7\ A2040 & 37 & 0.0461 & 673 & 52 & 15.9 & 58.4\ A2048 & 25 & 0.0972 & 668 & 75 & 17.7 & 59.4\ A2052$^\dagger$ & 62 & 0.0350 & 655 & 41 & 17.5 & 52.5\ A2353 & 24 & 0.1210 & 599 & 51 & 26.3 & 59.8\ A2715$^*$ & 14 & 0.1139 & 556 &112 & 30.5 & 58.7\ A2778 & 17 & 0.1018 & 947 & 51 & 11.9 & 26.7\ & 10 & 0.1182 & 557 & & & 15.7\ A2871 & 18 & 0.1219 & 930 & 92 & 48.0 & 63.0\ & 14 & 0.1132 & 319 & & & 49.0\ A3141 & 15 & 0.1058 & 646 & 55 & 16.2 & 48.5\ A3142$^*$ & 21 & 0.1030 & 814 & 78 & 13.0 & 50.3\ & 12 & 0.0658 & 785 & & & 28.7\ A3354 & 57 & 0.0584 & 383 & 54 & 9.8 & 33.6\ A3365 & 32 & 0.0926 &1153 & 68 & 32.0 & 91.5\ A3528 & 28 & 0.0526 & 969 & 70 & 6.2 & 54.7\ A3558$^\dagger$ &328 & 0.0479 & 939 &226 & 8.7 &127.0\ A3559 & 39 & 0.0461 & 443 &141 & 9.3 & 85.0\ & 11 & 0.1119 & 537 & & & 24.0\ A3562 &114 & 0.0490 &1048 &129 & 12.6 &140.4\ A3864 & 32 & 0.1033 & 940 & 60 & 29.5 & 69.8\ [[**Notes:**]{} 
col.(1): an asterisk indicates that the system is in the ‘cone’, a dagger indicates a combination of data from the ENACS and from the literature; col.(2): secondary systems are listed if they contain either $\geq$10 redshifts or $\geq$50% of the number of redshifts of the main system]{} Superposition Effects in the ACO Cluster Catalogue ================================================== It is clear from the redshift distributions towards our target clusters in the ENACS (see Fig. 7 in Paper I) that for most clusters in the $R_{\rm ACO} \geq 1$ sample the fraction of fore- and background galaxies is non-negligible. In Paper I we have discussed how one can identify the fore- and background galaxies, namely as the ‘complement’ of the galaxies in the physically relevant systems. In order to identify the latter, we used a fixed velocity gap to decide whether two galaxies in the survey that are adjacent in redshift are part of the same system or of two separate systems. The minimum velocity difference that defines galaxies to be in separate systems was determined for the ENACS from the sum of the 107 distributions of the velocity gap between galaxies that are adjacent in velocity. We found that a gap-width of 1000 km/s is sufficient to identify systems, that it does not break up systems inadvertently, and is conservative in the sense that it does not eliminate outlying galaxies of a system. Note that the gap-size of 1000 km/s is geared to the sampling in our survey and to the average properties of the redshift systems; for other datasets the required gap-size may be different. The systems that result from applying this procedure to the ENACS data are given in Tab. 6 of Paper I. Having identified the systems in the redshift surveys of our clusters we can quantify the effect of the superposition of fore- and background galaxies on the ACO richness estimates. 
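As an illustration, the fixed-gap procedure described above can be sketched as follows; the function name and the toy velocities are ours, not taken from the ENACS data, and the actual analysis of course involves further refinements.

```python
def split_into_systems(velocities_kms, gap=1000.0):
    """Partition a non-empty list of radial velocities (km/s) into systems.

    Galaxies adjacent in velocity belong to the same system when their
    velocity difference is below `gap` (1000 km/s for the ENACS sampling).
    """
    v = sorted(velocities_kms)
    systems, current = [], [v[0]]
    for a, b in zip(v, v[1:]):
        if b - a < gap:
            current.append(b)
        else:                     # gap exceeded: close the current system
            systems.append(current)
            current = [b]
    systems.append(current)
    return systems

# Toy example: a foreground group near 9000 km/s in front of a cluster
# near 16000 km/s is separated cleanly by the 1000 km/s gap.
toy = [8900, 9100, 9300, 15800, 16000, 16150, 16400]
print([len(s) for s in split_into_systems(toy)])   # [3, 4]
```

Note that, as stressed in the text, the chosen gap-width is tied to the sampling density of the survey; a sparser survey would require a larger gap.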
Our observing strategy has been to obtain, for the target clusters, redshifts for the $N$ brightest galaxies in a field consisting of 1 to 3 circular apertures with a diameter of $\approx0.5\degr$. For most clusters in our programme this corresponds roughly to the size of the field in which the richness count was determined. We can therefore estimate an intrinsic ‘3-D’ richness of a cluster as the product of the fraction $f_{\rm main}$ of galaxies that reside in the main system (i.e. in the system with the largest number of members) and the [*total*]{} galaxy count obtained by Abell et al. (1989). The total count is not available in the ACO catalogue and must be recovered as the sum of the corrected count $C_{\rm ACO}$ published by Abell et al. (1989) and the correction for the contribution of the field, $C_{\rm bck}$, that they subtracted from their measured total count. The intrinsic 3-D richness thus follows as $C_{\rm 3D} = f_{\rm main} \times (C_{\rm ACO} + C_{\rm bck})$, in which we replace the statistical field corrections of Abell et al. (1989) (based on integrals of the galaxy luminosity function) by field corrections based on redshift surveys. The first ingredient in the calculation of the intrinsic 3-D richness is $f_{\rm main}$. In Fig. 1$a$ we show the distribution of the fraction $f_{\rm main}$ for the 80 redshift surveys with $N\geq 10$ and $z\le 0.1$ in our sample. We estimated $f_{\rm main}$ only for systems with at least 10 measured redshifts, because for $N<10$ the definition of systems is not very stable, so that the determination of $f_{\rm main}$ is likewise not very reliable. According to Girardi et al. (1993), the minimum number of galaxies required to obtain a reliable estimate of $\sigma_V$ also happens to be about 10. We find that, on average, $\approx$ 73% of the galaxies in our redshift surveys towards $R_{\rm ACO} \geq 1$ clusters with $z\la 0.1$ belong to the main system. 
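In code, the 3-D richness relation $C_{\rm 3D} = f_{\rm main} \times (C_{\rm ACO} + C_{\rm bck})$ is a one-liner; the check below uses the A0085 entry of Tab. 1$a$, with $f_{\rm main}$ back-solved from the tabulated $C_{\rm 3D}$ since the table does not list it directly (a sketch for illustration, not the actual pipeline).

```python
def c3d(f_main, c_aco, c_bck):
    """Intrinsic 3-D richness: fraction in the main system times total count."""
    return f_main * (c_aco + c_bck)

# A0085 from Tab. 1a: C_ACO = 59, C_bck = 15.9, C_3D = 55.9
f_main = 55.9 / (59 + 15.9)              # ~0.75, back-solved for illustration
print(round(c3d(f_main, 59, 15.9), 1))   # recovers C_3D = 55.9
```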
The correction $C_{\rm bck}$, which accounts for the contribution of the field to the total count, was derived as follows. Abell et al. (1989) describe a parametrization of the background correction, which is the number of field galaxies down to a limiting magnitude of $m_3+2$ (the limit of the uncorrected richness count), in the same aperture in which the total count was made. The latter has a diameter that is based on the estimated distance through the $m_{10}-z$ relation. To calculate $C_{\rm bck}$ for a cluster, it is thus necessary to have its $m_3$ and $m_{10}$. These parameters are known for those 65 of the 80 clusters in our sample that are in the southern part of the ACO catalogue. The other 15 clusters were described by Abell (1958), who did not use a parametrized estimate of the background, nor did he list $m_3$. Instead, he estimated $C_{\rm bck}$ by counting all galaxies down to $m_3+2$ in a field near each cluster that clearly did not contain another cluster. To recover an approximate value of $C_{\rm bck}$ for these 15 clusters we first estimated $m_3$ from $m_{10}$. Using 97 $R\ge 1$ clusters in our sample out to $z=0.1$ we found $m_3 = 0.987\, m_{10} - 0.608$, with a spread of 0.30 mag. Finally, we used the parametrization of Abell et al. (1989) for these 15 clusters to calculate $C_{\rm bck}$ (which should be very close, but need not be identical, to the value subtracted by Abell). In Fig. 1$b$ we show the relation between $f_{\rm main}$ and $f_{\rm ACO} (= C_{\rm ACO} / (C_{\rm ACO} + C_{\rm bck}))$. The average and median values of $f_{\rm ACO}$ are both 0.76, i.e. practically identical to the corresponding values for $f_{\rm main}$. So, [*on average*]{}, the field correction $C_{\rm bck}$ applied by Abell et al. (1989) was almost the same as the field correction we derive from our redshift surveys. However, $f_{\rm main}$ spans a much wider range than does $f_{\rm ACO}$. It thus appears that the field correction of Abell et al. 
(1989) has probably introduced considerable noise into the field-corrected richness estimates. The reason for this is that their correction was based on an ‘average field’, while for an individual cluster the actual field may differ greatly from the average. This conclusion is supported by the data in Fig. 1$c$, where we show the relation between $C_{\rm ACO}$, the count corrected for the model field contribution according to Abell et al. (1989), and $C_{\rm 3D}$, the intrinsic 3-D count calculated using $f_{\rm main}$, which thus takes into account the actual field contamination for each cluster individually. Statistically, $C_{\rm ACO}$ and $C_{\rm 3D}$ appear to measure the same quantity, i.e. the field correction of Abell et al. (1989) is, [*on average*]{}, in very good agreement with our estimates from the redshift surveys. However, the variations in the real field with respect to the average field must be mainly responsible for the very large spread in the values of $C_{\rm 3D}$ for a fixed value of $C_{\rm ACO}$. As the distribution of points in Fig. 1$c$ seems to be very symmetric around the $C_{\rm ACO} = C_{\rm 3D}$-line, we will later assume that the statistical properties of a complete cluster sample with $C_{\rm ACO} \ge 50$ are not different from those of a sample with $C_{\rm 3D} \ge 50$. For an individual cluster, $C_{\rm 3D}$ is obviously a much more meaningful parameter than $C_{\rm ACO}$. Yet, one has to be aware of possible systematic effects that may affect its use. First, as $C_{\rm 3D}$ involves $f_{\rm main}$, any bias in the determination of $f_{\rm main}$ could also enter $C_{\rm 3D}$. The number of unrelated fore- and background galaxies is likely to depend on the redshift of a cluster, and therefore $f_{\rm main}$ might depend on redshift. However, as is evident from Fig. 2$a$, there is hardly any indication in our data that this is the case. 
At most, there may be a tendency for a slight bias against low values of $f_{\rm main}$ at the lowest redshifts. This is consistent with the fact that for the nearest clusters the field contribution is low and may not be very easy to determine properly. In principle, a slight bias against low values of $f_{\rm main}$ could result in a slight bias against low values of $C_{\rm 3D}$ for nearby clusters. But, as can be seen from Fig. 2$b$, there is no indication that for nearby clusters the $C_{\rm 3D}$ values are higher than average. Secondly, there is a general tendency to select preferentially the richer clusters at higher redshifts, and a bias could therefore exist against the less rich systems at higher redshifts. Although the systems with the highest values of $C_{\rm 3D}$ are indeed found near our redshift limit, there is no evidence in Fig. 2$b$ that there is a strong bias against systems with $C_{\rm 3D} \approx 50$ near the redshift limit. Thirdly, the problem of the superposition of two rich systems along the line of sight is not fully addressed by the simple definition of $C_{\rm 3D}$, and it is certainly possible that if two $R_{\rm ACO} \geq 1$ systems are observed in superposition the most distant one may not be recognized as such. Fortunately, the probability of such a situation occurring is low. As is clear from Fig. 2$c$, there is no tendency for clusters with a high total count $C_{\rm ACO} + C_{\rm bck}$ to have a smaller value of $f_{\rm main}$, as would be expected if superposition contributed significantly to the richness. As a matter of fact, given the density of $R\geq 1$ clusters (see next Section), we expect that for our sample of 128 clusters there is a probability of about 1% that a superposition of two $R\geq 1$ clusters occurs in our data. This is consistent with the fact that in only one case, viz. that of A2500, we observe a secondary system with a value of $C_{\rm 3D} > 50$. 
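For reference, two of the simple quantities used in this Section — the $m_3$ estimate obtained from the fit $m_3 = 0.987\,m_{10} - 0.608$ and the field-corrected fraction $f_{\rm ACO}$ — can be sketched as follows; the numerical example again uses the A0085 counts from Tab. 1$a$, and the function names are ours.

```python
def m3_from_m10(m10):
    """Estimate m3 from m10 via the linear fit to 97 R >= 1 clusters
    (spread of 0.30 mag about the relation)."""
    return 0.987 * m10 - 0.608

def f_aco(c_aco, c_bck):
    """Field-corrected fraction C_ACO / (C_ACO + C_bck), as in Fig. 1b."""
    return c_aco / (c_aco + c_bck)

print(round(m3_from_m10(16.9), 2))   # m3 estimate at the sample limit m10 = 16.9
print(round(f_aco(59, 15.9), 2))     # A0085: 0.79, close to the 0.76 average
```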
In principle, the sample of clusters with $C_{\rm 3D}\geq 50$ is to be preferred over the one with $C_{\rm ACO}\geq 50$, as in the former the effects of superposition have been accounted for in a proper way. However, $C_{\rm 3D}$ cannot be used as the main selection criterion for a cluster sample, because it requires $f_{\rm main}$ to be available for all clusters. From Fig. 1$c$ it appears that, in a statistical sense, a cluster sample with $C_{\rm ACO} \ge 50$ can be used as a substitute for a sample with $C_{\rm 3D}\ga 50$ as, apart from the large scatter, the two richnesses are statistically equivalent. As a result of the large scatter in Fig. 1$c$, one can define a subsample of clusters from the $C_{\rm ACO} \ge 50$ sample that is complete in terms of $C_{\rm 3D}$ only if one limits the subsample to systems with $C_{\rm 3D} \ge 75$. The Completeness of the Sample and the Density of Rich Clusters =============================================================== In the following discussion and in the remaining sections of this paper we will use the term “cluster” to refer to the main system in Tab. 1$a$, i.e. the system with the largest number of redshifts in each pencil beam, unless we explicitly state otherwise. Hence, the 14 secondary systems in Tab. 1$a$ are not included, nor are the systems in Tab. 1$b$, as the latter are not in the ‘cone’ defined in Section 2.2, or have $z > 0.1$. We have estimated the completeness of our cluster sample with respect to redshift from the distribution of the number of clusters in 10 concentric shells, each with a volume equal to one-tenth of the total volume out to $z=0.1$. The result is shown in Fig. 3. The dashed line shows the distribution for all 128 $C_{\rm ACO}\geq 50$ clusters out to a redshift of 0.1. The solid line represents the subset of 80 clusters with at least 10 redshifts (for which a velocity dispersion is therefore available). 
Finally, the dotted line shows the distribution for the subset of 33 clusters with $N\geq 10$ [*and*]{} $C_{\rm 3D}\geq 75$. Note that in Tab. 1$a$ there are 34 clusters with $C_{\rm 3D}\geq 75$ but one of these, A2933, has only 9 redshifts in the main system, whereas the total number of redshifts measured was sufficient to estimate $f_{\rm main}$ and, hence, $C_{\rm 3D}$. From Fig. 3 it appears that the sample of 128 clusters with $C_{\rm ACO} \ge 50$ has essentially uniform density, except for a possible ($\approx 2\sigma$) ‘excess’ near $z=0.06$, and an apparent ‘shortage’ of clusters in the outermost bins. The ‘excess’ is at least partly due to the fact that several of the clusters in the Horologium-Reticulum and the Pisces-Cetus superclusters are in our cluster sample (see Paper I). As we will discuss in detail in the next Section, the ‘shortage’ towards $z = 0.1$ is probably due to a combination of two effects. Firstly, some clusters that should have been found by Abell et al. to have $R_{\rm ACO} \ge 1$ and $m_{10}\leq 16.9$ were not. Secondly, near the redshift limit of our sample Galactic obscuration may have caused some clusters that do belong to the sample to be excluded from it. The subset of 80 clusters for which at least 10 redshifts are available appears to have essentially uniform density in the inner half of the volume, but a significantly lower density in the outer half. This apparent decrease is due to the fact that, for obvious reasons, the average number of [*measured*]{} redshifts decreases with increasing redshift; so much so that for $z\ga 0.08$ the fraction of clusters with less than 10 redshifts is about 40%. Finally, the density of the subset of 33 clusters with $C_{\rm 3D}\geq 75$ appears constant out to a redshift of 0.1. This is consistent with the fact that none of the selection effects that operate for the two other samples are expected to affect the richest clusters. 
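For reference, the boundaries of ten shells of equal volume out to $z = 0.1$ follow from $V \propto z^3$ in the Euclidean approximation (adequate at these low redshifts, and an assumption of this sketch): $z_i = 0.1\,(i/10)^{1/3}$.

```python
z_max, n_shells = 0.1, 10
# Equal-volume shells: the cumulative Euclidean cone volume scales as z**3,
# so the i-th shell boundary is z_max * (i / n_shells) ** (1/3).
edges = [z_max * (i / n_shells) ** (1.0 / 3.0) for i in range(n_shells + 1)]
print([round(z, 4) for z in edges[:4]])   # innermost boundaries
# Half of the volume lies within z ~ 0.079, matching the split between the
# 'near' and 'far' halves (z ~ 0.08) used in the text.
print(round(z_max * 0.5 ** (1.0 / 3.0), 3))
```

Note how strongly the shells crowd towards the outer edge: the innermost shell alone extends to $z \approx 0.046$, which is why sparse sampling at high redshift affects so many of the outer bins.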
The Sample of 128 Clusters with $R_{\rm ACO} \geq 1$ ---------------------------------------------------- In constructing our ‘complete’ sample, we have applied a limit $m_{10}\le 16.9$ for the cluster candidates without a spectroscopic redshift. This limit was chosen so that we would include essentially all clusters with $z < 0.1$. It is possible that a few clusters have been missed, but it is very difficult to estimate from first principles how many clusters with $z \le 0.1$ have been missed due to the $m_{10}$ limit, and we will not try to make a separate estimate for this effect. However, it is important to realize that the few clusters that we may have missed as a result of the $m_{10}$ limit are unlikely to have $z < 0.08$. It is possible that some clusters that should have been included were either not recognized by Abell (1958) or Abell et al. (1989), or have had their richness underestimated and have thus not made it into our sample. To a large extent, the magnitude of this effect can be estimated from a comparison with cluster catalogues based on machine scanning of plates. Below, we will describe such an estimate. As with the $m_{10}$ limit, it is likely that clusters that have been missed for this reason are mostly in the further half of the volume. Recently, two cluster catalogues that are based on galaxy catalogues obtained with machine scans of photographic plates have become available, namely the Edinburgh-Durham Cluster Catalogue (EDCC) by LNCG, and the APM cluster catalogue by Dalton et al. (1992). We now proceed to estimate, from a comparison with the EDCC, how many clusters with $R_{\rm ACO} \geq 1$ (or $C_{\rm ACO} \ge 50$) and $m_{10}\le 16.9$ may have been missed by Abell et al. (1989). In the following we will assume that the $C_{\rm ACO} \geq 50$ criterion translates into a limit of $C_{\rm EDCC} \geq 30$ in the EDCC richness count. This assumption is supported by several pieces of evidence. 
First, the shift between the distributions of richness count (see Fig. 3 in LNCG) supports this conclusion, and in particular the respective richness values at which the incompleteness sets in. Second, it is also consistent with the apparent offset in the relation between $C_{\rm ACO}$ and $C_{\rm EDCC}$ (see Fig. 6 in LNCG). The ‘offset’ between the two richness counts is probably largely due to different methods used in correcting for the field. Third, the very large spread in the relation between the two richness counts, the reason for which is not so obvious, results in about one-third of the ACO clusters with $C_{\rm ACO}\geq 50$ having a count $C_{\rm EDCC} < 30$. Conversely, about one-third of the clusters in the ACO catalogue for which LNCG obtained a count of more than 30 does not meet the $R_{\rm ACO} \geq 1$ criterion (i.e. has $C_{\rm ACO} < 50$). One can now try to estimate how many clusters with $R_{\rm ACO} \geq 1$ in our volume have been missed by ACO. Note that the complementary question, namely how many clusters in the ACO catalogue with $R_{\rm ACO} \geq 1$ and $m_{10} \leq 16.9$ do not exist according to LNCG, is not relevant for the present argument, as such ACO cluster candidates will have been shown by spectroscopy to be non-existent (there are probably one or two examples in the ENACS). As to the first question we find, from Fig. 10 in LNCG that of the clusters in the EDCC without a counterpart in the ACO catalogue, only 5 have $m_{10}(b_J) \leq 17.7$ (which corresponds to $m_{10}(V) \leq 16.9$) [*as well as*]{} a richness count $C_{\rm EDCC} \geq 30$ (which we assume to correspond to $C_{\rm ACO}\geq 50$). Note that the EDCC is at high galactic latitude (with $b \la -45\degr$), so that Galactic obscuration does not play a rôle in this comparison. Among these 5 clusters, there are two for which the richness is uncertain, but unlikely to be less than 30. 
We therefore conclude that these 5 clusters are most likely true $R\geq 1$ clusters that were missed by ACO (for whatever reason). Two of these 5 clusters have $m_{10}(b_J) \leq 17.1$ while the others have $m_{10}(b_J) > 17.3$. We assume therefore that 2 clusters with $z \la 0.08$ have been missed in the solid angle of the EDCC by Abell et al. (1989), and that the other 3 clusters missed have $0.08 \la z \leq 0.1$. As the solid angle of our sample is 5.1 times larger than that of the EDCC, we estimate that from our sample 10 $R_{\rm ACO} \geq 1$ clusters with $z \la 0.08$, and 15 $R_{\rm ACO} \geq 1$ clusters with $0.08 \la z \leq 0.1$ are missing. At first sight it might seem that these numbers should be reduced by one-third, because of the fact that only two-thirds of the $C_{\rm EDCC}\geq 30$ clusters have $C_{\rm ACO}\geq 50$. However, that would ignore the fact that among the clusters with $C_{\rm EDCC} < 30$, a certain fraction has $C_{\rm ACO}\geq 50$, of which a few are also likely to have been missed by Abell et al. (1989). On the other hand, we consider these estimates of the number of clusters missing from our sample as upper limits, for the following reason. Near the richness completeness limit of a cluster sample there is some arbitrariness in accepting and rejecting clusters due to the uncertainties in the richness estimates. Because the number of clusters increases with decreasing $C_{\rm ACO}$, it is likely that we have accepted slightly more clusters than we should have done, as a result of the noise in the $C_{\rm ACO}$ estimates. From these arguments we estimate the true number of $C_{\rm ACO}\geq 50$ clusters in the near half of the volume to be between 74 and 84, or 79 $\pm$ 5 (which then implies a total number of 158 $\pm$ 10 such clusters out to $z = 0.1$ assuming a constant space density). The 79 $\pm$ 5 $C_{\rm ACO}\geq 50$ clusters represent a space density of $8.6\,\pm\,0.6\,\times10^{-6}\,h^3$ Mpc$^{-3}$. 
This is slightly higher than most previous estimates of the density of $R_{\rm ACO}\geq 1$ clusters (e.g. by Bahcall & Soneira 1983, Postman et al. 1992, Peacock & West 1992, and Zabludoff et al. 1993, hereafter ZGHR). The difference with other work is largely due to our correction of the intrinsic incompleteness of the ACO catalogue on the basis of the comparison with the EDCC. Note that our value is quite a bit lower than that obtained by Scaramella et al. (1991) for the Southern ACO clusters. These authors found a density of $12.5\times10^{-6}\,h^3$ Mpc$^{-3}$, which does not seem to be consistent with our data. In the determination of the distribution of velocity dispersions for the $R_{\rm ACO} \geq 1$ clusters (in Section 6), we will assume that the incompleteness of the $R_{\rm ACO} \geq 1$ sample can be corrected for simply by adjusting the density of clusters by the factor 158/80 (as we have velocity dispersions for only 80 out of an implied total of 158 clusters). This means that we will assume that the incompleteness only affects the number of clusters, and that our estimate of the $\sigma_V$ distribution for $0.08 \la z \leq 0.1$ is not biased with respect to that found for $z \la 0.08$. Our assumption that the total number of $R_{\rm ACO} \geq 1$ clusters is 158 $\pm$ 10 immediately implies that we have missed between 20 and 30 clusters in the outer half of our volume. It is not easy to account for this number unambiguously from first principles. However, the number does not seem implausible. Earlier, we estimated from a comparison with the EDCC that between 10 and 20 clusters have probably been missed by Abell et al. (1989) in the outer half of the volume (for whatever reason). This leaves between about 10 and 20 clusters to be accounted for by two effects: namely the $m_{10}$ limit that we imposed in the definition of the sample, and the effects of Galactic obscuration. 
Galactic obscuration may indeed have caused some clusters at low latitudes and close to the redshift limit to have been left out of the sample. Note, however, that Peacock & West (1992) argue quite convincingly that the effects of Galactic obscuration do not operate below $z\approx 0.08$, a conclusion that seems well supported by the data in Fig. 3. For $R\geq 1$ clusters at latitudes $|b|\geq 30\degr$ and with distance class $D \leq 4$ (i.e. $m_{10}\la 16.4$) Bahcall & Soneira (1983) and Postman et al. (1992) propose a cluster selection function varying with galactic latitude as $$P(b)=10^{0.32(1-\csc|b|)}.$$ This function also seems to give an acceptable description for Southern clusters with distance classes 5 and 6, and is supposed to be largely caused by the effects of Galactic obscuration. For our sample, this would imply that we have missed about 13% of the estimated total of 158, or about 21 clusters, as a result of Galactic obscuration. In the light of the result of Peacock & West (1992), as well as the data in Fig. 3, all these missing clusters must have $z \ga 0.08$. Within the uncertainties, our interpretation of the observed redshift distribution of $R_{\rm ACO} \geq 1$ clusters thus seems to be consistent with all available information. The Sample of 33 Clusters with $C_{\rm 3D} \ge 75$ and $N \ge 10$ --------------------------------------------------------- In Section 3 we argued, on the basis of the data in Fig. 1$c$, that our cluster sample with $C_{\rm ACO} \ge 50$ is probably an acceptable substitute for a complete sample with $C_{\rm 3D} \ge 50$. The reason for this is that the two richness values scatter around the $C_{\rm ACO} = C_{\rm 3D}$-line, and the scatter (even though it is appreciable) appears quite symmetric around this line. From the same Figure it is also clear that it is not practically feasible to construct a sample complete down to $C_{\rm 3D} = 50$ on the basis of the ACO catalogue. 
That would require the ACO catalogue to be complete down to $C_{\rm ACO} \approx 20$, given the width of the $f_{\rm main}$ distribution. However, the data in Fig. 1$c$ also show that from our $C_{\rm ACO} \ge 50$ complete sample it is possible to construct a subsample that is complete with respect to intrinsic richness for $C_{\rm 3D} \ge 75$. In Tab. 1$a$ there are 34 clusters with $C_{\rm 3D} \ge 75$, for one of which no velocity dispersion could be determined. In addition, there are 33 clusters in Tab. 1$a$ that have $C_{\rm ACO} + C_{\rm bck}>75.0$ but for which $f_{\rm main}$ is not available. Using the distribution of $f_{\rm main}$ given in Fig. 1$a$, we estimate that 10.2 of these would turn out to have $C_{\rm 3D} \ge 75$ if we measured their $f_{\rm main}$. This brings the estimated total number of clusters with $C_{\rm 3D} \ge 75$ in Tab. 1$a$ to 44.2. Finally, one must add an estimated contribution to this sample of $C_{\rm 3D}\ga 75$ clusters that have probably been missed by Abell et al. (1989). As before, the comparison between ACO and EDCC allows us to estimate this contribution. In principle, one would want to estimate the number of clusters missed with $C_{\rm 3D} \ge 75$. As the richnesses of 2 of the 5 clusters missed by ACO in the solid angle of the EDCC are uncertain, this cannot be done. Therefore, we will assume that the distribution over richness of the 5 clusters missed is the same as that of all clusters in our sample. Hence, as 44.2 of the 128 clusters with $C_{\rm ACO} \ge 50$ have $C_{\rm 3D} \ge 75$, we estimate that 1.8 of the 5 missing clusters have $C_{\rm 3D} \ge 75$. Taking into account the ratio of the solid angles (see Section 4.1) this implies that 9.2 clusters with $C_{\rm 3D} \ge 75$ have been missed by ACO in our ‘cone’ volume. This brings the estimated total number of such clusters in the ‘cone’ to $53.4 \pm 5$, which represents a density of $(2.9 \pm 0.3) \times10^{-6}\,h^3$ Mpc$^{-3}$. 
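The bookkeeping behind this estimate is simple enough to spell out (a sketch reproducing the numbers quoted above; the solid-angle ratio of 5.1 and the ‘cone’ volume of $1.8\times10^7\,h^{-3}$ Mpc$^3$ are taken from elsewhere in the paper):

```python
# Estimated number and space density of C_3D >= 75 clusters (a sketch).
n_catalogued = 34 + 10.2      # measured, plus expectation from f_main
missed_edcc = 1.8             # of the 5 EDCC clusters missed by ACO
omega_ratio = 5.1             # ENACS / EDCC solid-angle ratio (Sect. 4.1)
n_missed = missed_edcc * omega_ratio  # ~9.2 missed in the 'cone'
n_total = n_catalogued + n_missed     # ~53.4
density = n_total / 1.8e7             # h^3 Mpc^-3
print(n_total, density)  # ~53.4 and ~3e-6, consistent with 2.9 +/- 0.3
```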
Some Remarks on the Quality of the ACO Catalogue ------------------------------------------------ Since serious doubts have been raised over the completeness and reliability of the ACO catalogue, it may be useful to summarize here our findings about its quality. As was shown in Paper I, the redshift data from the ENACS show that almost all $R_{\rm ACO} \geq 1$ cluster candidates with $m_{10}(V) \leq 16.9$ and $b\leq -30\degr$ correspond to real systems that are compact in redshift space. In only about 10% of the cases an $R_{\rm ACO} \geq 1$ cluster candidate appears to be the result of a superposition of two almost equally rich (but relatively poorer) systems. Comparison between the EDCC and ACO catalogues shows that at most $\approx$15% (i.e. $25/158$) of the $C_{\rm EDCC} \geq 30$ clusters (which are expected to have $C_{\rm ACO} \geq 50$) with $m_{10}(V) \leq 16.9$ in the EDCC do not appear in the ACO catalogue. From this, one can conclude that [*the ACO catalogue is at least 85% complete*]{} for $R_{\rm ACO} \ge 1$ clusters out to a redshift $z \approx 0.1$ (see also Briel & Henry 1993). Out to $z = 0.08$ the completeness is even higher, viz. 94%. If one takes into account the effects of Galactic obscuration the overall completeness of the ACO catalogue for $|b|\geq 30\degr$ apparently decreases to about 80% (viz. $128/158$). On average, about three quarters of the galaxies in the direction of $R_{\rm ACO} \geq 1$ clusters with $z\la 0.1$ are in the main system, i.e. the effect of fore- and background contamination is substantial. However, if one takes into account the effect of field contamination by deriving $C_{\rm 3D}$, the intrinsic 3-D richness of the clusters, it appears that the field-corrected ACO richness is statistically equivalent to the intrinsic richness. This means that one can use a complete sample of clusters with $C_{\rm ACO} \ge 50$ to investigate the statistical properties of a sample of clusters complete down to $C_{\rm 3D} \approx 50$. 
The relation between $C_{\rm ACO}$ and $C_{\rm 3D}$ however shows a large spread; as a result it is not possible to select from the $C_{\rm ACO} \ge 50$ cluster sample a subsample that is complete with respect to $C_{\rm 3D}$ for values of $C_{\rm 3D}$ less than about 75. We have thus demonstrated that our sample of 128 clusters in the ACO catalogue with $C_{\rm ACO} \ge 50$ and $z\le 0.1$ can be used as a statistical sample for the study of the properties of clusters of galaxies. The subsample of 33 clusters with $C_{\rm 3D} \ge 75$ and $N \ge 10$ is truly complete with respect to $C_{\rm 3D}$ and can therefore be used to check the results from the larger sample. The Estimation of the Velocity Dispersions ========================================== For a determination of the distribution of cluster velocity dispersions, one must address several points. First, it is very important that the individual estimates of the global velocity dispersions are as unbiased as possible, as any bias may systematically alter the shape and amplitude of the distribution. For example, when we identified the systems in velocity space using a fixed velocity gap, we did not discuss the plausibility of membership of individual galaxies. Before calculating the global velocity dispersion we must take special care to remove fore- and background galaxies that cannot be members of the system on physical grounds. Leaving such non-members in the system will in general lead to an overestimation of the global velocity dispersion. Secondly, it has been shown that the velocity dispersion may vary with position in the cluster, so that the global velocity dispersion can depend on the size of the aperture within which it is calculated. Finally, radial velocities are generally measured only for a bright subset of the galaxy population. If luminosity segregation is present this will generally cause the velocity dispersion to be underestimated. 
The Removal of Interloper Galaxies ---------------------------------- It is well-known that in determining velocity dispersions one has to be very careful not to overestimate $\sigma_V$ as a result of the presence of non-members or ‘outliers’. Recently, den Hartog & Katgert (1995) have shown that velocity dispersions will be overestimated due to the presence of ‘interlopers’, i.e. due to galaxies that have ‘survived’ the 1000 km/s fixed-gap criterion for membership, but that are nevertheless unlikely to be members of the cluster. For the removal of such interlopers, these authors developed an iterative procedure that employs the combined positional and velocity information to identify galaxies that are probably not cluster members. The procedure starts by estimating a mass profile from an application of the virial theorem to galaxies in concentric (cylindrical) cross-sections through the cluster with varying radii. Subsequently, for each individual galaxy one investigates whether the observed line-of-sight velocity is consistent with the galaxy being on a radial orbit with a velocity less than the escape velocity, or on a bound circular orbit. If the observed velocity is inconsistent with either of these extreme assumptions about the shape of the orbit, the galaxy is flagged as an interloper (i.e. a non-member), and not used in the computation of the mass profile in the next iteration step. This procedure is repeated until the number of member galaxies becomes stable, which usually happens after only a few iteration steps. In order to ensure that the definition of an interloper and the value of the cluster velocity dispersion are independent of the redshift of the cluster, it is necessary to convert the velocities of galaxies with respect to the cluster to the rest frame of the cluster (e.g. Danese et al. 1980). 
Because the elimination of interlopers changes the estimated average cluster redshift (but only slightly) this correction is applied to the original data in each iteration step. The procedure has been tested on the set of 75 model clusters presented by Van Kampen (1994). This set of model clusters is designed to mimic a sample complete with respect to total mass in a volume that is comparable to that of our $z \le 0.1$ sample. The initial conditions were generated for an $\Omega=1$ CDM scenario. Individual cluster models have reasonable mass resolution and contain dark matter particles as well as soft galaxy particles that are formed according to a prescription that involves percolation and a virial condition. A typical simulation has a volume of about $(30\,h^{-1}\,{\rm Mpc})^3$ and contains $O(10^5)$ particles. In these models the membership of galaxies follows unequivocally from the position with respect to the turn-around radius. It appears that the interloper removal works very well: in the central region (i.e. within the Abell radius) 90% of the non-members are indeed removed, and those that are not removed have a velocity dispersion that is essentially equal to that of the member galaxies. In the same region only 0.4% of the cluster members are inadvertently removed. Because the procedure by which we removed interlopers requires a reliable position for the cluster centre and a reasonably well-determined mass profile, we have applied it only to the 28 clusters for which at least 50 redshifts are available. This means that the density of systems with the largest velocity dispersions may still be somewhat overestimated, as some of the largest dispersions are for systems with fewer than 50 redshifts. As a result the dispersion distribution could in reality fall off slightly more steeply at the high end than it appears to do. 
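The logic of the iteration described above can be sketched in a few lines. This is a deliberately crude caricature: the real procedure builds a mass profile from concentric cross-sections and tests both the radial escape-orbit and the bound circular-orbit hypotheses, whereas here a single global virial mass estimate, $M \sim 3\sigma^2 R/G$, and an escape-velocity cut stand in for both.

```python
import math

G = 4.30e-6  # gravitational constant, kpc (km/s)^2 / M_sun

def remove_interlopers(r_kpc, v_los, max_iter=20):
    """Iteratively drop galaxies whose rest-frame velocity offset
    exceeds the escape velocity implied by a crude virial mass."""
    members = list(range(len(v_los)))
    for _ in range(max_iter):
        if len(members) < 3:
            break
        vs = [v_los[i] for i in members]
        mean_v = sum(vs) / len(vs)
        sigma2 = sum((v - mean_v) ** 2 for v in vs) / (len(vs) - 1)
        r_max = max(r_kpc[i] for i in members)
        mass = 3.0 * sigma2 * r_max / G        # M ~ 3 sigma^2 R / G
        keep = [i for i in members
                if abs(v_los[i] - mean_v)
                < math.sqrt(2.0 * G * mass / max(r_kpc[i], 1.0))]
        if keep == members:   # membership stable: converged
            break
        members = keep
    return members
```

The key property is the same as in the full method: convergence is defined by a stable membership list rather than by a fixed velocity gap.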
However, for the clusters with fewer than 50 redshifts we have used the robust biweight estimator for the velocity dispersion (see Beers et al. 1990), so that the influence of unremoved interlopers in the tails of the velocity distribution is strongly reduced. We have used as many redshifts as possible for each cluster. In 5 cases (i.e. for A0119, A0151, A2717, A3128, A3667) we have combined existing data from the literature with the new redshifts obtained in the ENACS. Before combining the two sets of data we have investigated the consistency of the two redshift scales. The comparison is made for galaxies whose positions in the two surveys agree to within 20 arcsec. The redshift scales generally agree to within the uncertainties (see also Tab. 2 in Paper I). In Fig. 4 we show, for the 28 systems with at least 50 members, the decrease of the global velocity dispersion as a function of the value of the dispersion before the interlopers were removed. For dispersions below about 900 km/s the reduction is at most about 10%. It is clear that the decrease can be much larger for the largest dispersions, with reductions of as much as 25 to 30%. The point near the upper right-hand corner refers to A151, before and after the separation of the 2 low-redshift systems. In two clusters, A85 and A3151, the interloper removal failed to delete a group of interlopers. In the wedge diagrams the two groups were clearly compact in velocity and spatial extent, with a velocity offset of $\approx 2000$ km/s with respect to the main system. We have removed these groups by hand. Our analysis indicates that systems with global velocity dispersions larger than 1200 km/s have such a low space density (if they exist at all) that they do not occur in our volume. This would seem to be at variance with the result of ZGHR, who find one cluster, A2152, with a dispersion of 1346 km/s in a sample of 25 clusters, in a volume that is about a factor of 5 smaller than ours. 
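The robust estimator mentioned at the start of this paragraph, the biweight scale of Beers et al. (1990), can be sketched as follows (the tuning constant $c = 9$ is the conventional choice; galaxies further than $c$ times the median absolute deviation from the median receive zero weight):

```python
import math
import statistics

def biweight_scale(v, c=9.0):
    """Biweight scale estimator S_BI (Beers, Flynn & Gebhardt 1990).
    Points with |x - median| >= c * MAD receive zero weight."""
    m = statistics.median(v)
    mad = statistics.median([abs(x - m) for x in v])
    u = [(x - m) / (c * mad) for x in v]
    num = sum((x - m) ** 2 * (1.0 - ui ** 2) ** 4
              for x, ui in zip(v, u) if abs(ui) < 1.0)
    den = sum((1.0 - ui ** 2) * (1.0 - 5.0 * ui ** 2)
              for ui in u if abs(ui) < 1.0)
    return math.sqrt(len(v) * num) / abs(den)
```

Because extreme tail members get zero weight, a handful of unremoved interlopers barely perturbs the estimate, in contrast with the classical standard deviation.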
However, it must be said that, had the cluster A2152 appeared in our sample (with the same number of redshifts that ZGHR had available, viz. 21), we would probably also have found a high velocity dispersion. However, had 50 redshifts been available, we would almost certainly have eliminated quite a few interlopers, and would thereby probably have reduced the dispersion substantially. Therefore, a distribution of velocity dispersions becomes less biased towards high dispersion values, when the average number of redshifts per cluster increases. The absence of systems with very large velocity dispersions in our sample is due to the large fraction of systems for which we could eliminate non-members using a physical criterion. As we have discussed extensively in Paper I, it is very unlikely that the absence of very large velocity dispersions in our sample is due to the method by which we defined the systems in the first place. If we had used the method of ZGHR to define the systems (see Tab. 6 in Paper I for detailed information) one or two systems which we have broken up would have remained single. However, when we identify our clusters with the ZGHR method and subsequently remove the interlopers, we find essentially the same clusters with the same velocity dispersions. It thus appears that quite a few of the galaxies in the clusters for which ZGHR find a very large velocity dispersion are unlikely to be members of the system. The Effects of Aperture Variations and Luminosity Segregation ------------------------------------------------------------- Another possible bias in estimating velocity dispersions is due to the fact that the velocity dispersion frequently varies significantly with distance from the cluster centre (see e.g. den Hartog and Katgert, 1995). Differences in the physical size of the aperture within which the velocities are measured are inevitable for a sample of clusters with redshifts between 0.02 and 0.1. 
However, the sizes of the apertures used in the observations have a dispersion of only about 30% around the average value of about 0.9 $h^{-1}$ Mpc. On the basis of the velocity dispersion profiles discussed by den Hartog and Katgert (ibid.) and in agreement with Girardi et al. (1993) we estimate that corrections for variations in the aperture are in practice at most about 10% (and can be both positive and negative). They are thus substantially smaller than the largest corrections applied to the dispersions due to interloper removal (which are exclusively negative). We have decided not to attempt to apply corrections for aperture variations based on an average velocity dispersion profile, as this would only introduce noise and is not expected to change the results in a systematic way. den Hartog and Katgert (ibid.) found signs of luminosity segregation in approximately 20% of the clusters in their sample. Hence, it is necessary to check that our velocity dispersions are not biased in a systematic way as a result of the fact that for many clusters we have sampled only the brightest galaxies in the central regions. Luminosity segregation is a manifestation of the physical processes of mass segregation (heavy galaxies move more slowly and their distribution is more centrally concentrated than that of light galaxies) and velocity bias (the velocity dispersion of the galaxies is lower than that of the dark matter particles). The reality of mass segregation and velocity bias is still a matter of dispute (see e.g. Carlberg 1994, versus e.g. Katz et al. 1992, Biviano et al. 1992, Lubin & Bahcall 1993, and Van Kampen 1995). Moreover, significant mass segregation may not be readily observable if it is accompanied by significant variations in the $M/L$-ratio. We have tested for the presence of luminosity segregation in our cluster sample, and will discuss the results in a forthcoming paper. 
It appears that luminosity segregation exists, but that it is exclusively linked to the very brightest cluster galaxies, which appear to move very much slower than the other galaxies. As our velocity dispersions are based on at least 10 (but often many more) redshifts, the effect of luminosity segregation is completely negligible in the context of the present discussion. The Distribution of Velocity Dispersions ======================================== By virtue of the significant (but very broad) correlation between richness and velocity dispersion, the largest velocity dispersions are in general found in the systems with the highest richnesses, whereas the low dispersions are found preferentially in the poorer systems. As a result, any sample of clusters presently available is biased against low velocity dispersions, because of the lower limit in richness that defines the sample. This means that any observed (and predicted!) distribution of cluster velocity dispersions refers to a specific richness limit. As a matter of fact, the distribution will in principle be biased for velocity dispersions smaller than the largest value found at the richness limit; above the latter value the distribution is unbiased. Our estimates of the apparent distributions of $\sigma_V$ are shown in Figs. 5$a$ and $b$. They refer to the two subsets of clusters: viz. the complete sample of 80 clusters with $z\leq 0.1$ and $C_{\rm ACO}\geq 50$ and the complete subsample of 33 clusters with $C_{\rm 3D}\geq 75$. As we discussed before, the latter is truly complete with respect to intrinsic 3-D richness (i.e. fore- and background contamination has been taken into account) and the completeness with respect to redshift is beyond suspicion. We also found that the sample of 80 $C_{\rm ACO}\geq 50$ clusters can be used as a substitute for a $C_{\rm 3D} \ge 50$ sample. 
However, as it is somewhat incomplete near the redshift limit we can derive an unbiased estimate of the $\sigma_V$ distribution only if there is no systematic change of $\sigma_V$ with redshift in our sample. In Fig. 6 we show that there is indeed no evidence for a significant correlation of velocity dispersion with redshift in our sample. The lack of systems with $\sigma_V \ga$ 900 km/s below a redshift of about 0.05 is not considered significant in view of the small volume sampled. It is also encouraging that there is no clear bias against systems with low values of $\sigma_V$ between $z=0.08$ and $z=0.1$. Therefore, the data in Fig. 6 indicate that the result for the sample of 80 systems with $z\leq 0.1$ and with $C_{\rm ACO}\geq 50$ should be as reliable as that for the subsample of 33 $C_{\rm 3D}\geq 75$ systems, with the advantage of larger statistical weight. In Fig. 7 we show the cumulative distribution of $\sigma_V$ for the sample of 80 clusters with $C_{\rm ACO} \ge 50$ (solid line) and for the subsample of 33 clusters with $C_{\rm 3D} \ge 75$ (dashed line). Note that the densities follow directly from the space densities that we derived in Sections 4.1 and 4.2, and have thus not been scaled to some external cluster density (as is sometimes done in the literature). For comparison, and to illustrate the effect of the interloper removal, we also show the cumulative distribution of $\sigma_V$ for the 80-cluster sample when no interlopers are removed, i.e. with $\sigma_V$ for the clusters with 50 redshifts or more determined only with the robust biweight estimator (dotted line). It is clear that without interloper removal the distribution of $\sigma_V$ is significantly biased for $\sigma_V \ga 800$ km/s. The (interloper-corrected) distributions for the two samples agree very well for $\sigma_V \ga 900$ km/s. This is not surprising because, as we will see below, there are hardly any clusters with $\sigma_V \ge 900$ km/s that have $C_{\rm 3D} < 75$. 
The good agreement therefore just shows that the ratio of the space densities derived in Sections 4.1 and 4.2 is quite good. The two distributions also illustrate the bias against low values of $\sigma_V$. The value of $\sigma_V$ at which the bias sets in and the magnitude of the bias are seen to depend on the richness completeness limit of the sample in a way that is consistent with the discussion at the beginning of this Section. For the sample with $C_{\rm 3D} \ge 75$ the incompleteness starts at $\sigma_V \approx 900$ km/s, while for the sample with $C_{\rm ACO} \ge 50$ it starts at $\sigma_V \approx 800$ km/s. For $\sigma_V\geq 800$ km/s the cumulative distribution for the sample of 80 clusters can be parametrized as follows: $$\log n(>\sigma_V) = -5.6 -0.0036 (\sigma_V - 800 \mbox{\,km/s}) \quad\mbox{$h^3$ Mpc$^{-3}$}$$ For $\sigma_V < 800$ km/s the same distribution also seems to be described fairly accurately by a power law, but the significance of that fit is much less apparent because of the bias that is likely to increase with decreasing $\sigma_V$. ZGHR have tried to correct the bias against low-velocity dispersion systems by combining clusters and dense groups. Indeed, it appears that continuation of the above power law fit down to $\sigma_V$ = 700 km/s would predict, within the errors, the correct density $n(>700\,\mbox{km/s})$ for the combination of clusters and dense groups. However, as the definition and selection of dense groups is different from that of rich clusters, it is not unlikely that the intrinsic properties of the groups, such as $\sigma_V$, as well as their spatial density may differ systematically from that of the clusters. Also, for the dense groups a similar bias operates as for the clusters. Combination of the two $\sigma_V$ distributions is therefore not without problems. 
It is of some interest to have a closer look at the values of $\sigma_V$ below which the two distributions are biased as a result of the lower limits in $C_{\rm ACO}$ and $C_{\rm 3D}$ that define the samples. These values of $\sigma_V$ are the maximum values found near the cut-off in richness, and they can be estimated from Fig. 8, in which we show several distributions of $\sigma_V$ against richness. From the distribution of $\sigma_V$ against $C_{\rm ACO}$, shown in Fig. 8$a$, it appears that the maximum $\sigma_V$ near the richness limit $C_{\rm ACO}=50$ is about 800 km/s. In Fig. 8$b$ we show $\sigma_V$ vs. richness for the subset of ENACS clusters for which either LNCG (squares) or Dalton (priv. comm.; diamonds) give an alternative, machine-based estimate of the richness. In Fig. 8$b$ the ordinate is $C_{\rm EDCC}+20$ rather than $C_{\rm EDCC}$, because there seems to be a systematic offset between $C_{\rm ACO}$ and $C_{\rm EDCC}$ of about 20 (see Section 4.1). It is clear that for a richness limit of 50 in the machine-based counts, the bias is again absent only for $\sigma_V \ga 800$ km/s. In Fig. 8$c$ we show velocity dispersion vs. richness count $C_{\rm APM}$ from the APM cluster catalogue, for the 37 clusters in the APM catalogue whose positions coincide with those of clusters in our sample to within half an Abell radius. Note that Dalton et al. (1994) have calculated the richness inside half an Abell radius and within a variable magnitude interval based on the luminosity function in the region of the cluster, in order to be less sensitive to interlopers. As a result $C_{\rm APM}$ is systematically lower than $C_{\rm ACO}$, and the richness limit $C_{\rm ACO}=50$ corresponds to $C_{\rm APM}=35$ (Efstathiou et al. 1992a). From Fig. 8$c$ we conclude that the bias against low velocity dispersions sets in at $\sigma_V \approx 800$ km/s. From the fact that the three left-hand panels in Fig. 
8 are qualitatively very similar we conclude that the large spread in Fig. 8$a$ is not primarily due to errors in the values of $C_{\rm ACO}$, as a similar spread is seen for the two other catalogues. Therefore, we conclude that the large spread in velocity dispersion for a fixed value of 2-D richness is probably (at least partially) intrinsic to the clusters. In Fig. 8$d$ we show the relation between $\sigma_V$ and $C_{\rm 3D}$, with the latter based on $C_{\rm ACO}$. It appears that the relation is less broad than that in Fig. 8$a$. Apparently, the correction for superposition effects (which we could only apply thanks to the redshift information) results in a fairly significant decrease of the spread in the relation. In Fig. 8$d$ the existence of an upper limit to the velocity dispersion of $\approx$900 km/s at the richness limit of 75 is very clearly illustrated. The spread in the relation between $\sigma_V$ and $C_{\rm 3D}$ in Fig. 8$d$ is probably not primarily due to errors in the values $C_{\rm ACO}$. This is supported by the data in Figs. 8$e$ and 8$f$, where we show the relation between $\sigma_V$ and machine-based counts that have been corrected for superposition effects. We conclude therefore that the scatter between $\sigma_V$ and richness (in whichever way it is measured) must largely be intrinsic. In other words: a given velocity dispersion may be found in clusters of quite different richnesses, while clusters of a given richness span a large range of velocity dispersion. Discussion ========== In the following we will make two types of comparison of the results obtained here with earlier results. First we will compare with other determinations of the cumulative distribution of cluster velocity dispersions $n(>\sigma_V)$, as well as with the distributions of cluster X-ray temperatures $n(>T_X)$. Subsequently, we will discuss the relation between our result and some model predictions for $n(>\sigma_V)$ from the literature. 
Comparison with Other Data -------------------------- There are several other determinations of $n(>\sigma_V)$ in the literature. Recent papers on the subject are e.g. those by Girardi et al. (1993), and by ZGHR. The result of Girardi et al. (1993) is based on a compilation of redshifts for cluster galaxies. As a result, the amplitude of $n(>\sigma_V)$ is not known in absolute terms, but has been inferred from the integrated fraction of clusters together with an external estimate of the total density of rich clusters. Collins et al. (1995) also present a distribution of $\sigma_V$ that is not normalized. By contrast, ZGHR present, as we do, an estimate of $n(>\sigma_V)$ with a calibrated space density. A comparison with the results of Girardi et al. (1993) and ZGHR is given in Fig. 9$a$, where the result of Girardi et al. (1993) has been scaled to the density of rich clusters derived in Section 4.1, rather than that given by Bahcall and Soneira (1983). Although the previous estimates of $n(>\sigma_V)$ involved clipping of ‘outliers’, none employed the removal of interlopers as described in Section 5.1, and therefore it is not too surprising that for $\sigma_V \ga 900$ km/s our result is systematically lower than the other two. Girardi et al. (1993) obtain a similar slope but a (perhaps not very certain) amplitude that is at least two times higher than ours. We do not show the result of Collins et al. separately as it appears to agree with that of Girardi et al. On the other hand, the result of ZGHR agrees very nicely with ours for $\sigma_V \la 900$ km/s, but for larger values of $\sigma_V$ they obtain a slope that is definitely less steep than ours. Our upper limit on the occurrence of clusters with $\sigma_V \ga$ 1200 km/s is much more severe than any previous result based on optical data: such clusters, if they exist at all, must have a space density of less than one per survey volume of $1.8\times10^7\,h^{-3} {\rm Mpc}^3$. 
As we discussed above, this is almost entirely due to our removal from the redshift data of those interlopers that can only be recognized on the basis of the combination of radial velocity [*and*]{} projected position within the cluster. In Fig. 9$b$ we compare our result with distributions of the cluster X-ray temperature $T_X$ by Henry & Arnaud (1991) and by Edge et al. (1990). In transforming the $T_X$ scale into a $\sigma_V$ scale we assumed that $\sigma_V^2 = (kT_X/ \mu m_H)$, where $\mu$ and $m_H$ have their usual meaning. The reason for the discrepancy between the two X-ray results is not known. ZGHR have suggested that the discrepancy is due to differences in normalization caused by different fitting procedures, sample size and sample completeness. The agreement between our result and that of Henry & Arnaud (1991) is excellent for $\sigma_V \ga 800$ km/s. Both the amplitude and the slope agree very well, and to us this suggests that the removal of interlopers is necessary, and that our removal procedure is adequate. It also suggests that the velocity dispersions in excess of 1200 km/s, found by others, must indeed almost all be overestimates caused by interlopers. Interestingly, the two results start to diverge below $\approx$800 km/s. Although one cannot claim that $n(>T_X)$ is very well determined in that range there is at least no contradiction with the conclusion that we reached in Section 6, namely that our $n(>\sigma_V)$ must start to become underestimated below $\approx$800 km/s as a result of the richness limit of our cluster sample. The extremely good agreement between our $n(>\sigma_V)$ and the $n(>T_X)$ by Henry & Arnaud (1991) for $\sigma_V \ga$ 800 km/s, for an assumed value of $\beta = \sigma_V^2 /(kT_X / \mu m_H) = 1$ strongly suggests that X-ray temperatures and velocity dispersions statistically measure the same cluster property. This is in agreement with earlier results of Lubin & Bahcall (1993), Gerbal et al. 
(1994), and den Hartog & Katgert (1995) who also find that it is not necessary that the ratio of ‘dynamical’ and X-ray temperatures differs from 1.0. Of course, in our case this statement only refers to the [*sample*]{} of clusters, and it has not been proven to be valid for individual clusters. On the basis of the data in Fig. 9$b$ we conclude that the [*average*]{} value of $\beta$ must lie between 0.7 (the value required if the upper range of our $n(>\sigma_V)$ determinations must coincide with the result of Edge et al. 1990), and 1.1 (the value required if the lower range of our data must coincide with the result of Henry & Arnaud 1991). Requirements for Useful Comparison with Models ---------------------------------------------- There are quite a few papers in the literature in which model calculations of clusters of galaxies are presented from which one can, in principle, derive model predictions of $n(>\sigma_V)$. These models are generally of two kinds. First, there are numerical (or analytical) models of a sufficiently large cosmological volume, containing a sample of clusters, each of which is modeled with relatively low resolution. In this case one can obtain a direct estimate of $n(>\sigma_V)$, the normalization of which is unambiguous. A good example of this type of model was described by FWED. Secondly, sets of higher-resolution cluster simulations may be created for which the global properties are distributed as predicted for an arbitrarily chosen, large cosmological volume. In this case, the normalization of $n(>\sigma_V)$ depends on the details of the selection of the set of cluster models. An example of the latter has been described by Van Kampen (1994). Of course, in both cases, the resulting predictions are valid only for the chosen scenario of large-scale structure formation. 
We will limit ourselves here to a brief discussion of various aspects of the comparison between observations and models, and demonstrate the use of our result in a comparison with the models of FWED and Van Kampen (1994). A meaningful comparison between observations and models requires that one derives from the models a prediction of exactly the same quantity as one has observed. As we discussed above, our $\sigma_V$ estimates refer to a [*cylinder*]{} with an average radius of 1.0 [ Mpc]{} and a depth of (about) twice the turn-around radius of the cluster. From their models, FWED have calculated the line-of-sight velocity dispersion within a [*sphere*]{} with radius equal to the Abell radius. As the latter excludes the mostly slowly moving galaxies that are near the turn-around radius, the $\sigma_V$ values in a sphere are expected to be systematically higher than in a cylinder with the same radius, by as much as 10%. On the other hand, the value of $\sigma_V$ also depends on the radius of the cylinder or sphere. On average, $\sigma_V$ is expected to decrease with increasing radius of the cylinder because, on average, the velocity dispersion tends to decrease with distance from the cluster centre. In a comparison between our data and the model prediction of FWED (who use a sphere with radius 1.5 [ Mpc]{} as compared to our cylinder with radius 1.0 [ Mpc]{}) we will assume that the two effects compensate almost exactly. We conclude this from a direct comparison between the two quantities based on the models of Van Kampen (1995). As the models by FWED and Van Kampen use the same $\Omega=1$ CDM formation scenario, we assume that this conclusion is also valid for the FWED models. The values of the global $\sigma_V$ may depend fairly strongly on the details of the integration scheme in the N-body simulations. 
For instance, the models of FWED do not have much resolution on the scale of galaxies, since huge volumes (of the order of the volume of the ENACS) had to be simulated with $O(2\cdot10^5)$ particles. As a result, the scale-length for force softening is well over 100 kpc. Van Kampen (1995) has studied the effect of the softening scale-length on $\sigma_V$ and finds that for the FWED scale-length the velocity dispersions are 15–20% smaller than for a softening-length of 20 kpc. Another aspect of the comparison is the identification of the clusters in (particularly) the large-scale simulations. FWED identified ‘galaxies’, at the end of the simulation, as peaks in the density field, without altering the dynamical properties of the constituent dark particles. First, it is not clear whether galaxies form solely or preferentially from peaks in the initial density field (see e.g. Van de Weygaert & Babul 1994, and Katz et al. 1994). Secondly, Van Kampen (1995) found that the spatial distribution of the ‘galaxies’ in his models can differ substantially from that of the dark matter. The galaxy identification ‘recipe’ can thus influence the definition of the clusters and of the cluster sample, as a cluster is identified through the number of galaxies inside an Abell radius. Finally, it is possible that in clusters the velocity dispersion of the galaxies is 10 $-$ 20% lower than that of the dark matter, as a result of velocity bias (see e.g. Carlberg 1994 and Summers 1993). The reality of velocity bias is still controversial (see e.g. Katz et al. 1992, Lubin & Bahcall 1993 and Van Kampen 1995), and one must be careful to derive the velocity dispersion of the [*galaxies*]{} from the models. Comparison with Selected Model Predictions ------------------------------------------ In Fig. 10 we compare our estimate of $n(>\sigma_V)$ to the model predictions from FWED and Van Kampen (1994), which both assume an $\Omega=1$ CDM formation scenario. In Fig. 
10$a$ we compare our result with the predictions by FWED, who identified the $R\geq 1$ clusters in their models as groups of dark and luminous particles for which the luminosity inside a sphere with Abell radius exceeds 42 $L^*$. We corrected the FWED velocity dispersions for the effects of the fairly large softening parameter by multiplying them by a factor of 1.18. We assumed that the differences related to the use of spherical and cylindrical volumes, as well as different sizes of the aperture, compensate. For a suitable choice of the bias parameter the observations and predictions can be made to agree fairly well, although one could argue that for $\sigma_V \ga$ 800 km/s the slope of the observed $n(>\sigma_V)$ is steeper than that of the predicted $n(>\sigma_V)$ for any bias parameter in the range from 2.0 to 2.5. This may be (partly) due to the fact that FWED convolved their result with assumed errors in $\sigma_V$ of about 20 %, which is probably a factor of two larger than the errors in our $\sigma_V$ estimates for $\sigma_V\ga 800$ km/s. In Fig. 10$b$ we make the comparison with the predictions by Van Kampen (1994), who applied a $C_{\rm 3D}$ lower limit for identifying the clusters to be included in the comparison. We have scaled his results to the density of rich clusters derived in Section 4.1. For his models we show the distributions of $\sigma_V$ for the galaxies, but for $b = 2.2$ we also show the distribution for the dark matter; it is clear that the galaxies and dark matter give essentially the same $n(>\sigma_V)$. FWED and Van Kampen (1994) seem to predict different amplitudes of $n(>\sigma_V)$ for the same values of the bias parameter. We do not consider this the proper place to investigate possible explanations for the difference. Suffice it to say that the difference between the cluster identification schemes may well be one of the causes. 
The observations and predictions can be made to agree fairly well, although one could again argue that for $\sigma_V \ga$ 800 km/s the slope of the observed $n(>\sigma_V)$ is significantly steeper than that of the predicted $n(>\sigma_V)$. From both comparisons we see that for the standard $\Omega=1$ CDM model a large bias parameter is indicated (between 2.0 and 2.5 for the FWED models and between 2.4 and 2.8 for the models by Van Kampen 1994). For the commonly accepted low value of the bias parameter of about 1.0, the models clearly predict too many clusters with large velocity dispersions. Also, the relative proportions of high- and low-$\sigma_V$ clusters do not seem to be right. Our result confirms the conclusions by FWED and White et al. (1993) that the distributions of the velocity dispersions or masses of rich clusters do not support $\Omega$ = 1 CDM models with low values of the bias parameter. The high values of the bias parameter that one infers from the comparisons in Fig. 10 are in conflict with the results for the normalization of the $\Omega=1$ CDM models on larger scales, from comparisons with e.g. the COBE data (Wright et al. 1992, Efstathiou et al. 1992b), the power spectrum analysis of the QDOT survey (Feldman et al. 1994) and the recent analyses of large-scale streaming (Seljak & Bertschinger 1994). The important conclusion is therefore that, for $\sigma_V \ga 800$ km/s, our observed distribution $n(>\sigma_V)$ provides a very powerful constraint for cosmological scenarios of structure formation. It will not be too long before detailed predictions based on the currently fashionable (or other) alternative scenarios (be it low-density, tilted-spectrum, vacuum-dominated or neutrino-enriched CDM) can be compared, in a proper way, to the observational constraints. 
Even though it is worthwhile to try and obtain unbiased estimates of $n(>\sigma_V)$ for $\sigma_v \la $800 km/s, it would seem that the high$-\sigma_V$ tail of the distribution has the largest discriminating power. Summary and Conclusions ======================= We have obtained a statistically reliable distribution of velocity dispersions which, for $\sigma_V \ga 800$ km/s, is free from biases and systematic errors, while below 800 km/s it is biased against low values of $\sigma_V$ in a way that is dictated by the richness limit of our sample, viz. $C_{\rm ACO} \geq 50$. The observed distribution $n(>\sigma_V)$ offers a reliable constraint for cosmological scenarios, provided model predictions are based on line-of-sight velocity dispersions for all galaxies inside the turn-around radius and inside a projected aperture of 1.0 [ Mpc]{}, and provided the clusters are selected according to a richness limit that mimics the limit that defines the observed cluster sample. The sample of ACO clusters with $|b|>30\degr$, $C_{\rm ACO}\geq 50$ and $z\leq 0.1$ is $\approx$85% complete. We find that the density of clusters with an apparent richness $C_{\rm ACO} \geq 50$ is $8.6 \pm 0.6 \times 10^{-6}\,h^3$ Mpc$^{-3}$, which is slightly higher than earlier determinations (e.g. by Bahcall & Soneira 1983, Peacock and West 1992, and ZGHR). We show that one can define a complete subsample of the $C_{\rm ACO} \geq 50$ sample that contains all clusters with an intrinsic 3-D richness $C_{\rm 3D} \ge 75$; the density of the latter is $2.9 \pm 0.3 \times10^{-6}\,h^3$ Mpc$^{-3}$. We find that cluster richness is a bad predictor of the velocity dispersion (whether it is based on ACO or machine counts) due to the very broad correlation between the two cluster properties. It appears that the spread in this correlation must be largely intrinsic, i.e. not due to measurement errors. 
As a result, all samples of clusters that are selected to be complete with respect to richness are biased against low-$\sigma_V$ systems. The space density of clusters with $\sigma_{V}>1200$ km/s is less than $0.54\times10^{-7}\,h^3\,{\rm Mpc}^{-3}$. This is in accordance with the limits from the space density of hot X-ray clusters. From the good agreement between $n(>\sigma_V)$ and $n(>T_X)$ we conclude that $\beta = \sigma_V^2 /(kT_X / \mu m_H) \approx 1$ and that X-ray temperature and velocity dispersion are statistically measuring the same cluster property. For the low values of the bias parameter ($b\approx 1.0$) that are implied by the large-scale normalization of the standard $\Omega=1$ CDM scenario for structure formation, this model appears to predict too many clusters with high velocity dispersions. Approximate agreement between observations and the $\Omega=1$ CDM model can be obtained for bias parameters in the range $2\la b \la 3$, in agreement with the earlier conclusions by FWED and White et al. (1993). [We thank Eelco van Kampen for helpful discussions and for allowing us to use his unpublished cluster models. Gavin Dalton is gratefully acknowledged for making available a machine-readable version of his Ph.D. thesis, as well as four alternative cluster catalogues based on the APM galaxy catalogue. We thank Mike West for providing us with his compilation of cluster redshifts. We thank the referee, L. Guzzo, for several useful comments and for pointing out an error in the manuscript, the correction of which has led to an important improvement of the paper. The cooperation between the members of the project was financially supported by the following organizations: INSU, GR Cosmologie, Univ. de Provence, Univ. de Montpellier (France), CNRS-NWO (France and the Netherlands), Leiden Observatory, Leids Kerkhoven-Bosscha Fonds (the Netherlands), Univ. of Bologna, Univ. 
of Trieste (Italy), the Swiss National Science Foundation, the Ministerio de Educacion y Ciencia (Spain), CNRS-CSIC (France and Spain) and by the EC HCM programme.]{} Abell, G.O., 1958, ApJS, 3, 211 Abell, G.O., Corwin, H.G., Olowin, R.P., 1989, ApJS, 70, 1 Bahcall, N.A., Cen, R., 1992, ApJ, 398, L81 Bahcall, N.A., Soneira, R.M., 1983, ApJ, 270, 20 Beers, T.C., Flynn, K., Gebhardt, K., 1990, AJ, 100, 32 Biviano, A., Girardi, M., Giuricin, G., Mardirossian, F., Mezzetti, M., 1992, ApJ, 396, 35 Biviano, A., Girardi, M., Giuricin, G., Mardirossian, F., Mezzetti, M., 1993, ApJ 411, L13 Briel, U., Henry, G.P., 1993, A&A, 278, 379 Carlberg, R.G., 1994, ApJ, 433, 468 Collins, C.A., Guzzo, L., Nichol, R.C., Lumsden, S.L., 1995, MNRAS, 274, 1071 Dalton, G.B., 1992, PhD thesis, Oxford Dalton, G.B., Efstathiou, G., Maddox, S.J., Sutherland, W.J., 1992, ApJ, 390, L1 Dalton, G.B., Efstathiou, G., Maddox, S.J., Sutherland, W.J., 1994, MNRAS, 269, 151 Danese, L., De Zotti, G., Di Tullio, G., 1980, A&A, 82, 322 Edge, A.C., Stewart, G.C., 1991, MNRAS, 252, 414 Edge, A.C., Stewart, G.C., Fabian, A., Arnaud, K.A., 1990, MNRAS, 245, 559 Efstathiou, G., Dalton, G.B., Sutherland, W.J., Maddox, S.J., 1992a, MNRAS, 257, 125 Efstathiou, G., Bond, J.R., White, S.D.M., 1992b, MNRAS, 258, 1p Feldman, H.A., Kaiser, N., Peacock, J.A., 1994, ApJ, 426, 23 Frenk, C.S., White, S.D.M., Efstathiou, G., Davis, M., 1990, ApJ, 351, 10 (FWED) Gerbal, D., Durret, F., Lachièze-Rey, M., 1994, A&A, 288, 746 Girardi, M., Biviano, A., Giuricin, G., Mardirossian, F., Mezzetti, M., 1993, ApJ, 404, 38 den Hartog, R.H., Katgert, P., 1995, submitted to MNRAS Henry, J.P., Arnaud, K.A., 1991, ApJ, 372, 410 Katgert, P., Mazure, A., Perea, J., et al., 1995,TO BE FILLED IN BY EDITOR AANDA Katz, N., Hernquist, L., Weinberg, D.W., 1992, ApJ, 399, L109 Katz, N., Quinn, T., Bertschinger, E., Gelb, J.M., 1994, MNRAS, 270, L71 Lubin, L.M., Bahcall, N.A., 1993, ApJ, 415, L17 Lumsden, S.L., Nichol, R.C., Collins, C.A., Guzzo, 
L., 1992, MNRAS, 258, 1 (LNCG) Metzler, C., Evrard, A., 1994, ApJ, 437, 564 Peacock, J.A., West, M.J., 1992, MNRAS, 259, 494 Pierre, M., Bohringer, H., Ebeling, H., et al., 1994, A&A, 290, 725 Postman, M., Huchra, J.P., Geller, M.J., 1992, ApJ, 384, 404 Quintana, H.R., Ramírez, A., 1995, ApJS, 96, 343 Scaramella, R., Zamorani, G., Vettolani, G., Chincarini, G., 1991, AJ, 101, 342 Seljak, U., Bertschinger, E., 1994, ApJ, 427, 523 Struble, M.F., Rood, H.J., 1991, ApJS, 77, 363 Summers, F.J., 1993, PhD thesis, Berkeley Van Kampen, E., 1994, PhD thesis, Leiden Observatory Van Kampen, E., 1995, MNRAS, 273, 295 Van de Weygaert, R., Babul, A., 1994, ApJ, 425, L59 Walsh, D., Miralda-Escudé, J., 1995, MNRAS, in press White, S.D.M., Efstathiou, G., Frenk, C.S., 1993, MNRAS, 262, 1023 Wright, E.L., Meyer, S.S., Bennett, C.L., Boggess, N.W., et al., 1992, ApJ, 396, L13 Zabludoff, A.I., Geller, M.J., Huchra, J.P., Ramella, M., 1993, AJ, 106, 1301 (ZGHR) [^1]: Tables 1$a$ and 1$b$ are also available in electronic form at the CDS via anonymous ftp 130.79.128.5 [^2]: Based on observations collected at the European Southern Observatory (La Silla, Chile)
--- abstract: 'Interactive theorem proving requires a lot of human guidance. Proving a property involves (1) figuring out why it holds, then (2) coaxing the theorem prover into believing it. Both steps can take a long time. We explain how to use *GL*, a framework for proving finite ACL2 theorems with BDD- or SAT-based reasoning. This approach makes it unnecessary to deeply understand why a property is true, and automates the process of admitting it as a theorem. We use GL at Centaur Technology to verify execution units for x86 integer, MMX, SSE, and floating-point arithmetic.' author: - 'Sol Swords and Jared Davis\' bibliography: - 'paper.bib' title: 'Bit-Blasting ACL2 Theorems' --- Introduction {#sec:introduction} ============ In hardware verification you often want to show that some circuit implements its specification. Many of these problems are in the scope of fully automatic decision procedures like SAT solvers. When these tools can be used, there are good reasons to prefer them over *The Method* [@00-kaufmann-car] of traditional, interactive theorem proving. For instance, these tools can: - Reduce the level of human understanding needed in the initial process of developing the proof; - Provide clear counterexamples, whereas failed ACL2 proofs can often be difficult to debug; and - Ease the maintenance of the proof, since after the design changes they can often find updated proofs without help. *GL* [@10-swords-dissertation] is a framework for proving *finite* ACL2 theorems—those which, at least in principle, could be established by exhaustive testing—by bit-blasting with a Binary Decision Diagram (BDD) package or a SAT solver. These approaches have much higher capacity than exhaustive testing. We are using GL heavily at Centaur Technology [@11-slobodova-framework; @10-hardin-centaur; @09-hunt-fadd]. 
So far, we have used it to verify RTL implementations of floating-point addition, multiplication, and conversion operations, as well as hundreds of bitwise and arithmetic operations on scalar and packed integers. This paper is an introduction to GL and a practical guide for using it to prove ACL2 theorems. For a comprehensive treatment of the implementation of GL, see Swords’ dissertation [@10-swords-dissertation]. Additional details about particular commands can be found in the online documentation with `:doc gl`. GL is the successor of Boyer and Hunt’s [@09-boyer-g] *G* system (Section \[sec:related\]), and its name stands for *G in the Logic*. The G system was written as a raw Lisp extension of the ACL2 kernel, so using it meant trusting this additional code. In contrast, GL is implemented as ACL2 books and its proof procedure is formally verified by ACL2, so the only code we have to trust besides ACL2 is the ACL2(h) extension that provides hash-consing and memoization [@06-boyer-acl2h]. Like the G system, GL can prove theorems about ordinary ACL2 definitions; you are not restricted to some small subset of the language. How does GL work? You can probably imagine writing a bit-based encoding of ACL2 objects. For instance, you might represent an integer with some structure that contains a 2’s-complement list of bits. GL uses an encoding like this, except that Boolean expressions take the place of the bits. We call these structures *symbolic objects* (Section \[sec:symbolic-objects\]). GL provides a way to effectively compute with symbolic objects; e.g., it can “add” two integers whose bits are expressions, producing a new symbolic object that represents their sum. GL can perform similar computations for most ACL2 primitives. Building on this capability, it can *symbolically execute* terms (Section \[sec:symbolic-execution\]). The result of a symbolic execution is a new symbolic object that captures all the possible values the result could take. 
Symbolic execution can be used as a proof procedure (Section \[sec:proving-theorems\]). To prove a theorem, we first symbolically execute its goal formula, then show the resulting symbolic object cannot represent `nil`. GL provides a `def-gl-thm` command that makes it easy to prove theorems with this approach (Section \[sec:def-gl-thm\]). It handles all the details of working with symbolic objects, and only needs to be told how to represent the variables in the formula. Like any automatic procedure, GL has a certain capacity. But when these limits are reached, you may be able to increase its capacity by: - Optimizing its symbolic execution strategy to use more efficient definitions (Section \[sec:optimization\]), - Decomposing difficult problems into easier subgoals using an automatic tool (Section \[sec:def-gl-param-thm\]), or - Using a SAT backend (Section \[sec:aig-mode\]) that outperforms BDDs on some problems. There are also some good tools and techniques for debugging failed proofs (Section \[sec:debugging\]). Example: Counting Bits {#sec:counting-bits} ---------------------- Let’s use GL to prove a theorem. The following C code, from Anderson’s *Bit Twiddling Hacks* [@11-anderson-bit-hacks] page, is a fast way to count how many bits are set in a 32-bit integer. $$\begin{array}{l} \texttt{v = v - ((v >> 1) \& 0x55555555);} \\ \texttt{v = (v \& 0x33333333) + ((v >> 2) \& 0x33333333);} \\ \texttt{c = ((v + (v >> 4) \& 0xF0F0F0F) * 0x1010101) >> 24;} \\ \end{array}$$ We can model this in ACL2 as follows. It turns out that using arbitrary-precision addition and subtraction does not affect the result, but we must take care to use a 32-bit multiply to match the C code. 
$$\begin{array}{l} \texttt{(defun 32* (x y)} \\ \texttt{~~(logand (* x y) (1- (expt 2 32))))} \\ \texttt{} \\ \texttt{(defun fast-logcount-32 (v)} \\ \texttt{~~(let* ((v (- v (logand (ash v -1) \#x55555555)))} \\ \texttt{~~~~~~~~~(v (+ (logand v \#x33333333) (logand (ash v -2) \#x33333333))))} \\ \texttt{~~~~(ash (32* (logand (+ v (ash v -4)) \#xF0F0F0F) \#x1010101) -24)))} \\ \end{array}$$ We can then use GL to prove `fast-logcount-32` computes the same result as ACL2’s built-in `logcount` function for all unsigned 32-bit inputs. $$\begin{array}{l} \texttt{(def-gl-thm fast-logcount-32-correct} \\ \texttt{~~:hyp~~~(unsigned-byte-p 32 x)} \\ \texttt{~~:concl (equal (fast-logcount-32 x)} \\ \texttt{~~~~~~~~~~~~~~~~(logcount x))} \\ \texttt{~~:g-bindings `((x ,(g-int 0 1 33))))} \\ \end{array}$$ The `:g-bindings` form is the only help GL needs from the user. It tells GL how to construct a symbolic object that can represent every value for `x` that satisfies the hypothesis (we explain what it means in later sections). No arithmetic books or lemmas are required—we actually don’t even know why this algorithm works. The proof completes in 0.09 seconds and results in the following ACL2 theorem. $$\begin{array}{l} \texttt{(defthm fast-logcount-32-correct} \\ \texttt{~~(implies (unsigned-byte-p 32 x)} \\ \texttt{~~~~~~~~~~~(equal (fast-logcount-32 x)} \\ \texttt{~~~~~~~~~~~~~~~~~~(logcount x)))} \\ \texttt{~~:hints ((gl-hint ...)))} \\ \end{array}$$ Why not just use exhaustive testing? We wrote a fixnum-optimized exhaustive-testing function that can cover the $2^{32}$ cases in 143 seconds. This is slower than GL but still seems reasonable. On the other hand, exhaustive testing is clearly incapable of scaling to the 64-bit and 128-bit versions of this algorithm, whereas GL completes the proofs in 0.18 and 0.58 seconds, respectively. Like exhaustive testing, GL can generate counterexamples to non-theorems. 
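A direct Python transcription of the algorithm (function names ours) makes the role of the 32-bit multiply easy to experiment with:

```python
MASK32 = 0xFFFFFFFF

def fast_logcount_32(v):
    """Population count of a 32-bit value, following the C code above;
    the multiply is truncated to 32 bits, as in the ACL2 model's 32*."""
    v = v - ((v >> 1) & 0x55555555)
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333)
    return ((((v + (v >> 4)) & 0x0F0F0F0F) * 0x01010101) & MASK32) >> 24

def buggy_logcount(v):
    """Same steps with an arbitrary-precision multiply, mirroring our
    first (incorrect) ACL2 attempt described below."""
    v = v - ((v >> 1) & 0x55555555)
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333)
    return (((v + (v >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24
```

On 32-bit inputs `fast_logcount_32` agrees with the specification `bin(v).count("1")`, while `buggy_logcount` agrees on small inputs like `#b10111` but returns 16843009 instead of 1 on `#x80000000`, matching the counterexamples GL reports.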
At first, we didn’t realize we needed to use a 32-bit multiply in `fast-logcount-32`, and we just used an arbitrary-precision multiply instead. The function still worked for test cases like `0`, `1`, `#b111`, and `#b10111`, but when we tried to prove its correctness, GL showed us three counterexamples, `#x80000000`, `#xFFFFFFFF`, and `#x9448C263`. By default, GL generates a first counterexample by setting bits to 0 wherever possible, a second by setting bits to 1, and a third with random bit settings. Example: UTF-8 Decoding {#sec:utf-8} ----------------------- Davis [@06-davis-input] used exhaustive testing to prove lemmas toward the correctness of UTF-8 processing functions. The most difficult proof carried out this way was a well-formedness and inversion property for four-byte UTF-8 sequences, which involved checking $2^{32}$ cases. Davis’ proof takes 67 seconds on our computer. It involves four testing functions and five lemmas about them; all of this is straightforward but mundane. The testing functions are guard-verified and optimized with `mbe` and type declarations for better performance. We used GL to prove the same property. The proof (included in the supporting materials) completes in 0.17 seconds and requires no testing functions or supporting lemmas. Getting GL ---------- GL is included in ACL2 4.3, and the development version is available from the ACL2 Books repository, <http://acl2-books.googlecode.com/>. Note that using GL requires ACL2(h), which is best supported on 64-bit Clozure Common Lisp. BDD operations can be memory intensive, so we recommend using a computer with at least 8 GB of memory. Instructions for building GL can be found in `centaur/README`, and it can be loaded with $$\texttt{(include-book "centaur/gl/gl" :dir :system)}.$$ GL Basics {#sec:gl-basics} ========= At its heart, GL works by manipulating Boolean expressions. There are many ways to represent Boolean expressions. 
GL currently supports a hons-based BDD package [@06-boyer-acl2h] and also has support for using a hons-based And-Inverter Graph (AIG) representation with an external SAT solver. For any particular proof, the user can choose to work in *BDD mode* (the default) or *AIG mode*. Each representation has strengths and weaknesses, and the choice of representation can significantly impact performance. We give some advice about choosing proof modes in Section \[sec:aig-mode\]. The GL user does not need to know how BDDs and AIGs are represented; in this paper we just adopt a conventional mathematical syntax to describe Boolean expressions, e.g., ${\ensuremath{\mathit{true}}\xspace}$, ${\ensuremath{\mathit{false}}\xspace}$, $A \wedge B$, $\neg C$, etc. Symbolic Objects {#sec:symbolic-objects} ---------------- GL groups Boolean expressions into *symbolic objects*. Much like a Boolean expression can be evaluated to obtain a Boolean value, a symbolic object can be evaluated to produce an ACL2 object. There are several kinds of symbolic objects, but numbers are a good start. GL represents symbolic, signed integers as $$\texttt{(:g-number~$\mathit{lsb\textrm{-}bits}$)},$$ where *lsb-bits* is a list of Boolean expressions that represent the two’s complement bits of the number. The bits are in lsb-first order, and the last, most significant bit is the sign bit. For instance, if $p$ is the following `:g-number`, $$p = \texttt{(:g-number (}{\ensuremath{\mathit{true}}\xspace}{\texttt{ }}{\ensuremath{\mathit{false}}\xspace}{\texttt{ }}A \wedge B {\texttt{ }}{\ensuremath{\mathit{false}}\xspace}\texttt{))},$$ then $p$ represents a 4-bit, signed integer whose value is either 1 or 5, depending on the value of $A \wedge B$. GL uses another kind of symbolic object to represent ACL2 Booleans. In particular, $$\texttt{(:g-boolean~.~$\mathit{val}$)}$$ represents `t` or `nil` depending on the Boolean expression *val*. 
For example, $$\texttt{(:g-boolean~.~$\neg(A \wedge B)$)}$$ is a symbolic object whose value is `t` when $p$ has value 1, and `nil` when $p$ has value 5. GL has a few other kinds of symbolic objects that are also tagged with keywords, such as `:g-var` and `:g-apply`. But an ACL2 object that does not have any of these special keywords within it is *also* considered to be a symbolic object, and just represents itself. Furthermore, a cons of two symbolic objects represents the cons of the two objects they represent. For instance, $$\texttt{(1~.~(:g-boolean~.~$A \wedge B$))}$$ represents either `(1 . t)` or `(1 . nil)`. Together, these conventions allow GL to avoid lots of tagging as symbolic objects are manipulated. One last kind of symbolic object we will mention represents an if-then-else among other symbolic objects. Its syntax is $$\texttt{(:g-ite~${\ensuremath{\mathit{test}}}$~${\ensuremath{\mathit{then}}}$~.~${\ensuremath{\mathit{else}}}$)},$$ where ${\ensuremath{\mathit{test}}}$, ${\ensuremath{\mathit{then}}}$, and ${\ensuremath{\mathit{else}}}$ are themselves symbolic objects. The value of a `:g-ite` is either the value of ${\ensuremath{\mathit{then}}}$ or of ${\ensuremath{\mathit{else}}}$, depending on the value of ${\ensuremath{\mathit{test}}}$. For example, $$\begin{array}{l} \texttt{(:g-ite~(:g-boolean~.~$A$)} \\ \texttt{~~~~~~~~(:g-number~($B$~$A$~{\ensuremath{\mathit{false}}\xspace}))} \\ \texttt{~~~~~~~~.~\#\textbackslash{}C)} \end{array}$$ represents either 2, 3, or the character `C`. GL doesn’t have a special symbolic object format for ACL2 objects other than numbers and Booleans. But it is still possible to create symbolic objects that take any finite range of values among ACL2 objects, by using a nesting of `:g-ite`s where the tests are `:g-boolean`s. Computing with Symbolic Objects {#sec:symbolic-execution} ------------------------------- Once we have a representation for symbolic objects, we can perform symbolic executions on those objects. 
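As a concrete reading of these representations, here is a small Python sketch (our own ad-hoc encoding, not GL's actual data structures) that evaluates the lsb-first two's-complement bit list of a `:g-number` once each Boolean expression has been given a value:

```python
def eval_g_number(bits):
    """bits: lsb-first list of already-evaluated Boolean expressions.
    The last (most significant) bit is the two's-complement sign bit."""
    value = sum(int(b) << i for i, b in enumerate(bits[:-1]))
    if bits[-1]:                       # sign bit set: subtract 2^(n-1)
        value -= 1 << (len(bits) - 1)
    return value

# p = (:g-number (true false (A and B) false)) from the text:
for a_and_b in (False, True):
    assert eval_g_number([True, False, a_and_b, False]) in (1, 5)
```

Depending on the value assigned to $A \wedge B$, the object $p$ evaluates to 1 or 5, as claimed above.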
For instance, recall the symbolic number $p$ which can have value 1 or 5, $$p = \texttt{(:g-number (}{\ensuremath{\mathit{true}}\xspace}{\texttt{ }}{\ensuremath{\mathit{false}}\xspace}{\texttt{ }}A \wedge B {\texttt{ }}{\ensuremath{\mathit{false}}\xspace}\texttt{))}.$$ We might symbolically add 1 to $p$ to obtain a new symbolic number, say $q$, $$q = \texttt{(:g-number (}{\ensuremath{\mathit{false}}\xspace}{\texttt{ }}{\ensuremath{\mathit{true}}\xspace}{\texttt{ }}A \wedge B {\texttt{ }}{\ensuremath{\mathit{false}}\xspace}\texttt{))},$$ which represents either 2 or 6. Suppose $r$ is another symbolic number, $$r = \texttt{(:g-number (}A {\texttt{ }}{\ensuremath{\mathit{false}}\xspace}{\texttt{ }}{\ensuremath{\mathit{true}}\xspace}{\texttt{ }}{\ensuremath{\mathit{false}}\xspace}\texttt{))},$$ which represents either 4 or 5. We might add $q$ and $r$ to obtain $s$, $$s = \texttt{(:g-number (}A {\texttt{ }}{\ensuremath{\mathit{true}}\xspace}{\texttt{ }}\neg(A \wedge B) {\texttt{ }}A \wedge B {\texttt{ }}{\ensuremath{\mathit{false}}\xspace}\texttt{))},$$ whose value can be 6, 7, or 11. Why can’t $s$ be 10 if $q$ can be 6 and $r$ can be 4? This combination isn’t possible because $q$ and $r$ involve the same expression, $A$. The only way for $r$ to be 4 is for $A$ to be false, but then $q$ must be 2. The underlying algorithm GL uses for symbolic additions is just a ripple-carry addition on the Boolean expressions making up the bits of the two numbers. Performing a symbolic addition, then, means constructing new BDDs or AIGs, depending on which mode is being used. GL has built-in support for symbolically executing most ACL2 primitives. Generally, this is done by cases on the types of the symbolic objects being passed in as arguments. For instance, if we want to symbolically execute `consp` on $s$, then we are asking whether a `:g-number` may ever represent a cons, so the answer is simply `nil`. 
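The ripple-carry step can be sketched with a toy expression representation (functions from an environment of Boolean variables to Booleans; entirely ours, for illustration). Evaluating the symbolic sum of the unsigned parts of $q$ and $r$ under every assignment of $A$ and $B$ reproduces the values 6, 7, and 11 claimed above:

```python
# A symbolic "bit" is a function from an environment (a dict assigning
# Boolean variables) to a Boolean value.
const = lambda b: (lambda env: b)
var   = lambda name: (lambda env: env[name])
AND   = lambda x, y: (lambda env: x(env) and y(env))
OR    = lambda x, y: (lambda env: x(env) or y(env))
XOR   = lambda x, y: (lambda env: x(env) != y(env))

def ripple_add(xs, ys):
    """Symbolic ripple-carry sum of two equal-length lsb-first bit lists,
    treated as unsigned; returns one extra bit for the final carry."""
    out, carry = [], const(False)
    for x, y in zip(xs, ys):
        out.append(XOR(XOR(x, y), carry))                 # sum bit
        carry = OR(AND(x, y), AND(carry, XOR(x, y)))      # carry out
    out.append(carry)
    return out

def eval_bits(bits, env):
    """Value of an lsb-first unsigned bit list under an assignment."""
    return sum(int(bit(env)) << i for i, bit in enumerate(bits))

A, B = var("A"), var("B")
q = [const(False), const(True), AND(A, B), const(False)]   # 2 or 6
r = [A, const(False), const(True), const(False)]           # 4 or 5
s = ripple_add(q, r)
```

For every assignment, `eval_bits(s, env)` equals `eval_bits(q, env) + eval_bits(r, env)` and takes only the values 6, 7, and 11; the value 10 never occurs because $q$ and $r$ share the variable $A$.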
Similarly, if we ever try to add a `:g-boolean` to a `:g-number`, by the ACL2 axioms the `:g-boolean` is simply treated as 0. Beyond these primitives, GL provides what is essentially a McCarthy-style interpreter [@60-mccarthy-recursive] for symbolically executing terms. By default, it expands function definitions until it reaches primitives, with some special handling for `if`. For better performance, its interpretation scheme can be customized with more efficient definitions and other optimizations, as described in Section \[sec:optimization\]. Proving Theorems by Symbolic Execution {#sec:proving-theorems} -------------------------------------- To see how symbolic execution can be used to prove theorems, let’s return to the bit-counting example, where our goal was to prove $$\begin{array}{l} \texttt{(implies (unsigned-byte-p 32 x)} \\ \texttt{~~~~~~~~~(equal (fast-logcount-32 x)} \\ \texttt{~~~~~~~~~~~~~~~~(logcount x)))}. \\ \end{array}$$ The basic idea is to first symbolically execute the above formula, and then check whether it can ever evaluate to `nil`. But to do this symbolic execution, we need some symbolic object to represent `x`. We want our symbolic execution to cover all the cases necessary for proving the theorem, namely all `x` for which the hypothesis `(unsigned-byte-p 32 x)` holds. In other words, the symbolic object we choose needs to be able to represent any integer from 0 to $2^{32}-1$. Many symbolic objects cover this range. As notation, let $b_0,b_1,\dots$ represent independent Boolean variables in our Boolean expression representation. Then, one suitable object is: $$\texttt{(:g-number ($b_0$~$b_1$~$\dots$~$b_{31}$~$b_{32}$))}.$$ Why does this have 33 variables? The final bit, $b_{32}$, represents the sign, so this object covers the integers from $-2^{32}$ to $2^{32}-1$. We could instead use a 34-bit integer, or a 35-bit integer, or some esoteric creation involving `:g-ite` forms. 
But perhaps the best object to use would be: $${\ensuremath{x_{\mathit{best}}}\xspace}= \texttt{(:g-number ($b_0$~$b_1$~$\dots$~$b_{31}$~${\ensuremath{\mathit{false}}\xspace}$))},$$ since it covers exactly the desired range using the simplest possible Boolean expressions. Suppose we choose [$x_{\mathit{best}}$]{} to stand for `x`. We can now symbolically execute the goal formula on that object. What does this involve? First, `(unsigned-byte-p 32 x)` produces the symbolic result `t`, since it is always true of the possible values of [$x_{\mathit{best}}$]{}. It would have been equally valid for this to produce `(:g-boolean . {\ensuremath{\mathit{true}}\xspace})`, but GL prefers to produce constants when possible. Next, the `(fast-logcount-32 x)` and `(logcount x)` forms each yield `:g-number` objects whose bits are Boolean expressions in the variables $b_0, \dots, b_{31}$. For example, the least significant bit will be an expression representing the XOR of all these variables. Finally, we symbolically execute `equal` on these two results. This compares the Boolean expressions for their bits to determine if they are equivalent, and produces a symbolic object representing the answer. So far we have basically ignored the differences between using BDDs and AIGs as our Boolean expression representation. But here, the two approaches produce very different answers: - Since BDDs are canonical, the expressions for the bits of the two numbers are syntactically equal, and the result from `equal` is simply `t`. - With AIGs, the expressions for the bits are semantically equivalent but not syntactically equal. The result is therefore `(:g-boolean . \phi)`, where $\phi$ is a large Boolean expression in the variables $b_0, \dots, b_{31}$. The fact that $\phi$ always evaluates to [$\mathit{true}$]{} is not obvious just from its syntax. At this point we have completed the symbolic execution of our goal formula, obtaining either `t` in BDD mode, or this `:g-boolean` object in AIG mode. 
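The remark that the least significant bit of the count is the XOR of all the input variables is just the statement that `logcount` mod 2 is the parity function; a quick concrete sanity check (our sketch, not part of GL):

```python
def parity(x):
    """XOR of all the bits of a nonnegative integer."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

# popcount(x) mod 2 equals the XOR of the bits of x:
assert all(parity(x) == bin(x).count("1") & 1 for x in range(1024))
```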
Recall that to prove theorems using symbolic execution, the idea is to symbolically execute the goal formula and then check whether its symbolic result can represent `nil`. If we are using BDDs, it is obvious that `t` cannot represent `nil`. With AIGs, we simply ask a SAT solver whether $\phi$ can evaluate to [$\mathit{false}$]{}, and find that it cannot. This completes the proof. GL automates this proof strategy, taking care of many of the details relating to creating symbolic objects, ensuring that they cover all the possible cases, and ensuring that `nil` cannot be represented by the symbolic result. When GL is asked to prove a non-theorem, it can generate counterexamples by finding assignments to the Boolean variables that cause the result to become `nil`. Using DEF-GL-THM {#sec:def-gl-thm} ================ The `def-gl-thm` command is the main interface for using GL to prove theorems. Here is the command we used in the bit-counting example. $$\begin{array}{l} \texttt{(def-gl-thm fast-logcount-32-correct} \\ \texttt{~~:hyp~~~(unsigned-byte-p 32 x)} \\ \texttt{~~:concl (equal (fast-logcount-32 x)} \\ \texttt{~~~~~~~~~~~~~~~~(logcount x))} \\ \texttt{~~:g-bindings `((x ,(g-int 0 1 33))))} \\ \end{array}$$ Unlike an ordinary `defthm` command, `def-gl-thm` takes separate hypothesis and conclusion terms (its `:hyp` and `:concl` arguments). This separation allows GL to use the hypothesis to limit the scope of the symbolic execution it will perform. The user must also provide GL with `:g-bindings` that describe the symbolic objects to use for each free variable in the theorem (Section \[sec:writing-g-bindings\]). What are these bindings? In the `fast-logcount-32-correct` theorem, we used a convenient function, `g-int`, to construct the `:g-bindings`. 
Expanding this away, here are the actual bindings: $$\texttt{((x (:g-number (0 1 2 $\dots$ 32))))}.$$ The `:g-bindings` argument uses a slight modification of the symbolic object format where the Boolean expressions are replaced by distinct natural numbers, each representing a Boolean variable. In this case, our binding for `x` stands for the following symbolic object: $${\ensuremath{x_{\mathit{init}}}\xspace}= \texttt{(:g-number ($b_0$~$b_1$~$\dots$~$b_{31}$~$b_{32}$))}.$$ Note that [$x_{\mathit{init}}$]{} is not the same object as [$x_{\mathit{best}}$]{} from Section \[sec:proving-theorems\]—its sign bit is $b_{32}$ instead of [$\mathit{false}$]{}, so [$x_{\mathit{init}}$]{} can represent any 33-bit signed integer whereas [$x_{\mathit{best}}$]{} only represents 32-bit unsigned values. In fact, the `:g-bindings` syntax does not even allow us to describe objects like [$x_{\mathit{best}}$]{}, which has the constant [$\mathit{false}$]{} instead of a variable as one of its bits. There is a good reason for this restriction. One of the steps in our proof strategy is to prove *coverage*: we need to show the symbolic objects we are starting out with have a sufficient range of values to cover all cases for which the hypothesis holds (Section \[sec:proving-coverage\]). The restricted syntax permitted by `:g-bindings` ensures that the range of values represented by each symbolic object is easy to determine. Because of this, coverage proofs are usually automatic. Despite these restrictions, GL will still end up using [$x_{\mathit{best}}$]{} to carry out the symbolic execution. GL optimizes the original symbolic objects inferred from the `:g-bindings` by using the hypothesis to reduce the space of objects that are represented. In BDD mode this optimization uses *BDD parametrization* [@99-aagaard-param], which restricts the symbolic objects so they cover exactly the inputs recognized by the hypothesis.
In AIG mode we use a lighter-weight transformation that replaces variables with constants when the hypothesis sufficiently restricts them. In this example, either optimization transforms [$x_{\mathit{init}}$]{} into [$x_{\mathit{best}}$]{}. Writing G-Bindings Forms {#sec:writing-g-bindings} ------------------------ In a typical `def-gl-thm` command, the `:g-bindings` should have an entry for every free variable in the theorem. Here is an example that shows some typical bindings. $$\begin{array}{l} \texttt{:g-bindings~'((flag~~~(:g-boolean~.~0))} \\ \texttt{~~~~~~~~~~~~~~(a-bus~~(:g-number~(1~3~5~7~9)))} \\ \texttt{~~~~~~~~~~~~~~(b-bus~~(:g-number~(2~4~6~8~10)))} \\ \texttt{~~~~~~~~~~~~~~(mode~~~(:g-ite~(:g-boolean~.~11)~exact~.~fast))} \\ \texttt{~~~~~~~~~~~~~~(opcode~\#b0010100))} \\ \end{array}$$ These bindings allow `flag` to take an arbitrary Boolean value, `a-bus` and `b-bus` any five-bit signed integer values, `mode` either the symbol `exact` or `fast`, and `opcode` only the value 20.[^1] Within `:g-boolean` and `:g-number` forms, natural number indices take the places of Boolean expressions. The indices used throughout all of the bindings must be distinct, and represent free, independent Boolean variables. In BDD mode these indices have additional meaning: they specify the BDD variable ordering, with smaller indices coming first in the order. This ordering can greatly affect performance. In AIG mode the choice of indices has no particular bearing on efficiency. How do you choose a good BDD ordering? It is often good to interleave the bits of data buses that are going to be combined in some way. It is also typically a good idea to put any important control signals such as opcodes and mode settings before the data buses. Often the same `:g-bindings` can be used throughout several theorems, either verbatim or with only small changes. In practice, we almost always generate the `:g-bindings` forms by calling functions or macros.
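The index bookkeeping behind such bindings is simple. Here is a hypothetical Python analogue of an index generator (GL's actual helper for this, `g-int`, is described next):

```python
# Hypothetical sketch: generate n Boolean-variable indices starting at
# `start` and stepping by `by`, as used for interleaved data-bus bindings.
def g_int(start, by, n):
    return [start + by * i for i in range(n)]

# Interleave the bits of the two five-bit buses from the example above:
assert g_int(1, 2, 5) == [1, 3, 5, 7, 9]     # a-bus
assert g_int(2, 2, 5) == [2, 4, 6, 8, 10]    # b-bus
```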
One convenient function is $$\texttt{(g-int start by n)},$$ which generates a `:g-number` form with `n` bits, using indices that start at `start` and increment by `by`. This is particularly useful for interleaving the bits of numbers, as we did for the `a-bus` and `b-bus` bindings above: $$\begin{array}{l} \texttt{(g-int 1 2 5)} \rightarrow \texttt{(:g-number (1 3 5 7 9))} \\ \texttt{(g-int 2 2 5)} \rightarrow \texttt{(:g-number (2 4 6 8 10))}. \end{array}$$ Proving Coverage {#sec:proving-coverage} ---------------- There are really two parts to any GL theorem. First, we need to symbolically execute the goal formula and ensure it cannot evaluate to `nil`. But in addition to this, we must ensure that the objects we use to represent the variables of the theorem cover all the cases that satisfy the hypothesis. This part of the proof is called the *coverage obligation*. For `fast-logcount-32-correct`, the coverage obligation is to show that our binding for `x` is able to represent every integer from 0 to $2^{32}-1$. This is true of [$x_{\mathit{init}}$]{}, and the coverage proof goes through automatically. But suppose we forget that `:g-number`s use a signed representation, and attempt to prove `fast-logcount-32-correct` using the following (incorrect) g-bindings. $$\texttt{:g-bindings `((x ,(g-int 0 1 32)))}$$ This looks like a 32-bit integer, but because of the sign bit it does not cover the intended unsigned range. If we submit the `def-gl-thm` command with these bindings, the symbolic execution part of the proof is still successful. But this execution has only really shown the goal holds for 31-bit unsigned integers, so `def-gl-thm` prints the message $$\texttt{ERROR: Coverage proof appears to have failed.}$$ and leaves us with a failed subgoal, $$\begin{array}{l} \texttt{(implies (and (integerp x)} \\ \texttt{~~~~~~~~~~~~~~(<= 0 x)} \\ \texttt{~~~~~~~~~~~~~~(< x 4294967296))} \\ \texttt{~~~~~~~~~(< x 2147483648))}. 
\\ \end{array}$$ This goal is clearly not provable: we are trying to show `x` must be less than $2^{31}$ (from our `:g-bindings`) whenever it is less than $2^{32}$ (from the hypothesis). Usually when the `:g-bindings` are correct, the coverage proof will be automatic, so if you see that a coverage proof has failed, the first thing to do is check whether your bindings are really sufficient. On the other hand, proving coverage is undecidable in principle, so sometimes GL will fail to prove coverage even though the bindings are appropriate. For these cases, there are some keyword arguments to `def-gl-thm` that may help coverage proofs succeed. First, as a practical matter, GL does the symbolic execution part of the proof *before* trying to prove coverage. This can get in the way of debugging coverage proofs when the symbolic execution takes a long time. You can use `:test-side-goals t` to have GL skip the symbolic execution and go straight to the coverage proof. Of course, no `defthm` is produced when this option is used. By default, our coverage proof strategy uses a restricted set of rules and ignores the current theory. It heuristically expands functions in the hypothesis and throws away terms that seem irrelevant. When this strategy fails, it is usually for one of two reasons.

1. The heuristics expand too many terms and overwhelm ACL2. GL tries to avoid this by throwing away irrelevant terms, but sometimes this approach is insufficient. It may be helpful to disable the expansion of functions that are not important for proving coverage. The `:do-not-expand` argument allows you to list functions that should not be expanded.

2. The heuristics throw away a necessary hypothesis, leading to unprovable goals. GL’s coverage proof strategy tries to show that the binding for each variable is sufficient, one variable at a time. During this process it throws away hypotheses that do not mention the variable, but in some cases this can be inappropriate.
For instance, suppose the following is a coverage goal for `b`: $$\begin{array}{l} \texttt{(implies (and (natp a)} \\ \texttt{~~~~~~~~~~~~~~(natp b)} \\ \texttt{~~~~~~~~~~~~~~(< a (expt 2 15))} \\ \texttt{~~~~~~~~~~~~~~(< b a))} \\ \texttt{~~~~~~~~~(< b (expt 2 15)))}. \end{array}$$ Here, throwing away the terms that don’t mention `b` will cause the proof to fail. A good way to avoid this problem is to separate type and size hypotheses from more complicated assumptions that are not important for proving coverage, along these lines: $$\begin{array}{l} \texttt{(def-gl-thm~my-theorem} \\ \texttt{~~:hyp~(and~(type-assms-1~x)} \\ \texttt{~~~~~~~~~~~~(type-assms-2~y)} \\ \texttt{~~~~~~~~~~~~(type-assms-3~z)} \\ \texttt{~~~~~~~~~~~~(complicated-non-type-assms~x~y~z))} \\ \texttt{~~:concl~...} \\ \texttt{~~:g-bindings~...} \\ \texttt{~~:do-not-expand~'(complicated-non-type-assms))}. \end{array}$$ For more control, you can also use the `:cov-theory-add` argument to enable additional rules during the coverage proof, e.g., `:cov-theory-add '(type-rule1 type-rule2)`. Optimizing Symbolic Execution {#sec:optimization} ============================= The scope of theorems GL can handle is directly impacted by its symbolic execution performance. It is actually quite easy to customize the way certain terms are interpreted, and this can sometimes provide important speedups. GL’s symbolic interpreter operates much like a basic Lisp interpreter. To symbolically interpret a function call, GL first eagerly interprets its arguments to obtain symbolic objects for the actuals. Then GL symbolically executes the function in one of three ways:

- As a special case, if the actuals evaluate to concrete objects, then GL may be able to stop symbolically executing and just call the actual ACL2 function on these arguments (Section \[sec:concrete-execution\]).
- For primitive ACL2 functions like `+`, `consp`, `equal`, and for some defined functions like `logand` and `ash` where performance is important, GL uses hand-written functions called *symbolic counterparts* that can operate on symbolic objects. The advanced GL user can write new symbolic counterparts (Section \[sec:custom-symbolic-counterparts\]) to speed up symbolic execution. - Otherwise, GL looks up the definition of the function, and recursively interprets its body in a new environment binding the formals to the symbolic actuals. The way a function is written can impact its symbolic execution performance (Section \[sec:redundant-recursion\]). It is easy to instruct GL to use more efficient definitions for particular functions (Section \[sec:preferred-definitions\]). GL symbolically executes functions strictly according to the ACL2 logic and does not consider guards. An important consequence is that when `mbe` is used, GL’s interpreter follows the `:logic` definition instead of the `:exec` definition, since it might be unsound to use the `:exec` version of a definition without establishing the guard is met. Also, while GL can symbolically simulate functions that take user-defined stobjs or even the ACL2 `state`, it does not operate on “real” stobjs; instead, it uses the logical definitions of the relevant stobj operations, which do not provide the performance benefits of destructive operations. Non-executable functions cannot be symbolically executed. Avoiding Redundant Recursion {#sec:redundant-recursion} ---------------------------- Here are two ways to write a list-filtering function. 
$$\begin{array}{l} \texttt{(defun~filter1~(x)} \\ \texttt{~~(cond~((atom~x)} \\ \texttt{~~~~~~~~~nil)} \\ \texttt{~~~~~~~~((element-okp~(car~x))~~~~~~~~~~~~~~~;;~keep~it} \\ \texttt{~~~~~~~~~(cons~(car~x)~(filter1~(cdr~x))))} \\ \texttt{~~~~~~~~(t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~;;~skip~it} \\ \texttt{~~~~~~~~~(filter1~(cdr~x)))))} \\ \end{array}$$ This definition can be inefficient for symbolic execution. Suppose we are symbolically executing `filter1`, and the `element-okp` check has produced a symbolic object that can take both `nil` and non-`nil` values. Then, we proceed by symbolically executing both the keep- and skip-branches, and construct a `:g-ite` form for the result. Since we have to evaluate the recursive call twice, this execution becomes exponential in the length of `x`. We can avoid this blow-up by consolidating the recursive calls, as follows. $$\begin{array}{l} \texttt{(defun~filter2~(x)} \\ \texttt{~~(if~(atom~x)} \\ \texttt{~~~~~~nil} \\ \texttt{~~~~(let~((rest~(filter2~(cdr~x))))} \\ \texttt{~~~~~~(if~(element-okp~(car~x))} \\ \texttt{~~~~~~~~~~(cons~(car~x)~rest)} \\ \texttt{~~~~~~~~rest))))} \\ \end{array}$$ This is not a novel observation; Reeber [@07-reeber-dissertation] suggests the same sort of optimization for unrolling recursive functions in SULFA. Of course, `filter1` is probably slightly better for concrete execution since it has a tail call in at least some cases. If we do not want to change the definition of `filter1`, we can simply tell GL to use the `filter2` definition instead, as described in the next section. We currently do not try to automatically apply this kind of optimization, though we may explore this in future work. 
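The cost difference can be seen by counting interpreter steps. Here is a back-of-the-envelope Python model of the two recurrences (an illustration, not GL's interpreter), where a symbolic `element-okp` test forces both branches of each `if` to be explored:

```python
# filter1 recurses separately in the keep- and skip-branches, so a
# symbolic test doubles the work at every element:
def calls_filter1(n):
    return 1 if n == 0 else 1 + 2 * calls_filter1(n - 1)

# filter2 makes one shared recursive call before branching:
def calls_filter2(n):
    return 1 if n == 0 else 1 + calls_filter2(n - 1)

assert calls_filter1(10) == 2**11 - 1   # exponential in the list length
assert calls_filter2(10) == 11          # linear
```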
Preferred Definitions {#sec:preferred-definitions} --------------------- To instruct GL to symbolically execute `filter2` in place of `filter1`, we can do the following: $$\begin{array}{l} \texttt{(defthm~filter1-for-gl} \\ \texttt{~~(equal~(filter1~x)~(filter2~x))} \\ \texttt{~~:rule-classes~nil)} \\ \texttt{} \\ \texttt{(gl::set-preferred-def~filter1~filter1-for-gl)} \\ \end{array}$$ The `gl::set-preferred-def` form extends a table that GL consults when expanding a function’s definition. Each entry in the table pairs a function name with the name of a theorem. The theorem must state that a call of the function is unconditionally equal to some other term. When GL encounters a call of a function in this table, it replaces the call with the right-hand side of the theorem, which is justified by the theorem. So after the above event, GL will replace calls of `filter1` with `filter2`. As another example of a preferred definition, GL automatically optimizes the definition of `evenp`, which ACL2 defines as follows: $$\texttt{(evenp x)} = \texttt{(integerp (* x (/ 2)))}.$$ This definition is basically unworkable since GL provides little support for rational numbers. However, GL has an efficient, built-in implementation of `logbitp`. So to permit the efficient execution of `evenp`, GL proves the following identity and uses it as `evenp`’s preferred definition. $$\begin{array}{l} \texttt{(defthm~evenp-is-logbitp} \\ \texttt{~~(equal~(evenp~x)} \\ \texttt{~~~~~~~~~(or~(not~(acl2-numberp~x))} \\ \texttt{~~~~~~~~~~~~~(and~(integerp~x)} \\ \texttt{~~~~~~~~~~~~~~~~~~(equal~(logbitp~0~x)~nil)))))} \\ \end{array}$$ Executability on Concrete Terms {#sec:concrete-execution} ------------------------------- Suppose GL is symbolically executing a function call. 
If the arguments to the function are all concrete objects (i.e., symbolic objects that represent a single value), then in some cases the interpreter can stop symbolically executing and just run the ACL2 function on these arguments. In some cases, this can provide a critical performance boost. To actually call these functions, GL essentially uses a case statement along the following lines. $$\begin{array}{l} \texttt{(case~fn} \\ \texttt{~~(cons~~~~~(cons~(first~args)~(second~args)))} \\ \texttt{~~(reverse~~(reverse~(first~args)))} \\ \texttt{~~(member~~~(member~(first~args)~(second~args)))} \\ \texttt{~~...)} \\ \end{array}$$ Such a case statement is naturally limited to calling a fixed set of functions. To allow GL to concretely execute additional functions, you can use `def-gl-clause-processor`, a special macro that defines a new version of the GL symbolic interpreter and clause processor. GL automatically uses the most recently defined interpreter and clause processor. For instance, here is the syntax for extending GL so that it can execute `md5sum` and `sets::mergesort`: $$\texttt{(def-gl-clause-processor my-cp '(md5sum sets::mergesort))}.$$ Full-Custom Symbolic Counterparts {#sec:custom-symbolic-counterparts} --------------------------------- The advanced GL user can write custom symbolic counterparts to get better performance. This is somewhat involved. Generally, such a function operates by cases on what kinds of symbolic objects it has been given. Most of these cases are easy; for instance, the symbolic counterpart for `consp` just returns `nil` when given a `:g-boolean` or `:g-number`. But in other cases the operation can require combining the Boolean expressions making up the arguments in some way, e.g., the symbolic counterpart for `binary-*` implements a simple binary multiplier. Once the counterpart has been defined, it must be proven sound with respect to the semantics of ACL2 and the symbolic object format. 
This is an ordinary ACL2 proof effort that requires some understanding of GL’s implementation. The most sophisticated symbolic counterpart we have written is an AIG to BDD conversion algorithm [@10-swords-bddify]. This function serves as a symbolic counterpart for AIG evaluation, and at Centaur it is the basis for the “implementation side” of our hardware correctness theorems. This algorithm and its correctness proof are publicly available; see `centaur/aig/g-aig-eval`. Case-Splitting {#sec:def-gl-param-thm} ============== BDD performance can sometimes be improved by breaking a problem into subcases. The standard example is floating-point addition [@98-chen-adders; @99-aagaard-param], which benefits from separating the problem into cases based on the difference between the two inputs’ exponents. For each exponent difference, the two mantissas are aligned differently before being added together, so a different BDD order is necessary to interleave their bits at the right offset. Without case splitting, a single BDD ordering has to be used for the whole problem; no matter what ordering we choose, the mantissas will be poorly interleaved for some exponent differences, causing severe performance problems. Separating the cases allows the appropriate order to be used for each difference. GL provides a `def-gl-param-thm` command that supports this technique. This command splits the goal formula into several subgoals and attempts to prove each of them using the `def-gl-thm` approach, so for each subgoal there is a symbolic execution step and coverage proof. To show the subgoals suffice to prove the goal formula, it also does another `def-gl-thm`-style proof that establishes that any inputs satisfying the hypothesis are covered by some case. Here is how we might split the proof for `fast-logcount-32` into five subgoals. One goal handles the case where the most significant bit is 1. 
The other four goals assume the most significant bit is 0, and separately handle the cases where the lower two bits are 0, 1, 2, or 3. Each case has a different symbolic binding for `x`, giving the BDD variable order. Of course, splitting into cases and varying the BDD ordering is unnecessary for this theorem, but it illustrates how the `def-gl-param-thm` command works. $$\begin{array}{l} \texttt{(def-gl-param-thm~fast-logcount-32-correct-alt} \\ \texttt{~:hyp~(unsigned-byte-p~32~x)} \\ \texttt{~:concl~(equal~(fast-logcount-32~x)} \\ \texttt{~~~~~~~~~~~~~~~(logcount~x))} \\ \texttt{~:param-bindings} \\ \texttt{~`((((msb~1)~(low~nil))~((x~,(g-int~32~-1~33))))} \\ \texttt{~~~(((msb~0)~(low~0))~~~((x~,(g-int~~0~~1~33))))} \\ \texttt{~~~(((msb~0)~(low~1))~~~((x~,(g-int~~5~~1~33))))} \\ \texttt{~~~(((msb~0)~(low~2))~~~((x~,(g-int~~0~~2~33))))} \\ \texttt{~~~(((msb~0)~(low~3))~~~((x~,(g-int~~3~~1~33)))))} \\ \texttt{~:param-hyp~(and~(equal~msb~(ash~x~-31))} \\ \texttt{~~~~~~~~~~~~~~~~~(or~(equal~msb~1)} \\ \texttt{~~~~~~~~~~~~~~~~~~~~~(equal~(logand~x~3)~low)))} \\ \texttt{~:cov-bindings~`((x~,(g-int~0~1~33))))} \\ \end{array}$$ We specify the five subgoals to consider using two new variables, `msb` and `low`. Here, `msb` will determine the most significant bit of `x`; `low` will determine the two least significant bits of `x`, but only when `msb` is 0. The `:param-bindings` argument describes the five subgoals by assigning different values to `msb` and `low`. It also gives the `g-bindings` to use in each case. We use different bindings for `x` for each subgoal to show how it is done. The `:param-hyp` argument describes the relationship between `msb`, `low`, and `x` that will be assumed in each subgoal. In the symbolic execution performed for each subgoal, the `:param-hyp` is used to reduce the space of objects represented by the symbolic binding for `x`. 
For example, in the subgoal where $\texttt{msb} = 1$, this process will assign [$\mathit{true}$]{} to $\texttt{x}[31]$. The `:param-hyp` will also be assumed to hold for the coverage proof for each case. How do we know the case-split is complete? One final proof is needed to show that whenever the hypothesis holds for some `x`, then at least one of the settings of `msb` and `low` satisfies the `:param-hyp` for this `x`. That is: $$\begin{array}{l} \texttt{(implies~(unsigned-byte-p~32~x)} \\ \texttt{~~~~~~~~~(or~(let~((msb~1)~(low~nil))} \\ \texttt{~~~~~~~~~~~~~~~(and~(equal~msb~(ash~x~-31))} \\ \texttt{~~~~~~~~~~~~~~~~~~~~(or~(equal~msb~1)} \\ \texttt{~~~~~~~~~~~~~~~~~~~~~~~~(equal~(logand~x~3)~low))))} \\ \texttt{~~~~~~~~~~~~~(let~((msb~0)~(low~0))~...)} \\ \texttt{~~~~~~~~~~~~~(let~((msb~0)~(low~1))~...)} \\ \texttt{~~~~~~~~~~~~~(let~((msb~0)~(low~2))~...)} \\ \texttt{~~~~~~~~~~~~~(let~((msb~0)~(low~3))~...)))} \\ \end{array}$$ This proof is also done in the `def-gl-thm` style, so we need one last set of symbolic bindings, which is provided by the `:cov-bindings` argument.
GL can use this proof-checking capability to avoid trusting the SAT solver. This approach is not novel: Weber and Amjad [@09-weber-sat] have developed an LCF-style integration of SAT in several HOL theorem provers, and Darbari et al. [@10-darbari-sat] have a reflectively verified SAT certificate checker in Coq. Recording and checking resolution proofs imposes significant overhead, but is still practical in many cases. We measured this overhead on a collection of AIG-mode GL theorems about Centaur’s MMX/SSE module. These theorems take 10 minutes without proof recording. With proof-recording enabled, our SAT solver uses a less-efficient CNF generation algorithm and SAT solving grows to 25 minutes; an additional 6 minutes are needed to check the recorded proofs. The SAT solver we have been using, an integration of MiniSAT with an AIG package, is not yet released, so AIG mode is not usable “out of the box.” As future work, we would like to make it easier to plug in other SAT solvers. Versions of MiniSAT, PicoSAT, and ZChaff can also produce resolution proofs, so this is mainly an interfacing issue. A convenient feature of AIGs is that you do not have to come up with a good variable ordering. This is especially beneficial if it avoids the need to case-split. On the other hand, BDDs provide especially nice counterexamples, whereas SAT produces just one, essentially random counterexample. Performance-wise, AIGs are better for some problems and BDDs for others. Many operations combine bits from data buses in a regular, orderly way; in these cases, there is often a good BDD ordering and BDDs may be faster than SAT. But when the operations are less regular, when no good BDD ordering is apparent, or when case-splitting seems necessary to get good BDD performance, SAT may do better. For many of our proofs, SAT works well enough that we haven’t tried to find a good BDD ordering. Debugging Failures {#sec:debugging} ================== A GL proof attempt can fail in several ways.
In the “best” case, the conjecture is disproved and GL can produce counterexamples to help diagnose the problem. However, sometimes symbolic execution simply runs forever (Section \[sec:performance-problems\]). In other cases, a symbolic execution may produce an indeterminate result (Section \[sec:indeterminate-results\]), giving an example of inputs for which the symbolic execution failed. Finally, GL can run out of memory or spend too much time in garbage collection (Section \[sec:memory-problems\]). We have developed some tools and techniques for debugging these problems. Performance Problems {#sec:performance-problems} -------------------- Any bit-blasting tool has capacity limitations. However, you may also run into cases where GL is performing poorly due to preventable issues. When GL seems to be running forever, it can be helpful to trace the symbolic interpreter to see which functions are causing the problem. To trace the symbolic interpreter, run $$\texttt{(gl::trace-gl-interp~:show-values t)}.$$ Here, at each call of the symbolic interpreter, the term being interpreted and the variable bindings are shown, but since symbolic objects may be too large to print, any bindings that are not concrete are hidden. You can also get a trace with no variable bindings using `:show-values nil`. It may also be helpful to simply interrupt the computation and look at the Lisp backtrace, after executing $$\texttt{(set-debugger-enable t)}.$$ In many cases, performance problems are due to BDDs growing too large. This is likely the case if the interpreter appears to get stuck (not printing any more trace output) and the backtrace contains a lot of functions with names beginning in `q-`, which is the convention for BDD operators. In some cases, these performance problems may be solved by choosing a more efficient BDD order. But note that certain operations like multiplication are exponentially hard. 
If you run into these limits, you may need to refactor or decompose your problem into simpler sub-problems (Section \[sec:def-gl-param-thm\]). There is one kind of BDD performance problem with a special solution. Suppose GL is asked to prove `(equal spec impl)` when this does not actually hold. Sometimes the symbolic objects for `spec` and `impl` can be created, but the BDD representing their equality is too large to fit in memory. The goal may then be restated with `always-equal` instead of `equal` as the final comparison. Logically, `always-equal` is just `equal`. But `always-equal` has a custom symbolic counterpart that returns `t` when its arguments are equivalent, or else produces a symbolic object that captures just one counterexample and is indeterminate in all other cases. Another possible problem is that the symbolic interpreter never gets stuck, but keeps opening up more and more functions. These problems might be due to redundant recursion (see Section \[sec:redundant-recursion\]), which may be avoided by providing a more efficient preferred definition (Section \[sec:preferred-definitions\]) for the function. The symbolic interpreter might also be inefficiently interpreting function calls on concrete arguments, in which case a `def-gl-clause-processor` call may be used to allow GL to execute the functions directly (Section \[sec:concrete-execution\]). Indeterminate Results {#sec:indeterminate-results} --------------------- Occasionally, GL will abort a proof and print a message saying it found indeterminate results. In this case, the examples printed are likely *not* to be true counterexamples, and examining them may not be particularly useful. One likely reason for such a failure is that some of GL’s built-in symbolic counterparts have limitations. For example, most arithmetic primitives will not perform symbolic computations on non-integer numbers. 
When “bad” inputs are provided, instead of producing a new `:g-number` object, these functions will produce a `:g-apply` object, which is a type of symbolic object that represents a function call. A `:g-apply` object cannot be syntactically analyzed in the way other symbolic objects can, so most symbolic counterparts, given a `:g-apply` object, will simply create another one wrapping its arguments. To diagnose indeterminate results, it is helpful to know when the first `:g-apply` object was created. If you run $$\texttt{(gl::break-on-g-apply)},$$ then when a `:g-apply` object is constructed, the function and symbolic arguments will be printed and an interrupt will occur, allowing you to inspect the backtrace. For example, the following form produces an indeterminate result. $$\begin{array}{l} \texttt{(def-gl-thm~integer-half} \\ \texttt{~~:hyp~(and~(unsigned-byte-p~4~x)} \\ \texttt{~~~~~~~~~~~~(not~(logbitp~0~x)))} \\ \texttt{~~:concl~(equal~(*~1/2~x)} \\ \texttt{~~~~~~~~~~~~~~~~(ash~x~-1))} \\ \texttt{~~:g-bindings~`((x~,(g-int~0~1~5))))} \\ \end{array}$$ After running `(gl::break-on-g-apply)`, running the above form enters a break after printing $$\texttt{(g-apply BINARY-* (1/2 (:G-NUMBER (NIL \# \# \# NIL))))}$$ to signify that a `:g-apply` form was created after trying to multiply some symbolic integer by $\frac{1}{2}$. Another likely reason is that there is a typo in your theorem. When a variable is omitted from the `:g-bindings` form, a warning is printed and the missing variable is assigned a `:g-var` object. A `:g-var` can represent any ACL2 object, without restriction. Symbolic counterparts typically produce `:g-apply` objects when called on `:g-var` arguments, and this can easily lead to indeterminate results. Memory Problems {#sec:memory-problems} --------------- Memory management can play a significant role in symbolic execution performance. In some cases GL may use too much memory, leading to swapping and slow performance.
In other cases, garbage collection may run too frequently or may not reclaim much space. We have several recommendations for managing memory in large-scale GL proofs. Some of these suggestions are specific to Clozure Common Lisp. 1\. Load the `centaur/misc/memory-mgmt-raw` book and use the `set-max-mem` command to indicate how large you would like the Lisp heap to be. For instance, $$\texttt{(set-max-mem (* 8 (expt 2 30)))}$$ says to allocate 8 GB of memory. To avoid swapping, you should use somewhat less than your available physical memory. This book disables ephemeral garbage collection and configures the garbage collector to run only when the threshold set above is exceeded, which can boost performance. 2\. Optimize hash-consing performance. GL’s representations of BDDs and AIGs use `hons` for structure-sharing. The `hons-summary` command can be used at any time to see how many honses are currently in use, and hash-consing performance can be improved by pre-allocating space for these honses with `hons-resize`. See the `:doc` topics for these commands for more information. 3\. Be aware of (and control) hash-consing and memoization overhead. Symbolic execution can use a lot of hash conses and can populate the memoization tables for various functions. The memory used for these purposes is *not* automatically freed during garbage collection, so it may sometimes be necessary to manually reclaim it. A useful function is `(maybe-wash-memory n)`, which frees this memory and triggers a garbage collection only when the amount of free memory is below some threshold $n$. A good choice for $n$ might be 20% of the `set-max-mem` threshold. It can be useful to call `maybe-wash-memory` between proofs, or between the cases of parametrized theorems; see `:doc def-gl-param-thm` for its `:run-before-cases` argument. 
Related Work {#sec:related} ============ GL is most closely related to Boyer and Hunt’s [@09-boyer-g] *G* system, which was used for earlier proofs about Centaur’s floating-point unit. G used a symbolic object format similar to GL’s, but only supported BDDs. It also included a compiler that could produce “generalized” versions of functions, similar to symbolic counterparts. GL actually has such a compiler, but the interpreter is more convenient since no compilation step is necessary, and the performance difference is insignificant. In experimental comparisons, GL performed as well or better than G, perhaps due to the change from G’s sign/magnitude number encoding to GL’s two’s-complement encoding. The G system was written “outside the logic,” in Common Lisp. It could not be reasoned about by ACL2, but an experimental connection was developed which allowed ACL2 to trust G to prove theorems. In contrast, GL is written entirely in ACL2, and its proof procedure is a reflectively-verified clause processor, which provides a significantly better story of trust. Additionally, GL can be safely configured and extended by users via preferred definitions and custom symbolic counterparts. Reeber [@06-reeber-sulfa] identified a decidable subset of ACL2 formulas called SULFA and developed a SAT-based procedure for proving theorems in this subset. Notably, this subset included lists of bits and recursive functions of bounded depth. The decision procedure for SULFA is not mechanically verified, but Reeber’s dissertation [@07-reeber-dissertation] includes an argument for its correctness. GL addresses a different subset of ACL2 (e.g., SULFA includes uninterpreted functions, whereas GL includes numbers and arithmetic primitives), but the goals of both systems are similar. ACL2 has a built-in BDD algorithm (described in `:doc bdd`) that, like SULFA, basically deals with Booleans and lists of Booleans, but not numbers, addition, etc. 
This algorithm is tightly integrated with the prover; it can treat provably Boolean terms as variables and can use unconditional rewrite rules to simplify terms it encounters. The algorithm is written in program mode (outside the ACL2 logic) and has not been mechanically verified. GL seems to be significantly faster, at least on a simple series of addition-commutativity theorems. Fox [@11-fox-blasting] has implemented a bit-blasting procedure in HOL4 that can use SAT to solve problems phrased in terms of a particular bit-vector representation. This tool is based on an LCF-style integration of proof-producing SAT solvers, so it has a strong soundness story. We would expect there to be some overhead for any LCF-style solution [@09-weber-sat], and GL seems to be considerably faster on the examples in Fox’s paper; see the supporting materials for details. Manolios and Srinivasan [@06-manolios-pipeline] describe a connection between ACL2 and UCLID to verify that a pipelined processor implements its instruction set. In this work, ACL2 is used to simplify the correctness theorem for a bit-accurate model of the processor down to a more abstract, term-based goal. This goal is then given to UCLID, a decision procedure for a restricted logic of counter arithmetic, lambdas, and uninterpreted functions. UCLID then proves the goal much more efficiently than, e.g., ACL2’s rewriter. This work seems complementary to GL, which deals with bit-level reasoning, i.e., the parts of the problem that this strategy addresses using ACL2. Srinivasan [@07-srinivasan-dissertation] additionally described ACL2-SMT, a connection with the Yices SMT solver. The system attempts to unroll and simplify ACL2 formulas until they can be translated into the input language of the SMT solver (essentially linear integer arithmetic, array operations, and uninterpreted integer and Boolean functions). It then calls Yices to discharge the goal, and Yices is trusted. 
GL addresses a different subset of ACL2, e.g., GL supports list operations and more arithmetic operations like `logand`, but ACL2-SMT has uninterpreted functions and can deal with, e.g., unbounded arithmetic. Armand et al. [@11-armand-sat] describe work to connect SAT and SMT solvers with Coq. Unlike the ACL2-SMT work, the connection is carried out in a verified way, with Coq being used to check proof witnesses generated by the solvers. This connection can be used to prove Coq goals that directly fit into the supported logic of the SMT solver. GL is somewhat different in that it allows almost any ACL2 term to be handled when its variables range over a finite space. Conclusions =========== GL provides a convenient and efficient way to solve many finite ACL2 theorems that arise in hardware verification. It allows properties to be stated in a straightforward manner, scales to large problems, and provides clear counterexamples for debugging. At Centaur Technology, it plays an important role in the verification of arithmetic units, and we make frequent improvements to support new uses. Beyond this paper, we encourage all GL users to see the online documentation, which can be found under `:doc gl` after loading the GL library. If you prefer, you can also generate an HTML version of the documentation; see `centaur/README` for details. Finally, the documentation for ACL2(h) may be useful, and can be found at `:doc hons-and-memoization`. While we have described the basic idea of symbolic execution and how GL uses it to prove theorems, Swords’ dissertation [@10-swords-dissertation] contains a much more detailed description of GL’s implementation. It covers tricky topics like the handling of `if` statements and the details of BDD parametrization. It also covers the logical foundations of GL, such as correctness claims for symbolic counterparts, the introduction of symbolic interpreters, and the definition and verification of the GL clause processor. 
Acknowledgments --------------- Bob Boyer and Warren Hunt developed the G system, which pioneered many of the ideas in GL. Anna Slobodová has carried out several sophisticated proofs with GL and beta-tested many GL features. Matt Kaufmann and Niklas Een have contributed to our verified SAT integration. Gary Byers has answered many of our questions and given us advice about Clozure Common Lisp. We thank Warren Hunt, Matt Kaufmann, David Rager, Anna Slobodová, and the anonymous reviewers for their corrections and feedback on this paper. [^1]: Note that since `#b0010100` is not within a `:g-boolean` or `:g-number` form, it is *not* the index of a Boolean variable. Instead, like the symbols `exact` and `fast`, it is just an ordinary ACL2 constant that stands for itself, i.e., 20.
--- author: - 'Giovanni Picogna, Wilhelm Kley' bibliography: - 'biblio.bib' date: 'Received / Accepted ' subtitle: HL Tau system title: 'How do giant planetary cores shape the dust disk?' --- [We are observing, thanks to ALMA, the dust distribution in the region of active planet formation around young stars. This is a powerful tool to connect observations with theoretical models and improve our understanding of the processes at play.]{} [We want to test how a multi-planetary system shapes its birth disk and study the influence of the planetary masses and particle sizes on the final dust distribution. Moreover, we apply our model to the HL Tau system in order to obtain some insights on the physical parameters of the planets that are able to create the observed features.]{} [We follow the evolution of a population of dust particles, treated as Lagrangian particles, in two-dimensional, locally isothermal disks where two equal-mass planets are present. The planets are kept on fixed orbits and they do not accrete mass.]{} [The outer planet plays a major role, removing the dust particles in the co-orbital region of the inner planet and forming a particle ring which promotes the development of vortices with respect to the single-planet case. The ring and gap widths depend strongly on the planetary mass and particle stopping times, and for the more massive cases the ring clumps into a few stable points that are able to collect a high mass fraction. The features observed in the HL Tau system can be explained through the presence of several massive cores that shape the dust disk, where the inner planet(s) should have a mass on the order of $0.07\,M_\mathrm{Jup}$ and the outer one(s) on the order of $0.35\,M_\mathrm{Jup}$. These values can be significantly lower if the disk mass turns out to be less than previously estimated. Decreasing the disk mass by a factor of 10, we obtain similar gap widths for planets with masses of $10\,M_\oplus$ and $20\,M_\oplus$, respectively. 
Although the particle gaps are prominent, the expected gaseous gaps would be barely visible.]{} Introduction {#sec:intro} ============ The planetary cores of giant planets form on a timescale $\sim 1\,\mbox{Myr}$. In this relatively short time-span, a huge number of processes takes place, allowing a swarm of small dust particles to grow several orders of magnitude in size and mass before the gas disk is removed. Until now, the only observational constraints available to test planet formation models were the gas and dust emission from protoplanetary nebulae on large scales (on the order of $\sim100\,\mbox{au}$) and the final stage of planet formation, probed through the detection of full-fledged planetary systems. Thanks to the recent advent of a new generation of radio telescopes, like the Atacama Large Millimeter Array (ALMA), we are starting to get some pristine images of the formation process itself, resolving the dust component of protoplanetary disks in the region of active planet formation around young stars. An outstanding example of this giant leap in the observational data is the young HL Tau system, imaged by ALMA in Bands $3$, $6$, and $7$ (respectively at wavelengths $2.9$, $1.3$ and $0.87\,\mbox{mm}$) with a spatial resolution up to $3.5\,\mbox{au}$. Several features can be seen in the young protoplanetary disk, but the most striking is the presence of several axisymmetric rings in the $\mbox{mm}$ dust disk [@Partnership2014]. Although different mechanisms can be responsible for the observed features [@Flock2015; @Zhang2015], the most straightforward explanation for the ring formation, in the sense that other mechanisms require specific initial conditions that reduce their general applicability, is the presence of several planetary cores that grow in their birth disk and shape its dust content. 
Indeed, in order to have a particle concentration in a particular region, we need a steep pressure gradient in the gaseous disk, which can trap particles that are ‘sufficiently’ decoupled from the gas by changing their migration direction. A long-lived high-pressure region can be created even by a small-mass planet [@Paardekooper2004], which can effectively carve a deep dust gap and concentrate particles at the gap edges and at corotation in tadpole orbits [@Paardekooper2007; @Fouchet2007]. The aim of this paper is to test how two giant planetary cores shape the dust disk in which they are born, implementing a particle population in the 2D hydro code <span style="font-variant:small-caps;">fargo</span> [@Masset2000], and to study the influence of the planetary masses and particle sizes on the final disk distribution. Moreover, we apply our model to the HL Tau system in order to obtain some insights on the physical parameters of the planets creating the observed features. This paper is organised as follows. In Section \[sec:model\] we discuss under which physical conditions a planet is capable of opening a gap in the dust and gaseous disk, in order to define the important physical scales for our model. In Section \[sec:drag\] we define the model adopted for the gas drag. Then, the setup of our simulations is explained in Section \[sec:initsetup\], and the main results are outlined in Section \[sec:res\]. Finally, in Section \[sec:disc\] we discuss our results and their implications and limitations, while the major outcomes are highlighted in Section \[sec:conc\]. Background {#sec:model} ========== In order to set up our model, we first need to determine the minimum mass of a planetary core that is able to open up a gap in the gaseous and dust disk, for a given set of disk parameters, and the corresponding opening timescale. In particular, we want to understand the influence of the different physical processes modelled on the outcome of the simulation. 
Theory of gap formation {#sec:gap} ----------------------- The theory of gap formation in gaseous disks has been studied extensively in the past, and there is a set of general criteria that a planet must fulfil in order to carve a gap. However, opening a gap in the dust disk is more complicated, since it depends strongly on the coupling between the dust and the gaseous media, and this problem has been tackled only recently. ### Gaseous gap The torque exchange between the disk and the planet adds angular momentum to the outer disk regions and removes it from the inner ones. As a result, the disk structure is modified in the regions close to the planet location and, given a minimum core mass and enough time, a gap develops. The time scale needed to open a gap of half width $x_\mathrm{s}$ can be crudely estimated from the impulse approximation [@Lin1979]. The total torque acting on a planet of mass $M_\mathrm{p}$ and semi-major axis $a$ due to its interaction with the outer disk of surface density $\Sigma$ is [@Lin1979; @Papaloizou2006] $$\label{eq:torqPlan} \frac{dJ}{dt}=-\frac{8}{27}\frac{G^2M_\mathrm{p}^2 a\Sigma} {\Omega_\mathrm{p}^2{x_\mathrm{s}}^3}$$ where $\Omega_\mathrm{p}=\sqrt{GM_\star/a^3}$ is the planet’s orbital frequency around a star of mass $M_\star$. The angular momentum that must be added to the disk in order to remove the gas inside the gap is $$\label{eq:AngMomRem} \Delta J=2\pi a x_\mathrm{s} \Sigma\frac{dl}{dr}\biggr|_{a}x_\mathrm{s}$$ where $l=\sqrt{GM_\star r}$ is the gas specific angular momentum. 
Thus, the gap opening time can be estimated as $$\begin{aligned} \label{eq:topen} t_\mathrm{open} &= \frac{\Delta J}{|dJ/dt|} = \frac{27}{8}\pi\frac{1}{q^2\sqrt{GM_\star a}} \frac{{x_\mathrm{s}}^5}{a^3} \\ &\simeq 33.8\,{\left(\frac{h}{0.05}\right)}^{5} {\left(\frac{q}{1.25*10^{-4}}\right)}^{-2} P_\mathrm{p} \end{aligned}$$ where $q=M_\mathrm{p}/M_\star$ is the planet-to-star mass ratio, and we assumed that the minimum half width of the gap should be $x_\mathrm{s}=H$, where $H$ is the effective disk thickness, and $h=H/r$ is the normalised disk scale height. All values are evaluated at the planet location and the final estimate of the opening time is given in units of the planet orbital time $P_\mathrm{p}$. Although this is a crude estimate, it has been shown by more detailed calculations that the order of magnitude obtained is correct. Based on this criterion, given enough time, even a small core can open a gap in an inviscid disk. However, we need to quantify the magnitude of the competing factors that act to prevent or promote its development in order to obtain a better estimate of the gap opening timescale and the minimum mass ratio. #### Thermal condition — The assumption made for the minimum gap half width in eq. (\[eq:topen\]) is necessary to allow non-linear dissipation of waves generated by the planet [@Korycansky1996] and to avoid dynamical instabilities at the planet location [@Papaloizou1984], which are necessary conditions to clear the regions close to the planet location. 
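Combining eq. (\[eq:topen\]) with $P_\mathrm{p}=2\pi a^{3/2}/\sqrt{GM_\star}$ and $x_\mathrm{s}=ha$ gives the dimensionless form $t_\mathrm{open}/P_\mathrm{p}=(27/16)\,h^5/q^2$. The following short script (a sanity check of ours, not part of the paper's code) reproduces the quoted prefactor:

```python
from math import isclose

def t_open_in_orbits(h, q):
    """Gap-opening time in units of the planet orbital period P_p.

    Derived from t_open = (27/8) pi x_s^5 / (q^2 sqrt(G M a) a^3)
    with x_s = H = h a and P_p = 2 pi a^(3/2) / sqrt(G M),
    which reduces to (27/16) h^5 / q^2.
    """
    return (27.0 / 16.0) * h**5 / q**2

# Reference values used in the text: h = 0.05, q = 1.25e-4
print(t_open_in_orbits(0.05, 1.25e-4))  # 33.75, quoted as ~33.8 P_p
assert isclose(t_open_in_orbits(0.05, 1.25e-4), 33.75)
```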
This condition, called the thermal condition, translates into a first criterion for opening up a gap $$\label{eq:gapTherm} x_\mathrm{s}=1.16a\sqrt{\frac{q}{h}}\geq H = ha$$ for a 2D disk [@Masset2006], which corresponds to a minimum planet-to-star mass ratio of $$\label{eq:gapqTherm} q_\mathrm{th} \simeq h^3 = 1.25*10^{-4}\, {\left(\frac{h}{0.05}\right)}^3$$ and a related thermal mass $$\label{eq:gapMTherm} M_\mathrm{th} \simeq M_\star h^3 = 1.25*10^{-4} M_\star$$ #### Viscous condition — Viscous diffusion acts to smooth out sharp radial gradients in the disk surface density, counteracting the gap clearing mechanism. The time needed by viscous forces to close up a gap of width $x_\mathrm{s}$ is given by the diffusion timescale for a viscous fluid, which can be derived directly from the Navier-Stokes equation in cylindrical polar coordinates [see e.g. @Armitage2010] $$\label{eq:tvisc} t_\mathrm{visc}=\frac{x_\mathrm{s}^2}{\nu}\simeq 39.8\, {\left(\frac{\alpha}{0.004}\right)}^{-1} P_\mathrm{p}$$ where $\nu$ is the kinematic viscosity, and $\alpha=\nu\Omega/c_\mathrm{s}^2$ is the Shakura-Sunyaev parameter that measures the efficiency of angular momentum transport due to turbulence. The minimum mass ratio $q_\mathrm{visc}$ needed to open a gap in a viscous disk is obtained by comparing the opening time due to the torque interaction, eq. (\[eq:topen\]), with the closing time owing to viscous stresses, eq. (\[eq:tvisc\]) [@Lin1986; @Lin1993] $$\label{eq:gapVisc} q_\mathrm{visc} \simeq {\left(\frac{27}{8}\pi\right)}^{1/2}\alpha^{1/2}h^{5/2} \simeq 1.15*10^{-4}{\left(\frac{\alpha}{0.004}\right)}^{1/2} {\left(\frac{h}{0.05}\right)}^{5/2}$$ Thus, for the parameters chosen, the viscous condition is very similar to the thermal condition. 
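As a quick numerical check of the thermal criterion (our own script, not from the paper): solving $1.16\sqrt{q/h}=h$ exactly gives $q=h^3/1.16^2\approx0.74\,h^3$, which the text rounds to $q_\mathrm{th}\simeq h^3$.

```python
h = 0.05

q_exact = h**3 / 1.16**2   # exact solution of 1.16*sqrt(q/h) = h
q_th = h**3                # approximation used in the text, eq. (gapqTherm)

print(q_exact, q_th)       # ~9.3e-5 vs 1.25e-4: same order of magnitude
assert abs(q_th - 1.25e-4) < 1e-12
assert 0.7 < q_exact / q_th < 0.8
```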
#### Generalised condition — A more general semi-analytic criterion, which takes into account the balance between pressure, gravitational, and viscous torques for a planet on a fixed circular orbit, has also been derived [@Lin1993; @Crida2006] $$\label{eq:gapCrida} \frac{3}{4}\frac{H}{r_\mathrm{H}}+\frac{50}{q\ R_\mathrm{e}} < 1$$ where $r_\mathrm{H}=a{(q/3)}^{1/3}$ is the Hill radius and $R_\mathrm{e}$ is the Reynolds number. From the previous relation, plugging in the parameters used in our analysis, we find that the minimum mass ratio is $q\simeq10^{-3}$. However, this criterion was derived for low planetary masses in low viscosity disks, and the behaviour might be considerably different for other parameters. Moreover, this condition defines a gap as a drop of the mass density to $10\%$ of the unperturbed density at the planet’s location, but even a less dramatic depletion of mass affects planet-disk interaction and could potentially be detected. ### Dust gap In order to create a gap in the dust disk we need a radial pressure structure induced by the planet. Indeed, even a very shallow gap in the gas will change the drift speed of the dust particles significantly [@Whipple1972; @Weidenschilling1977], favouring the formation of a particle gap. Thus, the minimum mass needed to open up a gap in the dust disk is a fraction of the one needed to clear a gap in the gas. @Paardekooper2004 [@Paardekooper2006] performed extensive 2D simulations, treating the dust as a pressure-less fluid (which is a good approximation for tightly coupled particles), and found that a planet more massive than $0.05 M_\mathrm{Jup}=0.38\,M_\mathrm{th}$ can open a gap in a mm-size dust disk. Furthermore, the dust gap opening time for this lower mass case was $50\,P_\mathrm{p}$, which is about half the timescale to open a gas gap. Previous studies also found a clear dependence of the gap width on the core mass, with larger planets opening wider gaps [@Paardekooper2006; @Zhu2014]. 
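The viscous and generalised thresholds can likewise be verified numerically. The script below is our own cross-check; it assumes $R_\mathrm{e}=a^2\Omega/\nu=1/(\alpha h^2)$, i.e. the Reynolds number evaluated with the $\alpha$-viscosity at the planet location.

```python
from math import pi, sqrt

alpha, h = 0.004, 0.05

# eq. (tvisc): with nu = alpha c_s^2 / Omega and x_s = H,
# t_visc / P_p = 1 / (2 pi alpha)
t_visc = 1.0 / (2.0 * pi * alpha)
print(t_visc)                                  # ~39.8 orbits

# eq. (gapVisc)
q_visc = sqrt(27.0 * pi / 8.0) * sqrt(alpha) * h**2.5
print(q_visc)                                  # ~1.15e-4

# eq. (gapCrida), left-hand side, with R_e = 1/(alpha h^2) = 1e5
Re = 1.0 / (alpha * h**2)
def crida_lhs(q):
    r_hill = (q / 3.0)**(1.0 / 3.0)            # Hill radius in units of a
    return 0.75 * h / r_hill + 50.0 / (q * Re)

# The criterion crosses unity near q ~ 1e-3, as stated in the text
assert crida_lhs(1.2e-3) < 1.0 < crida_lhs(9.0e-4)
```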
Finally, there are some contrasting results regarding the dependence of the gap width on the particle size [@Paardekooper2006], but it seems that, for less coupled particles, the gap is wider for larger particles [@Fouchet2007]. Model {#sec:drag} ===== Solid particles and gaseous molecules exchange momentum through drag forces, which depend strongly on the state of the gas and on the shape, size and velocity of the particle. For the sake of simplicity we limit ourselves to spherical particles. The drag force always acts in the direction opposite to the relative velocity. The regime that describes a particular system is defined by any two of three non-dimensional parameters. The Knudsen number $K = \lambda/s$ is the ratio of the two major length scales of the system: the mean free path of the gas molecules $\lambda$ and the particle size $s$. The Mach number $M = v_\mathrm{r}/c_\mathrm{s}$ is the ratio of the relative velocity between particles and gas, $\mathbf{v}_\mathrm{r}$, to the gas sound speed $c_\mathrm{s}$. Finally, the Reynolds number $R_\mathrm{e}$ is related to different physical properties of the particle and gaseous media $$R_\mathrm{e} = \frac{2 v_\mathrm{r}s}{\nu_\mathrm{m}}$$ where $\nu_\mathrm{m}$ is the gas molecular viscosity, defined as $$\nu_\mathrm{m} = \frac{1}{3}{\left(\frac{m_0\bar{v}_\mathrm{th}}{\sigma} \right)}$$ where $m_0$ and $\bar{v}_\mathrm{th}$ are the mass and mean thermal velocity of the gas molecules, and $\sigma$ is their collisional cross section. There are two main drag regimes that we are going to study. Stokes regime ------------- For small Knudsen numbers, the particle experiences the gas as a fluid. 
The drag force of a viscous medium with density $\rho_\mathrm{g} (\mathbf{r}_\mathrm{p})$ acting on a spherical dust particle with radius $s$ can be modelled as [@Landau1959] $$\mathbf{F}_\mathrm{D,S}=-\frac{1}{2}C_\mathrm{D}\pi s^2\rho_\mathrm{g} (\mathbf{r}_\mathrm{p}) v_\mathrm{r} \mathbf{v}_\mathrm{r}$$ where the drag coefficient $C_\mathrm{D}$ is defined for the various regimes described above as [@Whipple1972; @Weidenschilling1977] $$C_\mathrm{D} \simeq \begin{cases} 24\,{R_\mathrm{e}}^{-1} & R_\mathrm{e} < 1 \\ 24\,{R_\mathrm{e}}^{-0.6} & 1 < R_\mathrm{e} < 800 \\ 0.44 & R_\mathrm{e} > 800 \end{cases}$$ These expressions hold for low Mach numbers; for our choice of the parameter space, however, high Mach numbers are not expected in this regime. Epstein regime -------------- For large Knudsen numbers, the interaction between the particle and individual gas molecules becomes important. It can be modelled as [@Epstein1923] $$\mathbf{F}_\mathrm{D,E}= -\frac{4}{3}\pi\rho_\mathrm{g} (\mathbf{r}_\mathrm{p})s^2\bar{v}_\mathrm{th} \mathbf{v}_\mathrm{r}$$ General law ----------- The transition between the Epstein and Stokes regimes occurs for a particle of size $s=9\lambda/4$, which in our case corresponds to a metre-sized particle in the inner disk, where the mean free path of the gas molecules is defined as [@Haghighipour2003] $$\lambda = \frac{m_0}{\pi a_0^2\rho_\mathrm{g}(r)} = \frac{4.72*10^{-9}}{\rho_\mathrm{g}}\,\mbox{cm}$$ for molecular hydrogen with $a_0=1.5*10^{-8}\,\mbox{cm}$. 
In order to model a broad range of particle sizes we adopt a linear combination of the Stokes and Epstein regimes [@Supulver2000; @Haghighipour2003] $$\mathbf{F}_\mathrm{D} = (1-f)\mathbf{F}_\mathrm{D,E} + f\mathbf{F}_\mathrm{D,S}$$ where the factor $f$ is related to the Knudsen number and is defined as: $$f=\frac{s}{s+\lambda}=\frac{1}{1+\mathit{Kn}}$$ Stopping time ------------- An important parameter to evaluate the strength of the drag force is the stopping time $t_\mathrm{s}$, which can be defined as $$\mathbf{F}_\mathrm{D} = -\frac{m_\mathrm{s}}{t_\mathrm{s}} \mathbf{v}_\mathrm{r}$$ where $m_\mathrm{s}$ is the mass of the single dust particle of density $\rho_\mathrm{s}$ and, in the Epstein regime, the stopping time can be expressed as $$\label{eq:stop} t_\mathrm{s}=\frac{s\rho_\mathrm{s}}{\rho_\mathrm{g}\bar{v}_\mathrm{th}}$$ It is also useful to derive a non-dimensional stopping time (or Stokes number) as $$\tau_\mathrm{s}=\frac{s\rho_\mathrm{s}}{\rho_\mathrm{g} \bar{v}_\mathrm{th}}\Omega_\mathrm{K}(\mathbf{r})$$ Setup {#sec:initsetup} ===== We used the <span style="font-variant:small-caps;">fargo</span> code [@Masset2000; @Baruteau2008], modified in order to take into account the evolution of partially decoupled particles. An infinitesimally thin disk around a star resembling the observed HL Tau system [@Kwon2011] is modelled. Thus, the vertically integrated versions of the hydrodynamical equations are solved in cylindrical coordinates ($r,\phi,z$), centred on the star, with the disk lying in the equatorial plane ($z=0$). The resolution adopted in the main simulations is $256\times512$ with $250\,000$ dust particles for each size, although we also ran a case with twice the resolution in order to check that our results are not resolution dependent. Gas component {#par:gasdisk} ------------- The initial disk is axisymmetric and extends from $0.1$ to $4$ in code units, where the unit of length is $25\,\mbox{au}$. 
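A minimal sketch of this blended drag law (our own illustration; the function names and the assumption of CGS-consistent inputs are ours, not the paper's actual implementation):

```python
def blend_factor(s, lam):
    """f = s/(s + lambda): f -> 1 (Stokes) for s >> lambda,
    f -> 0 (Epstein) for s << lambda."""
    return s / (s + lam)

def stokes_number(s, rho_solid, rho_gas, v_th, omega_k):
    """Epstein-regime dimensionless stopping time tau_s = t_s * Omega_K,
    following eq. (stop)."""
    t_s = s * rho_solid / (rho_gas * v_th)
    return t_s * omega_k

# At the Epstein/Stokes transition size s = 9*lambda/4 the blend gives f = 9/13
lam = 1.0
assert abs(blend_factor(2.25 * lam, lam) - 9.0 / 13.0) < 1e-12
```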
The gas moves with the azimuthal velocity given by the Keplerian speed around a central star of mass $1$ ($0.55\,M_\odot$), corrected for the rotation velocity of the coordinate system. We assume no initial radial motion of the gas, since a thin Keplerian disk is radially in equilibrium: gravitational and centrifugal forces approximately balance because pressure effects are small. The initial surface density profile is given by $$\label{eq:surfprof} \Sigma(r)=\Sigma_0\ r^{-1}$$ where $\Sigma_0$ is the surface density at $r=1$, chosen such that the total disk mass is equal to $0.24$ ($0.13\ M_\odot$) in order to match the value found by @Kwon2011. The disk is modelled with a locally isothermal equation of state, which keeps the initial temperature stratification constant throughout the whole simulation. We assume a constant aspect ratio $H/r = 0.05$, which corresponds to a temperature profile $$\label{eq:tempprof} T(r) = T_0\ r^{-1}$$ We introduce a density floor of $\Sigma_\mathrm{floor} = 10^{-9}\ \Sigma_0$ in order to avoid numerical issues. For the inner boundary we apply a zero-gradient outflow condition, while for the outer boundary we adopt a non-reflecting boundary condition. In addition, to maintain the initial disk structure in the outer parts of the disk we implement a wave killing zone close to the boundary [@deVal-Borro2006], $$\label{eq:dumping} \frac{d\xi}{dt}=-\frac{\xi-\xi_0}{\tau} {R(r)}^2$$ where $\xi$ represents the radial velocity, angular velocity, or surface density. These quantities are damped towards their initial values on a timescale given by $\tau$, and $R(r)$ is a linear ramp function decreasing from $1$ to $0$ from $r=3.6$ to the outer radius of the computational domain. The details of the implementation of the boundary conditions can be found in @Muller2012. For the viscosity we adopt a constant $\alpha$-viscosity with $\alpha = 0.004$. 
Furthermore, we discuss the gravitational stability of this initial configuration in Appendix \[sec::stability\]. Dust component {#par:dustdisk} -------------- The solid fraction of the disk is modelled with $250\,000$ Lagrangian particles for each size considered. We study particles with sizes of $\mbox{mm},\mbox{cm},\mbox{dm},\mbox{m}$, and internal density $\rho_\mathrm{d}=2.6\ \mbox{g/cm}^3$. The initial surface density profile for the dust particles is flat $$\Sigma_\mathrm{s}(r) = \Sigma_\mathrm{s,0}$$ This choice was made in order to have a larger reservoir of particles in the outer disk, since at the beginning of the simulation the planets are slowly growing and thus unable to filter particles efficiently. The particles are introduced at the beginning of the simulation and evolved with two different integrators, depending on their stopping times. Following the approach of [@Zhu2014], we adopted a semi-implicit Leapfrog-like (Drift-Kick-Drift) integrator in polar coordinates for larger particles, and a fully implicit integrator for particles well coupled to the gas. For the interested reader, we have added in Appendix \[sec::integrators\] the detailed implementation of the two integrators. In this work we do not take into account the back-reaction of the particles on the gaseous disk, since we are interested only in the general structure of the dust disk and not in the evolution of dust clumps. Furthermore, for the sake of simplicity and to speed up our simulations, we do not consider the effect of the disk self-gravity on the particle evolution. This could in principle be an important factor for the young and massive HL Tau system, although no asymmetric structures related to a gravitationally unstable disk are observed. Finally, we do not consider particle diffusion by disk turbulence, which could be important to prevent strong clumping of particles (Baruteau, private communication); this will be the subject of a future study. 
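To illustrate why an implicit treatment of the drag term matters for well-coupled particles, here is a simplified 1D sketch (our own construction, not the actual polar-coordinate schemes of Appendix \[sec::integrators\]): the drag "kick" is solved with backward Euler, so the step stays stable even when $\Delta t \gg t_\mathrm{s}$.

```python
def kick_implicit_drag(v, v_gas, a_grav, t_s, dt):
    """Backward-Euler solve of dv/dt = a_grav - (v - v_gas)/t_s over dt.

    Stable for dt >> t_s: the particle velocity relaxes towards the
    gas velocity instead of oscillating or blowing up.
    """
    return (v + dt * (a_grav + v_gas / t_s)) / (1.0 + dt / t_s)

def dkd_step(x, v, v_gas, a_grav, t_s, dt):
    """One Drift-Kick-Drift step: half drift, implicit-drag kick, half drift."""
    x = x + 0.5 * dt * v
    v = kick_implicit_drag(v, v_gas, a_grav, t_s, dt)
    x = x + 0.5 * dt * v
    return x, v

# Tightly coupled limit: after one big step the particle moves with the gas
_, v_new = dkd_step(0.0, 5.0, 1.0, 0.0, t_s=1e-6, dt=1.0)
assert abs(v_new - 1.0) < 1e-4
```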
In Figure \[Fig:Ts\] we plot the stopping times calculated at the beginning of the simulations for the various particle species modelled. The smaller (cm- and mm-size) particles are strongly coupled to the gas in the whole domain, while dm-size particles approach a stopping time of order unity in the outer disk, and m-size particles in the inner part, where we can also see a change in the profile due to the transition from the Epstein to the Stokes drag regime. ![Stopping time at the beginning of the simulation for the different particle sizes modelled.\[Fig:Ts\]](Ts){width=".45\textwidth"} The transition between the Epstein and Stokes regimes is clearly visible in Figure \[Fig:Vrad\], where the equilibrium radial drift velocity is plotted for the different particle sizes over the whole domain. As the particles approach a stopping time of order unity their radial velocity grows, so the highest values are associated with the dm-size particles in the outer disk and the m-size particles in the inner parts. Furthermore, due to the transition between the two drag regimes, the profile of the curves changes rapidly from cm- to m-size particles. We point out that when the planets start to clear a gap, the gas surface density inside it drops, and thus the stopping time of particles on horseshoe orbits can increase by up to 2 orders of magnitude [@Paardekooper2006]. The transition between the different drag regimes is then expected not only in the inner part of the disk but also near the planets’ co-orbital regions. ![Radial drift velocity profile at the equilibrium for the different particle sizes modelled.\[Fig:Vrad\]](vr){width=".45\textwidth"} Planets {#par:planets} ------- We embed two equal-mass planets that orbit their parent star on circular orbits with semi-major axes $a_1=1$ and $a_2=2$. Their masses range from $1\,M_\mathrm{th}=0.07\,M_\mathrm{Jup}$ to $10\,M_\mathrm{th}=0.7\,M_\mathrm{Jup}$. 
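The peak of the drift speed at a stopping time of order unity follows from the standard steady-state drift solution, $v_r=-2\tau_\mathrm{s}/(1+\tau_\mathrm{s}^2)\,\eta v_\mathrm{K}$ [@Weidenschilling1977]. A one-line check of that behaviour (ours; the paper's actual curves also include the Epstein/Stokes transition):

```python
def radial_drift(tau_s, eta_v_k):
    """Steady-state radial drift speed for Stokes number tau_s
    (classic Weidenschilling 1977 result, not the paper's full curve)."""
    return -2.0 * tau_s / (1.0 + tau_s**2) * eta_v_k

# Drift is fastest for tau_s = 1, as seen for dm-size particles in the
# outer disk and m-size particles in the inner disk
speeds = {t: abs(radial_drift(t, 1.0)) for t in (0.01, 0.1, 1.0, 10.0, 100.0)}
assert max(speeds, key=speeds.get) == 1.0
```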
Mass accretion is not allowed, and the planets do not feel the disk, so their orbital parameters remain fixed during the whole simulation. The gravitational potential of the planets is modelled through a Plummer-type prescription, which takes into account the vertical extent of the disk and avoids the numerical issues related to a point-mass potential. We used a smoothing value of $\epsilon = 0.6\ H$, as this describes the vertically averaged forces very well [@Muller2012]. To prevent strong shock waves in the initial phase of the simulations, the planetary core mass is increased slowly over $20$ orbits. Tab. \[tab:sum\] summarises the parameters of the standard model. Parameter Range --------------------------------- ------------------------- Planet mass \[$M_\mathrm{th}$\] $1$, $5$, $10$ Dust size \[$\mbox{cm}$\] $0.1$, $1$, $10$, $100$ : Models[]{data-label="tab:sum"} With these values the gaps are not expected to overlap since, even for the highest planet mass, eq. (\[eq:gapTherm\]) gives $x_\mathrm{s}\simeq0.18$, and thus $$a_1+x_\mathrm{s} < a_2-x_\mathrm{s}$$ The simulations were run for 600 orbital times of the inner planet, which corresponds to $\sim 200$ orbits of the outer planet and to $\sim 10^5\,\mbox{yr}$, a substantial amount of time for a planetary system around a young star like HL Tauri, whose age is only $\sim 10^6\,\mbox{yr}$. Results {#sec:res} ======= Massive core ($10 M_\mathrm{th}$) {#sec:res10} --------------------------------- Based on the gap-opening criteria reviewed in Section \[sec:model\], two $10 M_\mathrm{th}$ planets should rapidly open gaps in the gas and dust disks. We study in detail the disk evolution for these massive cores, focusing on particle concentration, which happens mainly at gap edges. In the following analysis the region of high surface density between the two planets is referred to as the ring. 
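The non-overlap condition can be checked directly with the paper's numbers (a trivial sketch; the helper name is ours):

```python
def gaps_overlap(a1, a2, x_s):
    """Gaps of half-width x_s around planets at a1 < a2 overlap only if
    the outer edge of the inner gap reaches past the inner edge of the
    outer gap, i.e. a1 + x_s >= a2 - x_s."""
    return a1 + x_s >= a2 - x_s

# With a1 = 1, a2 = 2 and x_s ~ 0.18 (the highest planet mass) the
# gaps remain separated, leaving room for the ring in between.
separated = not gaps_overlap(1.0, 2.0, 0.18)
```

For the semi-major axes used here, overlap would require $x_\mathrm{s} \geq 0.5$, well above the largest gap half-width in the parameter space.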
It has an inner and an outer edge, which correspond to the outer edge of the inner gap and the inner edge of the outer gap, respectively. In addition, there are the outer edge of the outer gap and the inner edge of the inner gap. The study of the various gap edges, where there is an abrupt change in the surface density profile, is important since these are potentially unstable regions where gas and particles can collect, changing the final surface density distribution of the disk. ### Gas distribution The inner planet has already opened a clear gap after 100 orbits (Figure \[Fig:Surf10\] - top panel) where, as expected from the general criterion of [@Crida2006], the surface density is an order of magnitude lower than its unperturbed value. Meanwhile, the outer planet is still opening its gap, since it has a longer dynamical timescale. ![Gas surface density (top panel) and vorticity (bottom panel) profile after $100$ orbits.\[Fig:Surf10\]](Sigma "fig:"){width=".45\textwidth"} ![Gas surface density (top panel) and vorticity (bottom panel) profile after $100$ orbits.\[Fig:Surf10\]](Vort "fig:"){width=".45\textwidth"} The steep surface density profile close to the ring inner edge can trigger a Rossby wave instability (RWI, [@Li2001]). This instability gives rise to a growing non-axisymmetric perturbation consisting of anticyclonic vortices. A vortex is able to collect a high mass fraction and can change significantly the final distribution of gas and dust in the disk. An important quantity to study when considering the evolution of vortices is the gas vorticity, defined as $$\omega_\mathrm{z}={(\nabla\times \mathbf{v})}_\mathrm{z}$$ We show its profile in Figure \[Fig:Surf10\] (bottom panel) and its 2D distribution in Figure \[Fig:Surf2D10\] (bottom panel). 
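As a sanity check on this definition, the vorticity of an unperturbed Keplerian flow should equal $\Omega/2$. A finite-difference evaluation on a polar grid reproduces this (our own illustration of the definition, not the diagnostic implemented in the code):

```python
import numpy as np

def vorticity_z(r, phi, v_r, v_phi):
    """z-component of the curl on a polar (r, phi) grid:
    omega_z = (1/r) * d(r*v_phi)/dr - (1/r) * d(v_r)/dphi."""
    d_rvphi_dr = np.gradient(r[:, None] * v_phi, r, axis=0)
    d_vr_dphi = np.gradient(v_r, phi, axis=1)
    return (d_rvphi_dr - d_vr_dphi) / r[:, None]

r = np.linspace(0.5, 2.5, 400)
phi = np.linspace(0.0, 2.0 * np.pi, 128)
v_phi = np.outer(r ** -0.5, np.ones_like(phi))  # Keplerian rotation curve
v_r = np.zeros_like(v_phi)
omega = vorticity_z(r, phi, v_r, v_phi)
# An unperturbed Keplerian disk has omega_z = Omega/2 = 0.5 * r**-1.5
```

Deviations from this background value are what single out the anticyclonic (low-vorticity) regions discussed in the text.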
Comparing the 2D distributions of vorticity and surface density (Figure \[Fig:Surf2D10\]), we see that the vorticity peaks where the gap is deeper, while low-vorticity regions appear at the centre of the spiral arms created by the planets and close to the ring inner edge. The development of vortices due to the presence of a planet has been studied extensively. However, their evolution in a multi-planet system has not yet been addressed. From Figure \[Fig:Surf2D10\] we see that the outer planet substantially perturbs the co-orbital region of the inner one. Two competing factors must be taken into account in order to estimate the lifetime of a vortex. On the one hand, vortex formation is promoted by the enhanced surface density gradient at the ring location, due to the combined action of both planets pushing the disk away from their locations. On the other hand, the periodic close encounters of the outer planet with the vortices enhance the eccentricity of the dust particles trapped in them, favouring their escape and thus depleting the solid concentration inside the vortex. ![Gas surface density (top panel) and vorticity distribution (bottom panel) at the end of the simulation for the massive cores case. \[Fig:Surf2D10\]](Surf2D10 "fig:"){width=".45\textwidth"} ![Gas surface density (top panel) and vorticity distribution (bottom panel) at the end of the simulation for the massive cores case. \[Fig:Surf2D10\]](Vort2D10 "fig:"){width=".45\textwidth"} The capacity of a vortex to collect particles is closely linked to its orbital speed. If a vortex has a Keplerian orbital speed, then dust particles with the same orbital frequency will remain in the vortex for many orbits and slowly drift to its centre due to drag forces. 
On the other hand, a particle in a vortex orbiting with a non-Keplerian frequency will experience a Coriolis force in the Keplerian reference frame and, if the drag force is unable to counteract it, the particle will leave the vortex location [@Youdin2010]. The evolution of vortices can also be studied from the dynamics of coupled particles, which follow the gas dynamics closely. From the analysis of cm particles near the ring inner edge (Figure \[Fig:Vortevol\]), we can see that after 50 orbits two vortices are already visible, and they last for several tens of orbits. The outer planet stretches the vortices periodically, and as a result they slowly shrink in size. In order to see whether the vortices that develop in our simulation are capable of collecting a large fraction of particles, we plot in Figure \[Fig:Vortevol\] the vortices in a frame co-moving with the disk at $r=1.3$. The two vortices follow the Keplerian speed to within a few percent, so they are potentially able to trap a substantial fraction of particles. ![cm-sized particle distribution at different time-steps (50, 60, 70, 80 orbital times) at the ring inner edge for the massive cores case. The evolution of two vortices in a frame co-moving with the disk at $1.3\,\mbox{au}$ is shown. The vortex centre orbits the central star with a velocity close to the background Keplerian speed, which promotes particle trapping inside the vortex. \[Fig:Vortevol\]](VortexEvol){width=".45\textwidth"} The influence of the outer planet on the development of the RWI at the gap edge is studied by running a different model with only one massive planet ($M=10 M_\mathrm{th}$) at $r = 1.0$ (Figure \[Fig:Comp1\]). The particle concentration near the inner ring is weaker in the single-core simulation. As a result, the vortices that form are less prominent and, although the perturbation to the ring inner edge is reduced with respect to the dual-core simulation, their lifetime is shorter. ![Same as Fig. 
\[Fig:Vortevol\] but for a single massive core at $r=1$. \[Fig:Comp1\]](Comp1pla){width=".45\textwidth"} ### Particle distribution The evolution of the normalised surface density of the various dust species is shown in Figure \[Fig:Dust10\]. The inner and outer planets rapidly (within the first $50$ orbits) carve a particle gap, except for the most coupled particles simulated (mm-sized), which are cleared on a longer timescale by the outer planet (see Figure \[Fig:Dust10\] - fourth panel). As stated before, this behaviour is due only to the longer dynamical timescale of the outer planet, and it follows closely the evolution of the gas in the outer gap. A significant fraction of particles clumps in the co-orbital region of the inner planet for several hundred periods. However, these clumps are finally disrupted by the tidal interaction with the outer planet, which excites their eccentricities, causing close fly-bys with the inner planet. The only particles that remain in the co-orbital region for the whole simulated period are the m-sized ones, although even they are perturbed and a significant mass exchange between the two Lagrangian points (L4 and L5) takes place (see Figure \[Fig:Dust10\] - first panel and Fig. \[Fig:Gap10m\] - bottom right panel). The outer planet is also able to keep a fraction of particles in its co-orbital region for a longer timescale, though they are much more dispersed with respect to the Lagrangian points. Moreover, in all simulations the L4 Lagrangian point is the most populated. ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $10\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. 
\[Fig:Dust10\]](1m10M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $10\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust10\]](1dm10M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $10\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust10\]](1cm10M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $10\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust10\]](1mm10M "fig:"){width=".38\textwidth"} ![Particle distribution near the inner massive planetary core location for mm (top left), cm (top right), dm (bottom left) and m (bottom right) size particles at the end of the simulation. The velocity vectors of the particles with respect to the planet are shown, and the colour scale highlights the radial velocity component. \[Fig:Gap10m\]](gapdust10M){width=".45\textwidth"} The ring, which forms between the two gaps, quickly becomes very narrow and stabilises at a position close to the 5:3 mean motion resonance (MMR) with the inner planet, which appears to be a stable orbit. As shown in Figure \[Fig:Clumping\], the particles in the ring clump around $5$ symmetric points, which gain a high mass. ![dm-sized particle distribution at the end of the simulation of the two massive cores. 
The particle ring between the two planets shrinks with time until it clumps into a few stable points that form a pentagon-like structure on an orbit close to the 5:3 MMR with the inner planet at $r=1.4$. \[Fig:Clumping\]](MMR){width=".45\textwidth"} Vortices are visible in the particle distribution during the first hundred orbits for the cm-sized particles at the inner ring edge (third panel). Although these structures are prominent in the particle distribution of cm-sized dust, they are not visible in the other dust size distributions. The main reason is that for larger particles the ring quickly becomes very narrow, so there is no time for them to be trapped in the vortex. The velocity components of the particles shown in Figure \[Fig:Gap10m\] highlight the strong perturbations due to the spiral arm generated by the outer planet, which affects mainly the most coupled particles and the gas. The bodies passing close to the planet location, as in the meter-sized case shown in Figure \[Fig:Gap10m\] (bottom right panel), gain a high velocity component, represented by the long black arrow. Finally, the particle distributions at the end of the simulation (see Figure \[Fig:Gap10m\]) show the dependence of the gap width on particle size. The mm-sized particles (Fig. \[Fig:Gap10m\] - top left panel) reach a region closer to the planet location, where we overplot the minimum half-gap width as calculated from eq. (\[eq:gapTherm\]), showing that the dynamics of the smallest particles follow closely that of the gas. The other particles have increasingly larger gaps. The dm particles (Fig. \[Fig:Gap10m\] - bottom left panel) have cleared the gap region almost completely, since they have a stopping time closer to unity near the planet location and thus their evolution is faster. 
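The quoted ring location follows directly from Kepler's third law (a small check; the helper name is ours):

```python
def resonance_radius(a_planet, p, q):
    """Radius of the external p:q mean-motion resonance of a planet at
    a_planet: orbital periods scale as a**(3/2), so a = a_p * (p/q)**(2/3)."""
    return a_planet * (p / q) ** (2.0 / 3.0)

a_53 = resonance_radius(1.0, 5, 3)
# ~1.41, consistent with the pentagon-like structure observed at r ~ 1.4
```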
Intermediate mass core ($5 M_\mathrm{th}$) ------------------------------------------ The intermediate mass planets should still open gaps in the gaseous disk, but on a much longer timescale, since from eq. (\[eq:topen\]) the opening time scales as $q^{-2}$. ### Gas distribution From the density profile of Figure \[Fig:Surf10\] (first panel) we can see that after $100$ orbits there is a visible gap opened by the inner planet, even if it is considerably shallower than in the massive cores case, as is the vorticity profile (second panel). The outer planet still perturbs the co-orbital region of the inner one (Figure \[Fig:Surf2D5\]), but the magnitude of the perturbation is reduced and it has barely modified the unperturbed surface density distribution at its location. Even though the influence of the outer planet is less dramatic compared to the more massive case, the reduced steepness of the surface density profile hinders the development of vortices near the ring inner edge. ![Gas surface density (top panel) and vorticity distribution (bottom panel) at the end of the intermediate mass cores simulation. \[Fig:Surf2D5\]](Surf2D5 "fig:"){width=".45\textwidth"} ![Gas surface density (top panel) and vorticity distribution (bottom panel) at the end of the intermediate mass cores simulation. \[Fig:Surf2D5\]](Vort2D5 "fig:"){width=".45\textwidth"} ### Particle distribution The intermediate mass inner planet carves a gap in the dust disk after $50$ orbits for all particle sizes (Figure \[Fig:Dust5m\]). The outer planet, instead, is able to clear its gap after $50$ orbits only for the most decoupled particles (dm and m), while it takes $\sim300$ orbits to clear a gap in the cm particles (third panel), and it has opened only a partial gap in the mm-sized particle disk by the end of the simulation (fourth panel). 
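The quoted scaling fixes the relative opening times of the two planet masses (an illustrative helper of ours, using only the $t_\mathrm{open}\propto q^{-2}$ dependence from eq. \[eq:topen\]):

```python
def gap_opening_time_ratio(q_small, q_large):
    """Ratio t_open(q_small) / t_open(q_large) implied by the
    t_open ~ q**-2 scaling quoted in the text."""
    return (q_large / q_small) ** 2

# The 5 M_th planets need ~4x longer than the 10 M_th ones to open
# their gaps, which is why the gap is still shallow after 100 orbits.
ratio = gap_opening_time_ratio(5.0, 10.0)
```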
The dm particles (second panel) have the highest migration speed in the outer disk, as expected from Figure \[Fig:Vrad\], and after $300$ orbits they are all concentrated in narrow regions at the gap edges and in the co-orbital region of the outer planet. On the other hand, the cm particles (third panel) are more coupled to the gas, and the outer planet has not yet carved a gap sufficient to confine them to the outer regions of the disk by the end of the simulation. Thus, we observe after $600$ orbits that they engulf the planet gap (third panel, last snapshot). The dependence of the gap width on particle size is highlighted in Figure \[Fig:Gap5m\], where we can see a strong difference between the gap in the mm particles, which follows the gas dynamics, remaining close to the location of the half-gap width ($x_\mathrm{s}$) overplotted on the first panel, and the gap widths of the other particles. The interaction between the two planets disrupts, on the long term, the particle clumps around the stable Lagrangian points, as in the more massive case. At the end of the simulation only the m and mm particles (Figure \[Fig:Gap5m\]) are still present in the co-orbital region, preferentially at the L5 point. The clumping of particles into a few stable points at the ring location is still visible in the intermediate mass case, but only for the m and dm particles (first and second panels of Figure \[Fig:Dust5m\]). Furthermore, some vortices form in the outer part of the cm-dust particle disk (third panel of Figure \[Fig:Dust5m\]). However, these are numerical artefacts: the disk outer edge is initialized with a sharp density profile cut which, on the long term, favours the development of vortices. ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $5\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. 
\[Fig:Dust5m\]](1m5M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $5\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust5m\]](1dm5M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $5\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust5m\]](1cm5M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $5\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust5m\]](1mm5M "fig:"){width=".38\textwidth"} ![Particle distribution near the inner intermediate mass planetary core location for mm (top left), cm (top right), dm (bottom left) and m (bottom right) size particles at the end of the simulation. The velocity vectors of the particles with respect to the planet are shown, and the colour scale shows the relative radial velocity. \[Fig:Gap5m\]](gapdust5M){width=".45\textwidth"} Low mass core ($1 M_\mathrm{th}$) --------------------------------- Finally, we explore the low mass core scenario in order to study a case where the particle ring between the two planets does not clump into a few stable points and the gaps carved by the planets remain narrow, as in the observed HL Tau system. ### Gas distribution Figure \[Fig:Surf10\] (top panel) shows that neither the inner nor the outer planet is massive enough to clear a gap in the gaseous disk within the simulated time. 
From the 2D distribution of the surface density and vorticity (Figure \[Fig:Surf2D1\]) it is possible to see that the presence of the two planets only slightly changes the unperturbed state of the gaseous disk (the colour scale has been changed with respect to the plots of the more massive cases in order to highlight the small differences). ![Gas surface density (top panel) and vorticity distribution (bottom panel) at the end of the simulation for the two low mass cores. \[Fig:Surf2D1\]](Surf2D1 "fig:"){width=".45\textwidth"} ![Gas surface density (top panel) and vorticity distribution (bottom panel) at the end of the simulation for the two low mass cores. \[Fig:Surf2D1\]](Vort2D1 "fig:"){width=".45\textwidth"} ### Particle distribution Although the gas profile is not changed considerably, owing to the small mass of the planets, they are able to open clear gaps in the dust disk within the first 50 orbits for m and dm-sized particles (Figure \[Fig:Dust1m\]), while it takes $100$ orbits for the cm-sized particles, and the gap in the most coupled particles is still being cleared at the end of the simulation. As stated before, this process takes longer for the outer planet, which is able to open a clear gap only at the end of the simulation for all but the mm-sized particles, for which only a partial gap is barely visible. ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $1\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust1m\]](1m1M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $1\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. 
\[Fig:Dust1m\]](1dm1M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $1\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust1m\]](1cm1M "fig:"){width=".38\textwidth"} ![Dust normalised surface density distribution for the m (first panel), dm (second panel), cm (third panel), and mm-sized (fourth panel) particles disk and 2 equal mass $1\,M_\mathrm{th}$ cores at $r=1,2$ at 4 different times (50, 100, 300, 600 orbital times) for each case. \[Fig:Dust1m\]](1mm1M "fig:"){width=".38\textwidth"} A significant fraction of particles remains in the co-orbital regions of both the inner and outer planet for the whole simulated period. As found for the massive core case, their density is higher at the L4 point. Also in this case, the dm-sized particles are the ones with the fastest dynamical evolution, and it is possible to see in Figure \[Fig:Dust1m\] (third panel) that after 300 orbits the outer disk engulfs the outer planet co-orbital region, since the planet is unable to filter those particles effectively. In this case, the ring between the two planets does not clump into a small number of stable points but remains wide for the whole simulation, though it shrinks with time and its width depends on the particle size (Figure \[Fig:SurfPart1\] - top panel). From Figure \[Fig:Dust1m\] (first panel) it is also possible to see the formation of ripples just outside the outer planet location. This effect can be recognized in the final eccentricity distribution of the m-sized particles (Figure \[Fig:SurfPart1\] - bottom panel). This behaviour is due to the eccentricity excitation of particles that pass close to the planet location. 
For less coupled particles, the interaction with the gas takes several orbital times to smooth out the eccentricity, and thus these characteristic structures form. This effect is only visible in the low mass core simulation: since the particle gap is narrower, the particles get closer to the planet location and the excitation of their eccentricity is higher. ![Particle surface density (top panel) and eccentricity profile (bottom panel) at the end of the simulation for the different particle species.\[Fig:SurfPart1\]](1MthSigmaPart "fig:"){width=".45\textwidth"} ![Particle surface density (top panel) and eccentricity profile (bottom panel) at the end of the simulation for the different particle species.\[Fig:SurfPart1\]](1MthEccPart "fig:"){width=".45\textwidth"} In Figure \[Fig:Gap1m\] we focus on the gap structure close to the inner planet location for the different particle sizes at the end of the simulation. We also overplot the minimum gap half-width $x_\mathrm{s}$, in order to test whether this condition is met for the most coupled particles, which follow the gas dynamics. From the distribution of mm-sized particles we can see that the gap opens exactly at the location of the minimum gap half-width, and there is still a lot of material in the horseshoe region. The gap is considerably wider for the cm-sized particles. ![Particle distribution near the inner low mass planetary core location for mm (top left), cm (top right), dm (bottom left) and m (bottom right) size particles at the end of the simulation. The velocity vectors of the particles with respect to the planet are shown, and the colour scale shows the relative radial velocity. \[Fig:Gap1m\]](gapdust1M){width=".45\textwidth"} Finally, in Figure \[Fig:Masstrans\] we highlight the constant mass transfer that takes place through the inner planet location for the most coupled particles, which are not effectively filtered by the less massive planet. 
We plot the radial velocity of the particles with a colour scale in order to emphasise the flow in both directions. ![Mass transfer through the planet position of mm-sized particles for the 1 thermal mass planet, which is unable to filter them effectively. The particles are plotted together with their velocity vectors and the colour scale indicates their radial velocities.\[Fig:Masstrans\]](Mtrans){width=".45\textwidth"} Discussion {#sec:disc} ========== Path to a second generation of planets {#par:plan} -------------------------------------- The possibility of creating the conditions for a second generation of planets through a massive core has already been studied in the past. However, the combined action of a multi-planetary system can achieve the same goal with less massive first-generation planets. As we have seen in Sec. \[sec:res10\], two $10\,M_\mathrm{th}=0.7\,M_\mathrm{Jup}$ planets can trigger the formation of vortices which are more prominent and live longer compared to the single-planet case. Nevertheless, these vortices appear to be a transient effect that develops after more than $80$ orbits of the inner planet and lasts until $120$ orbits; it is therefore difficult to relate this behaviour to an initial-condition effect, and it does not depend on the timescale over which the planets are grown in the simulation. Furthermore, vortex formation depends on viscosity, and vortices can persist for longer times in low-viscosity disks. Although the main dust collection driver is the ring generated by the combined action of the two planets, vortices can create some characteristic observable features, enhancing the dust surface density locally. Moreover, for both the massive and intermediate mass cores, the particle ring clumps into a few symmetric points close to the 5:3 MMR, which are stable for several hundred orbits. Taking a dust-to-gas mass ratio of $0.01$, we find that the mass collected in those stable points can reach several Earth masses. 
However, we point out that this strong mass clumping might be reduced by the introduction of particle diffusion due to disk turbulence. Comparison with Previous Work {#par:prev} ----------------------------- This is one of the first studies of dust evolution and filtration in a multiple planet system, so there is no direct comparison with similar setups. However, [@Zhu2014] recently performed an extensive analysis of dust filtration by a single planet in 2D and 3D disks, from which we have taken some ideas; it is the natural benchmark for the different outcomes of our analysis. A first interesting comparison between the two results is the possibility of forming vortices at the gap edges, taking into account that in their scenario vortex development was favoured by the choice of an inviscid disk. We find that, even if vortices are hindered for a single planet, the presence of an additional planet can enhance the density at the ring location and promote their development. Moreover, we recover the ‘ripple’ formation found by @Zhu2014 for decoupled particles close to the planet location. [@Zhu2014] also found a direct proportionality between the gap width and the planet mass, where a $9 M_\mathrm{Jup}$ planet induced a vortex at the gap edge at a distance of more than twice the planet semi-major axis. Although we have not modelled such high mass planets, we obtain a similar outcome within our parameter space. In our simulations we do not find strong MMRs aiding the gap clearing, since we do not model particles with stopping times greater than $\sim 5$; the coupling with the gas therefore disrupts the MMRs. They become important only when the gas surface density is highly depleted, such as in the ring between the planets in the high mass cases, where the 5:3 MMR with the inner planet is found to be a stable location. 
Furthermore, as observed by [@Ayliffe2012], there is a strong correlation between particle size and particle gap: the most coupled particles reach regions closer to the planet location, and are potentially accreted by the planet or migrate into the inner disk, while less coupled particles are effectively filtered by the planet. Comparison with Observations {#par:obs} ---------------------------- Pre-transitional disks are defined observationally as disks with gaps. These features are observed in many cases in the sub-mm dust emission, and there is no evidence that the gaseous emission follows the same pattern. The observation of (pre-)transitional disks highlights different physical behaviours that need to be explained. A major problem is the coexistence of a significant accretion rate onto the star (up to the same order as common T Tauri Stars - CTTS) with dust-cleared zones, and the absence of near-infrared (NIR) emission. One of the most plausible explanations for this issue is the presence of multiple giant planets that can create a common gap and thus enhance the accretion rate across it by exchanging torque with the disk, while depleting the dust component through a filtration mechanism that, together with dust growth, can explain the absence of a strong NIR emission in (pre-)transitional disks compared to full disks [@Zhu2012]. For a full review of the topic see [@Espaillat2014]. The HL Tau system, although several rings have been observed in its dust emission, still has a very high accretion rate onto the star, $\dot{M} = 2.13\times10^{-6}\,M_\odot\,\mathrm{yr}^{-1}$ [@Robitaille2007]. This proves that, at least in this system, the rings observed in the dust emission are not related to rings in the gas distribution. 
Since a wide ring is observed in our simulations only for the small mass case, and a clear gap is visible for the outer planet only in the intermediate and massive core cases, we chose to run a different model with an inner low mass core and an outer intermediate mass core in order to compare the outcome of our simulations with the HL Tau system. We rescaled the system, placing the inner planet at $32.3\,\mbox{au}$, corresponding to the D2 gap (see Figure \[Fig:HLTau2\] - top panel), and the outer one at $64.6\,\mbox{au}$, keeping the 2:1 ratio, which is close to the B5 location. Comparing the 2D surface density distribution at the end of the simulation with the deprojected image of the continuum emission and their slices (Figures \[Fig:HLTau2\]-\[Fig:HLTau3\]), we can outline several shared features and differences. The gap created by the inner planet has a configuration very similar to the observed one. On the other hand, there are no clearly visible features inside its orbit in the observed image, while a variety of inner structures are visible in the output of the simulation. These features are mainly due to the inner wave generated by the planet. The strong gap visible close to the star is instead not physical: it is related to the inner boundary condition. In the outer part of the disk, several differences can be outlined. The major one is the high surface density in the horseshoe region, which is related to our choice of an initially flat profile for the particle distribution. Although this approximation was chosen to extend the simulated time by preventing a fast depletion of material from the outer disk, it also favoured dust trapping by the outer planet. Moreover, the particle ring is more depleted than in the observed image. 
Thus, we expect that the planetary mass responsible for the observed outer gap should be slightly smaller than the one adopted in this simulation. A final remark concerns the strong depletion of dust particles just inside the outer planet location due to its dust filtration mechanism, which is clear from the bottom panel of Figure \[Fig:HLTau2\]. Due to this effect, in a multi-planetary system a planet is not necessarily located where the gap is deepest; it could sit at the rim of the gap, preventing the particles of a certain size from crossing its location. However, apart from these differences, which are due to our initial choice of the parameter space, the structures obtained from the simulations are similar to what is observed. ![Top panel: deprojected image from the mm continuum of HL Tau. Bottom panel: cross-cuts at PA=$138^\circ$ through the peak of the mm continuum of HL Tau [@Partnership2014]. \[Fig:HLTau2\]](HLTau2){width=".45\textwidth"} ![Top panel: final mm-dust surface density distribution for an inner low mass core and an outer intermediate mass core. Bottom panel: relative surface density distribution. \[Fig:HLTau3\]](pappa2){width=".45\textwidth"} Dependence on the disk surface density {#par:sta} -------------------------------------- The dynamical evolution of dust particles is closely linked to their stopping time which, in the Epstein regime, is directly related to the disk surface density through eq. \[eq:stop\]. Thus, if we decrease the disk surface density by a factor $10$ in order to stabilize the disk in the isothermal case (see Appendix \[sec::stability\] for a study of the disk stability with a more realistic equation of state), we need to lower the particle size by the same factor to recover the same particle dynamics. On the other hand, if we want to keep the particle size fixed, in order to compare our results with the ALMA continuum images, we need to decrease the planetary mass, since the gap width depends on the particle stopping time. 
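The rescaling argument can be made explicit with the standard midplane Epstein Stokes number, $\mathrm{St} = (\pi/2)\,\rho_\mathrm{d}\,s/\Sigma$ (a common form of this relation; the paper's eq. \[eq:stop\] may carry different order-unity factors, but the scaling $\mathrm{St}\propto s/\Sigma$ is the same):

```python
import math

def epstein_stokes_number(size, sigma_gas, rho_d=2.6):
    """Midplane Stokes number in the Epstein drag regime,
    St = (pi/2) * rho_d * size / Sigma_gas (cgs units)."""
    return 0.5 * math.pi * rho_d * size / sigma_gas

# Lowering Sigma by a factor 10 while also lowering the grain size by
# 10 leaves St, and hence the drift and gap behaviour, unchanged.
st_ref = epstein_stokes_number(size=0.1, sigma_gas=100.0)
st_rescaled = epstein_stokes_number(size=0.01, sigma_gas=10.0)
```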
We tried a different choice of the parameter space to obtain a similar output with a much smaller disk mass and lower planetary core masses. ![Top panel: dust distribution for the mm-sized particles disk and 2 planetary cores of $\sim 10$ and $\sim 20\,M_\oplus$ after 250 orbits of the inner planet. Bottom panel: Relative surface density distribution (red solid curve), where the distribution of the higher disk mass case (from Figure \[Fig:HLTau3\]) has been overplotted (dashed blue curve).  \[Fig:HLTau4\]](pappa1){width=".45\textwidth"} We report in Fig. \[Fig:HLTau4\] a run with a disk mass of $1/10$ of the test case. The width of the gap created by the two planets is similar, as outlined by the bottom panel. However, in this case the planetary masses adopted to open such narrow gaps in the particle disk are much lower: $\sim 10$ and $\sim 20\,M_\oplus$ for the inner and outer planet, respectively. These lower values of the planetary masses show that the ability of planets to open gaps in the dust disk is widely applicable, and increase the likelihood of a planetary origin through core accretion in this young system at large radii ($\sim 60\,\mbox{au}$). It will be crucial to constrain the disk mass better in order to pin down the particle dynamics and the masses of planets growing inside their birth disk. Conclusion {#sec:conc} ========== We have implemented a population of dust particles into the 2D hydro code <span style="font-variant:small-caps;">fargo</span> [@Masset2000] in order to study the coupled dynamics of dust and gas. The dust is modelled through Lagrangian particles, which permit us to follow the evolution of both small dust grains and large bodies within the same framework. We have studied in particular the dust filtration in a multi-planetary system to obtain observable features that can be used to interpret the observations made by modern (sub-)millimetre facilities like ALMA. 
From the analysis of our simulations we have found that the outer planet:

- affects the co-orbital region of the inner one, exciting the particles in the Lagrangian points (L4 and L5), which are effectively removed in the majority of the cases;

- increases the surface density in the region between them, creating a particle ring which can clump in a small number of symmetric points, collecting a mass up to several Earth masses;

- promotes the development of vortices at the ring inner edge, increasing the steepness of the surface density profile.

Moreover, when the planets are not massive enough to create a narrow particle ring between them, its width depends on the particle size. This could be a potentially observable feature that links ring formation with the presence of planets. Furthermore, we confirmed previous results regarding the particle gap, which develops much more quickly than the gaseous one, and is wider for higher mass planets and more decoupled particles. The features observed in the HL Tau system can be explained through the presence of several massive cores, or lower mass cores depending on the adopted surface density, that shape the dust disk. We have found that the inner planet(s) should be on the order of $1\,M_\mathrm{th}=0.07\,M_\mathrm{Jup}$ in order to open a small gap in the dust disk while keeping a wide particle ring. The outer one(s) should have a mass on the order of $5\,M_\mathrm{th}=0.35\,M_\mathrm{Jup}$ in order to open a visible gap. These values are in agreement with those found by [@Kanagawa2015; @DiPierro2015; @Dong2015]. We point out that decreasing the disk surface density by a factor 10 reduces the required planetary masses to open the observed gaps to values of $10\,M_\oplus$ and $20\,M_\oplus$, respectively. These reduced values make planet formation through core accretion more plausible in the young HL Tau system. 
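The thermal mass quoted above, $M_\mathrm{th}=0.07\,M_\mathrm{Jup}$, is consistent with the standard definition $M_\mathrm{th}=(H/r)^3 M_\star$ for a disk aspect ratio near 0.04 around a solar-mass star; both the aspect ratio and the stellar mass here are assumed for illustration and are not taken from the paper:

```python
M_JUP_IN_MSUN = 9.543e-4  # Jupiter mass in solar masses

def thermal_mass_mjup(h_over_r, m_star_msun=1.0):
    """Thermal mass M_th = (H/r)^3 * M_star, returned in Jupiter masses.
    h_over_r and m_star_msun are assumed (illustrative) values."""
    return (h_over_r ** 3) * m_star_msun / M_JUP_IN_MSUN

m_th = thermal_mass_mjup(h_over_r=0.04)  # ~0.067 M_Jup, close to the 0.07 quoted
m_outer = 5.0 * m_th                     # ~0.34 M_Jup, close to the 0.35 quoted
```

This shows how the two quoted planet masses scale together once the disk aspect ratio is fixed.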
Although the particle gaps observed are prominent, the expected gaseous gaps would be barely visible. The limitations of this work are the lack of particle back-reaction on the gas, self-gravity of the disk, and particle diffusion. Furthermore, we have not modelled accretion of particles onto the planet or planet migration. These approximations were chosen in order to study the global evolution of the particle distribution with different stopping times and different planet masses, without increasing the computation time excessively. Although the disk is very massive, the asymmetric features typical of a gravitationally unstable disk are not observed in the continuum mm observations that should correctly describe the gas flow; thus we do not expect the real system to be subject to strong perturbations due to its self-gravity. The particle back-reaction plays an important role when studying the evolution of particle clumps, but it is not expected to change the global dust distribution significantly. However, particle diffusion could have an important role both in reducing the dust migration and in preventing strong clumping of particles. We plan in future works to relax these approximations, running more accurate simulations and testing the contribution of each individual physical process to the final dust filtration and distribution. Moreover, we have limited our analysis to the peculiar case of equal mass planets on fixed orbits, and changing any one of these conditions can result in a rather different outcome. The possibility to evolve the simulations further in time has been considered, since the outer planets were not able to open up a clear gap for the less massive cases. However, in any case they are able to open only a very shallow gap, so the possible influence on the subsequent evolution of the particles close to their position is not expected to be significant. 
Furthermore, since we have modeled the system for $\sim 10^5\,\mbox{yr}$ around a young star ($\sim 10^6\,\mbox{yr}$), and the planetary cores need to form in the first place, it does not seem unrealistic to observe a planet at $\sim 60\,\mbox{au}$ still in the gap clearing phase. We thank an anonymous referee for useful comments and suggestions. G. Picogna acknowledges the support through the German Research Foundation (DFG) grant KL 650/21 within the collaborative research program ”The first 10 Million Years of the Solar System”. Some simulations were performed on the bwGRiD cluster in Tübingen, which is funded by the Ministry for Education and Research of Germany and the Ministry for Science, Research and Arts of the state Baden-Württemberg, and the cluster of the Forschergruppe FOR 759 ”The Formation of Planets: The Critical First Growth Phase” funded by the DFG. Disk stability {#sec::stability} ============== The disk parameters adopted in this work were selected in order to match the observational data [@Kwon2011]. We point out that the disc to star mass ratio is relatively high and that, in the locally isothermal approximation, the disk is gravitationally unstable in the outer regions, considering its Toomre parameter $$Q = \frac{h}{r}\frac{M_\star}{M_d}\frac{2(r_{out}-r_{in})}{r} \simeq \frac{1.6}{r}.$$ However, the isothermal equation of state is usually a poor representation of the temperature distribution in the disk, especially for the inner regions of protoplanetary disks around young stars. In order to validate this model, we ran an additional hydro simulation in which we include a more realistic equation of state, where radiative transport and radiative cooling are included. From Fig. \[Fig:sg\] it is clear that the more appropriate equation of state prevents the disk from becoming even partially unstable. 
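The stability criterion above can be checked numerically: with $Q(r)\simeq 1.6/r$ (the prefactor taken directly from the text, radii in the same code units as the profile), the isothermal disk is formally unstable ($Q<1$) outside $r\simeq1.6$:

```python
def toomre_q(r):
    """Approximate Toomre parameter from the text: Q(r) ~ 1.6 / r (r in code units)."""
    return 1.6 / r

r_marginal = 1.6                        # Q = 1 exactly at this radius
stable_inner = toomre_q(1.0) > 1.0      # inner regions: Q > 1, stable
unstable_outer = toomre_q(2.0) < 1.0    # outer regions: Q < 1, formally unstable
```

This is only the isothermal estimate; as the text notes, a radiative equation of state removes the instability.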
![Disk surface density after 20 inner planetary orbits for the isothermal case (top panel) and fully radiative case (bottom panel) where the self-gravity of the disk has been considered. \[Fig:sg\]](sgi "fig:"){width=".45\textwidth"} ![Disk surface density after 20 inner planetary orbits for the isothermal case (top panel) and fully radiative case (bottom panel) where the self-gravity of the disk has been considered. \[Fig:sg\]](sgr "fig:"){width=".45\textwidth"} The different equation of state could in principle affect the gap opening timescale; however, the long term evolution of the particle dynamics is not expected to vary significantly. Moreover, the important parameter in our study is the stopping time of the particles, so the results obtained remain valid for different values of the surface density profile, scaling the dust size accordingly, as discussed in Section \[par:sta\]. Integrators {#sec::integrators} =========== In order to model the dynamics of the particle population in our simulations we tried different integrators. Semi-implicit integrator in polar coordinates --------------------------------------------- In order to follow the dynamics of particles well coupled with the gas, which have a stopping time much smaller than the time step adopted to evolve the gas dynamics, we adopted the semi-implicit Leapfrog (Drift-Kick-Drift) integrator described in @Zhu2014 in polar coordinates. This method guarantees the conservation of the physical quantities for the long term simulations performed in this paper, and at the same time it is faster than an explicit method. 
#### Scheme Half Drift: $$\begin{aligned} v_{\mathrm{R},n+1} &= v_{\mathrm{R},n} \\ l_{n+1} &= l_n \\ R_{n+1} &= R_n + v_{\mathrm{R},n}\frac{\mathrm{d}t}{2} \\ \phi_{n+1} &= \phi_{n}+\frac{1}{2}\left( \frac{l_n}{R_n^2}+\frac{l_{n+1}} {R_{n+1}^2}\right) \frac{\mathrm{d}t}{2} \end{aligned}$$ Kick: $$\begin{aligned} R_{n+2} = &\ R_{n+1} \\ \phi_{n+2} = &\ \phi_{n+1} \\ l_{n+2} = &\ l_{n+1} + \frac{\mathrm{d}t}{1+\frac{\mathrm{d}t}{2t_{\mathrm{s},n+1}}} \left[-{\left(\frac{\partial\Phi}{\partial\phi}\right)}_{n+1} + \frac{v_{\mathrm{g,\phi},n+1}R_{n+1}-l_{n+1}}{t_{\mathrm{s},n+1}} \right] \\ v_{\mathrm{R},n+2} = &\ v_{\mathrm{R},n+1} + \frac{\mathrm{d}t}{1+\frac{\mathrm{d}t}{2t_{\mathrm{s},n+1}}} \Bigg[ \frac{1}{2}\Bigg( \frac{l_{n+1}^2}{R_{n+1}^3}+ \frac{l_{n+2}^2}{R_{n+2}^3} \Bigg)- \Bigg( \frac{\partial\Phi}{\partial R} \Bigg)_{n+1} + \\ &\ +\frac{v_{\mathrm{g,R},n+1}-v_{\mathrm{R},n+1}}{t_{\mathrm{s},n+1}} \Bigg] \end{aligned}$$ Half Drift: $$\begin{aligned} v_{\mathrm{R},n+3} &= v_{\mathrm{R},n+2} \\ l_{n+3} &= l_{n+2} \\ R_{n+3} &= R_{n+2} + v_{\mathrm{R},n+3}\frac{\mathrm{d}t}{2} \\ \phi_{n+3} &= \phi_{n+2}+\frac{1}{2}\left( \frac{l_{n+2}}{R_{n+2}^2}+\frac{l_{n+3}} {R_{n+3}^2}\right) \frac{\mathrm{d}t}{2} \end{aligned}$$ where $v_{\mathrm{R}}$ is the radial velocity, $l$ the angular momentum, $R$ the cylindrical radius, and $\phi$ the polar angular coordinate. The index $n$ shows the step at which the various quantities are considered. Further information regarding the integrator can be found in @Zhu2014. Fully-implicit integrator in polar coordinates ---------------------------------------------- For particles with a stopping time much smaller than the numerical time step, the drag term can dominate the gravitational force term, causing numerical instability of the integrator. 
Thus, it is necessary to adopt a fully implicit integrator following @Bai2010a [@Zhu2014] #### Scheme Predictor step: $$\begin{aligned} R_{n+1} &= R_n + v_{\mathrm{R},n} \mathrm{d}t \\ \phi_{n+1} &= \phi_n+\frac{l_n}{R_n^2} \mathrm{d}t \end{aligned}$$ Shift: $$\begin{aligned} v_{\mathrm{R},n+1} &= v_{\mathrm{R},n} + \frac{{{\mathop{}\!\mathrm{d}}t}/2} {1+{\mathop{}\!\mathrm{d}}t{\left( \frac{1}{2t_{\mathrm{s},n}} + \frac{1}{2t_{\mathrm{s},n+1}} + \frac{{\mathop{}\!\mathrm{d}}t}{2t_{\mathrm{s},n}t_{\mathrm{s},n+1}} \right)}}\cdot \\ & \cdot \Bigg[ -{\left(\frac{\partial\Phi}{\partial R}\right)}_n -\frac{v_{\mathrm{R},n}-v_{\mathrm{g,R},n}}{t_{\mathrm{s},n}} +\frac{l_n^2}{R_n^3} + \Bigg( -{\left(\frac{\partial\Phi}{\partial R}\right)}_{n+1}+ \nonumber \\ &-\frac{v_{\mathrm{R},n}- v_{\mathrm{g,R},n+1}}{t_{\mathrm{s},n+1}} +\frac{l_{n+1}^2}{R_{n+1}^3} \Bigg) {\left(1+\frac{{\mathop{}\!\mathrm{d}}t}{t_{\mathrm{s},n}}\right)} \Bigg] \nonumber \\ l_{n+1} &= l_{n} + \frac{{{\mathop{}\!\mathrm{d}}t}/2} {1+{\mathop{}\!\mathrm{d}}t\left(\frac{1}{2t_\mathrm{s,n}}+\frac{1} {2t_{\mathrm{s},n+1}}+ \frac{{\mathop{}\!\mathrm{d}}t}{2t_{\mathrm{s},n}t_{\mathrm{s},n+1}}\right)}\cdot \\ &\cdot\Bigg[ -{\left(\frac{\partial\Phi}{\partial \phi}\right)}_n -\frac{l_n - R_n v_{\mathrm{g,\phi},n}}{t_{\mathrm{s},n}} +\Bigg( -{\left(\frac{\partial\Phi}{\partial \phi} \right)}_{n+1} + \nonumber \\ &-\frac{l_n - R_{n+1} v_{\mathrm{g,\phi},n+1}} {t_{\mathrm{s},n+1}} \Bigg) {\left(1+\frac{{\mathop{}\!\mathrm{d}}t}{t_{\mathrm{s},n}}\right)} \Bigg] \nonumber \end{aligned}$$ Corrector step: $$\begin{aligned} R_{n+1} &= R_n + \frac{1}{2}(v_{\mathrm{R},n}+ v_{\mathrm{R},n+1})\mathrm{d}t \\ \phi_{n+1} &= \phi_{n} + \frac{1}{2}\left( \frac{l_{n}}{R_{n}^2}+\frac{l_{n+1}} {R_{n+1}^2}\right) \mathrm{d}t \end{aligned}$$ Particle tests {#sec::partests} ============== In order to test the numerical integrators described in Sec. \[sec::integrators\], we did an orbital test and a drift test proposed by @Zhu2014. 
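As a complement to these tests, the semi-implicit Drift-Kick-Drift scheme above can be transcribed directly into Python and sanity-checked on a circular Keplerian orbit with negligible drag; this is a sketch of the scheme for a single particle, not the actual code used in the paper (gas velocities and potential gradients are passed in as callables):

```python
def dkd_step(R, phi, vR, l, dt, t_s,
             dPhi_dR, dPhi_dphi, vg_R=0.0, vg_phi=0.0):
    """One semi-implicit Drift-Kick-Drift step in polar coordinates,
    transcribing the scheme above; l is the specific angular momentum."""
    # --- half drift ---
    R1 = R + vR * 0.5 * dt
    phi1 = phi + 0.5 * (l / R**2 + l / R1**2) * 0.5 * dt
    # --- kick (drag handled semi-implicitly) ---
    fac = dt / (1.0 + dt / (2.0 * t_s))
    l1 = l + fac * (-dPhi_dphi(R1, phi1) + (vg_phi * R1 - l) / t_s)
    vR1 = vR + fac * (0.5 * (l**2 / R1**3 + l1**2 / R1**3)
                      - dPhi_dR(R1, phi1)
                      + (vg_R - vR) / t_s)
    # --- half drift ---
    R2 = R1 + vR1 * 0.5 * dt
    phi2 = phi1 + 0.5 * (l1 / R1**2 + l1 / R2**2) * 0.5 * dt
    return R2, phi2, vR1, l1

# Sanity check: a circular Keplerian orbit (GM = 1) with negligible drag
# (very large t_s) should stay circular step after step.
dPhi_dR = lambda R, phi: 1.0 / R**2    # Phi = -1/R
dPhi_dphi = lambda R, phi: 0.0
R, phi, vR, l = 1.0, 0.0, 0.0, 1.0
for _ in range(1000):
    R, phi, vR, l = dkd_step(R, phi, vR, l, dt=0.01, t_s=1e12,
                             dPhi_dR=dPhi_dR, dPhi_dphi=dPhi_dphi)
```

After 1000 steps the radius and angular momentum are conserved to machine-level accuracy, as expected for this geometric scheme.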
Orbital tests ------------- We release one dust particle at $r=1$, $\phi=0$, with $v_\phi=0.7$, and integrate it for $20$ orbits. The time-steps $\Delta t$ are varied between $0.1$ and $0.01$ in units of the orbital time. The results are shown in Figure \[orbital\]. The particle follows an eccentric orbit with $e = 0.51$. The time step is $\Delta t = 0.1$, to be compared with the orbital time ($2\pi$). The precession observed is due to the fact that even symplectic integrators cannot simultaneously preserve angular momentum and energy exactly. The advantage of the semi-implicit scheme is that it does preserve the geometric properties of the orbits, while the fully implicit integrator does not. For comparison, the orbit is also calculated with an explicit integrator, but with a much smaller time step $\Delta t=0.01$, showing no visible precession. This behavior is recovered also with the implicit schemes by reducing the timestep. Since $\Delta t = 0.01$ is normally the time step used in our planet-disk simulations, our integrators are quite accurate even when integrating the orbits of particles with moderate eccentricity. ![Orbital evolution of a dust particle released at $r=1$, $\phi=0$, with $v_\phi=0.7$ for the different integrators adopted in the simulations (red and green curves), compared to the solution from an explicit integrator (black curve). \[orbital\]](orbital){width=".45\textwidth"} 2D drift tests -------------- We model a 2D gaseous disk in hydrostatic equilibrium with $\Sigma\propto r^{-1}$ and release particles with different stopping times from $r=1$. The radial domain is $[0.5, 3]$ with a resolution in the radial direction of $400$ cells. 
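The eccentricity quoted for the orbital test above can be recovered analytically: for $GM=1$, a particle released at $r=1$ with purely azimuthal speed $v_\phi=0.7$ has specific energy $E=v^2/2-1/r$ and specific angular momentum $l=rv_\phi$, giving $e=\sqrt{1+2El^2}\approx0.51$, matching the value in the text:

```python
import math

GM = 1.0
r, v_phi = 1.0, 0.7             # release conditions of the orbital test
E = 0.5 * v_phi**2 - GM / r     # specific orbital energy (negative: bound orbit)
l = r * v_phi                   # specific angular momentum
e = math.sqrt(1.0 + 2.0 * E * l**2 / GM**2)  # Kepler-orbit eccentricity
# e ~ 0.51, as quoted in the text
```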
The drift speed at equilibrium is given by [@Nakagawa1986] $$\label{eq:drifteq} v_\mathrm{R,d}=\frac{\tau_\mathrm{s}^{-1}v_\mathrm{R,g}-\eta v_\mathrm{K}}{\tau_\mathrm{s}+\tau_\mathrm{s}^{-1}}$$ where $v_\mathrm{R,g}$ is the gas radial velocity, $\eta$ is the ratio of the gas pressure gradient to the stellar gravity in the radial direction, and we consider $v_\mathrm{R,g}=0$ since we are at equilibrium. ![Evolution of the particle drift speed in the first 10 orbits for particles with different stopping times. The analytic solution obtained from eq. (\[eq:drifteq\]) is plotted with a black line, while the drift speed obtained from the semi-implicit and fully-implicit integrators are displayed with red and grey lines respectively. \[orbital2\]](Testvrad){width=".45\textwidth"} Figure \[orbital2\] shows the evolution of the particle radial velocity in the first 10 orbits for the semi-implicit (red) and fully-implicit (grey) integrators together with the analytic solution (black) obtained from eq. (\[eq:drifteq\]). The particle drift speed almost immediately reaches the expected equilibrium value. The semi-implicit integrator reaches the equilibrium speed on a longer timescale than the fully-implicit one only for the lowest stopping time (smallest particles).
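With $v_\mathrm{R,g}=0$, eq. (\[eq:drifteq\]) reduces to $v_\mathrm{R,d}=-\eta v_\mathrm{K}/(\tau_\mathrm{s}+\tau_\mathrm{s}^{-1})$, which peaks in magnitude at $\tau_\mathrm{s}=1$ with $|v|=\eta v_\mathrm{K}/2$, the familiar fastest-drift regime for marginally coupled particles. A quick numerical check (the $\eta$ and $v_\mathrm{K}$ values are illustrative):

```python
def drift_speed(tau_s, eta=2e-3, v_K=1.0, v_Rg=0.0):
    """Equilibrium radial drift speed from eq. (drifteq):
    v = (tau_s^-1 * v_Rg - eta * v_K) / (tau_s + tau_s^-1)."""
    return (v_Rg / tau_s - eta * v_K) / (tau_s + 1.0 / tau_s)

taus = [10.0**k for k in range(-3, 4)]       # tau_s from 1e-3 to 1e3
speeds = {t: abs(drift_speed(t)) for t in taus}
fastest = max(speeds, key=speeds.get)        # marginally coupled: tau_s = 1
```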
--- abstract: 'The Baryon Acoustic Oscillation (BAO) feature in the power spectrum of galaxies provides a standard ruler to probe the accelerated expansion of the Universe. The current surveys covering a comoving volume sufficient to unveil the BAO scale are limited to redshift $z \lesssim 0.7$. In this paper, we study several galaxy selection schemes aiming at building an emission-line-galaxy (ELG) sample in the redshift range $0.6<z<1.7$, that would be suitable for future BAO studies using the Baryonic Oscillation Spectroscopic Survey (BOSS) spectrograph on the Sloan Digital Sky Survey (SDSS) telescope. We explore two different colour selections using both the SDSS and the Canada France Hawaii Telescope Legacy Survey (CFHT-LS) photometry in the *u, g, r*, and *i* bands and evaluate their performance selecting luminous ELG. From about 2,000 ELG, we identified a selection scheme that has a 75 percent redshift measurement efficiency. This result confirms the feasibility of massive ELG surveys using the BOSS spectrograph on the SDSS telescope for a BAO detection at redshift $z\sim1$, in particular the proposed *eBOSS* experiment, which plans to use the SDSS telescope to combine the use of the BAO ruler with redshift space distortions using emission line galaxies and quasars in the redshift range $0.6<z<2.2$.' bibliography: - 'biblio.bib' date: 'Accepted October 2nd 2012 by MNRAS. Received in original form July 17th 2012.' title: Investigating Emission Line Galaxy Surveys with the Sloan Digital Sky Survey Infrastructure --- \[firstpage\] cosmology - large scale structure - galaxy - selection - baryonic acoustic oscillations Introduction {#section:introduction} ============

| redshift range | $\bar{n}(k_{1})$ | $\bar{n}(k_{2})$ | $N$ deg$^{-2}$ for $k_{1}$ | for $k_{2}$ | area req. [deg$^{2}$] | $N_{\rm tot}$ [$10^{3}$] for $k_{1}$ | for $k_{2}$ |
|---|---|---|---|---|---|---|---|
| $[0.3,0.6]$ | 1.0 | 2.1 | 33 | 71 | 6188 | 204 | 440 |
| $[0.6,0.9]$ | 1.1 | 2.5 | 75 | 162 | 2585 | 194 | 419 |
| $[0.9,1.2]$ | 1.3 | 2.9 | 121 | 261 | 1615 | 195 | 421 |
| $[1.2,1.5]$ | 1.5 | 3.2 | 164 | 354 | 1227 | 201 | 435 |
| $[1.5,1.8]$ | 1.7 | 3.6 | 273 | 589 | 1041 | 284 | 613 |

Table \[BAO\_req\]: densities $\bar{n}$ are in units of $10^{-4}\;h^{3}\,{\rm Mpc}^{-3}$; the quoted area is that required to control sample variance (reach $V_\mathrm{eff}\sim 1\,{\rm Gpc}^{3}\,h^{-3}$); total galaxy counts are in units of $10^{3}$.

With the discovery of the acceleration of the expansion of the universe [@1998AJ....116.1009R; @1999ApJ...517..565P], possibly driven by a new form of energy with sufficient negative pressure, recent results have concluded that $\sim96$ percent of the energy density of the universe is in a form not conceived by the Standard Model of particle physics and not interacting with the photons, hence dubbed “dark”. Lying at the heart of this discovery is the distance-redshift relation mapped by the type Ia supernovae (SnIa) combined with the temperature power spectrum of the cosmic microwave background fluctuations. Since the first detections, there has been a large increase of data up to redshift $z\sim 1$ (@1998AJ....116.1009R, @1999ApJ...517..565P, @2007ApJ...666..694W, @2004ApJ...607..665R, @2007ApJ...659...98R, @2009AJ....138.1271D [@Riess2011ApJ...730..119R]). The current precision and accuracy required to obtain deeper insight into the cosmological model using SnIa is limited by the systematic errors of this probe; therefore a joint statistical analysis with other probes is mandatory to assess a firm picture of the cosmological model. Corresponding to the size of the well-established sound horizon in the primeval baryon-photon plasma before photon decoupling [@1970ApJ...162..815P], the BAO scale provides a standard ruler allowing for geometric probes of the global metric of the universe. 
In the late-time universe it manifests itself as an excess of galaxies with respect to an unclustered (Poisson) distribution at the comoving scale $r \sim100 h^{-1} \mathrm{Mpc}$ — corresponding to a fundamental wave mode $k\sim 0.063 h \mathrm{Mpc}^{-1}$. The value of this scale at higher redshift is accurately measured by the peaks in the CMB power spectrum ([*e.g.*]{} @2009ApJS..180..330K [@Komatsu_2011]). Galaxy clustering and CMB observations therefore allow for a consistent comparison of the same physical scale at different epochs. The first detections of the ‘local’ BAO [@2005MNRAS.362..505C; @2005ApJ...633..560E] were based on samples at low redshift $z \leq 0.4$. Further analyses over a larger redshift range ($z>0.5$) and a wider area confirmed the first result, reducing the errors by a factor of 2 [@Percival_2010; @Blake_2011]. Measurements of the BAO feature have thus become an important motivation for large galaxy redshift surveys; the small amplitude of the baryon acoustic peak and the large value of $r_\mathrm{BAO}$ require comoving volumes of order $\sim 1 \mathrm{Gpc}^3 h^{-3}$ and at least $10^5$ galaxies to ensure a robust detection ([*e.g.*]{} @1997PhRvL..79.3806T [@2003ApJ...594..665B]). BAO studies using luminous red galaxies (LRG) are currently being pushed to $z=0.7$ by the Baryonic Oscillation Spectroscopic Survey (BOSS) experiment as part of the Sloan Digital Sky Survey III (SDSS-III) survey [@2011AJ....142...72E]. So far, with a third of the spectroscopic data, the BAO feature has been measured at $z=0.57$ with a $6.7\sigma$ significance [@BOSSDR9BAO2012arXiv1203.6594A]. The final data set, which will be completed by mid-2014, will have a mean galaxy density of about $150$ galaxies per square degree over 10,000 deg$^2$. 
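The two wave modes used throughout this section, $k_1\simeq0.063$ and $k_2\simeq0.12\;h\,{\rm Mpc}^{-1}$, follow directly from the comoving acoustic scale: $k=2\pi/r_\mathrm{BAO}$ for the fundamental mode, and twice that for the second harmonic:

```python
import math

r_bao = 100.0               # comoving BAO scale [h^-1 Mpc]
k1 = 2.0 * math.pi / r_bao  # fundamental mode, ~0.063 h/Mpc
k2 = 2.0 * k1               # second harmonic, ~0.126 h/Mpc (quoted as 0.12 in the text)
```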
Recently, the WiggleZ experiment has obtained a significant $\sim 4.9\sigma$ detection of the BAO peak at $z=0.6$ by combining information from three independent galaxy surveys: the SDSS, the 6-degree Field Galaxy Survey (6dFGS) and the WiggleZ Dark Energy Survey [@Blake_2011]. In contrast to SDSS, WiggleZ has mapped the less biased, more abundant emission line galaxies [@Drinkwater_2010]. The next generation of cosmological spectroscopic surveys plans to map the high-redshift universe in the redshift range $0.6\leq z\leq2$ using the largest possible volume; see BigBOSS [@bigBOSS_2011], PFS-SuMIRe[^1], and EUCLID[^2]. To achieve this goal, suitable tracers covering this redshift range are needed. Above $z \sim 0.6$ the number density of LRGs decreases, while the bulk of the galaxy population is composed of star forming galaxies [@Abraham_1996; @Ilbert_06]; it is therefore compelling to build a large sample of this type of galaxy, which allows one to cover a large area and hence a large volume. The main challenge for future BAO surveys is to efficiently select targets for which a secure redshift can be measured within a short exposure time. Contrary to continuum-based LRG surveys, the observational strategy of next generation surveys such as BigBOSS, PFS-SuMIRe, and EUCLID is based on redshift measurements using emission lines, which are a common feature of star-forming galaxies. In this paper we focus on targeting strategies for selecting luminous ELGs at $0.6<z<1.7$ using optical photometry, and we test our strategies using the BOSS spectrograph on the SDSS telescope [@Gunn_2006]. The plan of the paper is as follows. In section \[section:ELGs\_BAO\], we derive the ELG redshift distribution necessary to detect the BAO feature. In section \[section:color\_selection\] we explain how the ELG selection criteria were designed using different photometric catalogs, based on the performance of the BOSS spectrograph. 
In section \[section:Measurements\] we compare observed spectra issued from this selection with simulations and we discuss the efficiency of the proposed selection schemes. In section \[properties\] we discuss the main physical properties of the ELGs. In section \[section:discussion\], we present the redshift distribution of the observed ELGs and how to improve the selection. In appendix \[tble\_appendix\] we display a representative set of the observed spectra. Throughout this study we assume a flat $\Lambda$CDM cosmology characterized by $(\Omega_m, n_s, \sigma_8)=(0.27,0.96,0.81)$. Magnitudes are given in the AB system. Baryon Acoustic Oscillations {#section:ELGs_BAO} ============================ ![image](BAO_needs2.pdf){width="180mm"} Density and geometry requirements --------------------------------- In order to constrain the distance-redshift relation at $z>0.6$ using the BAO, we need a galaxy sample that covers the volume of the universe observable at this redshift. In this section we derive the required mean number density of galaxies, $\bar{n}(z)$, and the area to be covered in order to observe the BAO feature at the one percent level. The statistical errors in the measurement of the power spectrum of galaxies $P(k,z)$, evaluated at redshift $z$ and at scale $k$, arise from sample variance and shot noise [@1986MNRAS.219..785K]. Denoting the latter as $\mathcal{N}(z)=1/\bar{n}(z)$, the minimal requirement to measure a significant signal is $$\bar{n}(z)P(k, z) = \frac{P(k, z)}{\mathcal{N}(z)} \gtrsim 2. \label{eqn:np_is_1}$$ As the amplitude of the power spectrum decreases with redshift, the required density increases with redshift. For example, at $z=0.6$, we need a galaxy density of $\bar{n}=2.1 \times10^{-4}\; h^3\mathrm{Mpc}^{-3}$; at $z=1.5$, $\bar{n}=3.2 \times10^{-4}\; h^3\mathrm{Mpc}^{-3}$. 
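Equation (\[eqn:np\_is\_1\]) inverts to a required density $\bar{n}\gtrsim 2/P(k,z)$. Taking the quoted densities at face value, this implies a power spectrum amplitude of roughly $2/2.1\times10^{-4}\approx 9.5\times10^{3}\;h^{-3}{\rm Mpc}^{3}$ at $z=0.6$ (an inferred number, shown only for illustration):

```python
def nbar_required(P, target=2.0):
    """Minimum mean density so that nbar * P >= target (shot-noise criterion)."""
    return target / P

def power_implied(nbar, target=2.0):
    """Power spectrum amplitude implied by a quoted required density."""
    return target / nbar

P_z06 = power_implied(2.1e-4)  # ~9.5e3 (h^-1 Mpc)^3 at z = 0.6
P_z15 = power_implied(3.2e-4)  # ~6.3e3 at z = 1.5: lower P -> higher required nbar
```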
The full trend in redshift bins is given in Table \[BAO\_req\] and in Figures \[ELG\_bao\_needs\] a) and b), which show equation (\[eqn:np\_is\_1\]) as a function of redshift for $k\simeq0.063 \;h \; {\rm Mpc}^{-1}$ and $k\simeq0.12\;h \; {\rm Mpc}^{-1}$ (the locations of the first and the second harmonics of the BAO peak in the linear power spectrum). In order to minimize the sample variance, we must sample the largest possible volume (a volume of 1 $\mathrm{Gpc}^3 \; h^{-3}$ roughly corresponds to a precision in the BAO scale measurement of 5 percent). To quantify this, we use the effective volume sampled, $V_{eff}$, defined as [@Tegmark_1997] $$V_\mathrm{eff}(k)= 4 \pi \int dr \, r^2 \left[ \frac{\bar{n}(r) b^2(z) P(r,k)}{1+\bar{n}(r) b^2(z) P(r,k)} \right]^2 . \label{eff_vol}$$ In this calculation, we assume a linear bias, following the DEEP2 study by @2008ApJ...672..153C, that varies with redshift as $b(z)=b_0 (1+z)$, with $b(z=0.8)=1.3$. The bias could be larger for the more luminous ELGs, which are thought to be the progenitors of massive red galaxies [@cooper2008]. We shall evaluate the bias of ELGs more precisely in a future paper. The corresponding area to be surveyed in order to reach $V_\mathrm{eff}\sim 1\mathrm{Gpc}^3 \; h^{-3}$ is shown in Table \[BAO\_req\], setting redshift bins of width $\Delta z=0.3$ from $z=0.3$ to $z=1.8$. Figure \[ELG\_bao\_needs\] c) shows the behavior of $V_\mathrm{eff}$ as a function of the area for a given redshift slice, with $\bar{n}$ given in the third column of Table \[BAO\_req\]. For the redshift range $[0.6,0.9]$ the survey area must be $\gtrsim$2,500 $\mathrm{deg}^2$. For the redshift range $[0.9,1.2]$ the survey area must be $\gtrsim$1,600 $\mathrm{deg}^2$. The observation of $[0.6,1.7]$ with a single galaxy selection thus needs 2,500 $\mathrm{deg}^2$ to sample the BAO at all redshifts. 
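The effective-volume integral can be evaluated numerically. Below is a sketch for one redshift slice, using the flat $\Lambda$CDM distance with $\Omega_m=0.27$ (as assumed in this paper) and the bias $b(z)=b_0(1+z)$ with $b(0.8)=1.3$; the constant power spectrum amplitude $P=10^{4}\;h^{-3}{\rm Mpc}^{3}$ is an illustrative fiducial value, not the paper's $P(k)$:

```python
import math

OMEGA_M = 0.27
C_OVER_H0 = 2997.92458            # c/H0 in h^-1 Mpc

def E(z):
    return math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def comoving_distance(z, n=512):
    """Comoving distance in h^-1 Mpc (trapezoidal integration of dz/E)."""
    dz = z / n
    s = 0.5 * (1.0 + 1.0 / E(z)) + sum(1.0 / E(i * dz) for i in range(1, n))
    return C_OVER_H0 * s * dz

def v_eff(z_min, z_max, nbar, P=1.0e4, area_deg2=2500.0, nz=64):
    """Effective volume (Tegmark 1997) over a redshift slice, in (h^-1 Mpc)^3."""
    b0 = 1.3 / 1.8                             # normalization from b(0.8) = 1.3
    omega = area_deg2 * (math.pi / 180.0)**2   # survey solid angle [sr]
    dz = (z_max - z_min) / nz
    total = 0.0
    for i in range(nz):
        z = z_min + (i + 0.5) * dz
        r = comoving_distance(z)
        dV = omega * r**2 * (C_OVER_H0 / E(z)) * dz   # comoving shell volume
        w = nbar * (b0 * (1 + z))**2 * P
        total += dV * (w / (1 + w))**2
    return total

v1 = v_eff(0.6, 0.9, nbar=2.5e-4)   # ~1e9 (h^-1 Mpc)^3, i.e. ~1 Gpc^3 h^-3
```

With the table's density for the $[0.6,0.9]$ slice and $\sim$2,500 deg$^{2}$, the result lands near the $1\,{\rm Gpc}^{3}\,h^{-3}$ target quoted in the text.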
### Reconstruction of the galaxy field {#reconstruction-of-the-galaxy-field .unnumbered} To obtain a high precision on the measurement of the BAO scale, it is necessary to correct the 2-point correlation function for the dominant non-linear effect of clustering. The bulk flows at a scale of $20\;h^{-1}\;{\rm Mpc}$ that form large scale structures smear the BAO peak: it is smoothed by the velocity of pairs (at redshift 1 the rms displacement for biased tracers due to bulk flows is $8.5\;h^{-1}\; {\rm Mpc}$ in real space and $17\;h^{-1} \; {\rm Mpc}$ in redshift space) [@2007ApJ...664..675E; @2007ApJ...664..660E]. Reconstruction consists of correcting this smoothing effect. The key quantity that allows reconstruction on a data sample is the smoothing scale used to reconstruct the velocity field, which should be as close to $5\;h^{-1}\; {\rm Mpc}$ as possible in order to measure the bulk flows without being biased by other non-linear effects that occur on smaller scales. The reconstruction algorithm applied to the SDSS-II Data Release 7 [@Abazajian_2009] LRG sample sharpens the BAO feature and reduces the errors from 3.5 percent to 1.9 percent. This sample has a density of tracers of $10^{-4}\; h^3\; {\rm Mpc}^{-3}$ and the optimum smoothing applied is $15\;h^{-1}\; {\rm Mpc}$ [@2012arXiv1202.0090P]. On the SDSS-III/BOSS data in our study (different patches cover 3,275 deg$^2$ out of a total of 10,000 deg$^2$), reconstruction sharpens the BAO peak, allowing a detection at high significance, but does not significantly improve the precision of the distance measurement due to the gaps in the current survey (see @BOSSDR9BAO2012arXiv1203.6594A). To allow an optimum reconstruction using a smoothing scale three times smaller ($5 \;h^{-1}\; {\rm Mpc}$), it is necessary to have a dense and contiguous galaxy survey: gaps in the survey footprint smaller than 1 Mpc and a sampling density higher than $3 \times 10^{-4}\; h^3 \; {\rm Mpc}^{-3}$. 
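The dense-sampling requirement just stated translates into a projected density per square degree through a straight comoving volume integral (flat $\Lambda$CDM, $\Omega_m=0.27$ as assumed in this paper). For the slice $0.6<z<0.9$ this yields roughly $6.5\times10^{5}\;(h^{-1}{\rm Mpc})^{3}$ per deg$^{2}$, so a density of $2.5\times10^{-4}\;h^{3}{\rm Mpc}^{-3}$ (the table value for $k_2$) maps to about 160 galaxies deg$^{-2}$, consistent with the 162 quoted in the observational requirements:

```python
import math

OMEGA_M = 0.27
C_OVER_H0 = 2997.92458                 # c/H0 in h^-1 Mpc
DEG2_PER_SR = (180.0 / math.pi)**2

def E(z):
    return math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def comoving_distance(z, n=2048):
    """Comoving distance in h^-1 Mpc (trapezoidal integration of dz/E)."""
    dz = z / n
    s = 0.5 * (1.0 + 1.0 / E(z)) + sum(1.0 / E(i * dz) for i in range(1, n))
    return C_OVER_H0 * s * dz

def galaxies_per_deg2(z_min, z_max, nbar):
    """Projected density for a constant comoving density nbar [h^3 Mpc^-3]."""
    v_per_sr = (comoving_distance(z_max)**3 - comoving_distance(z_min)**3) / 3.0
    return nbar * v_per_sr / DEG2_PER_SR

n_proj = galaxies_per_deg2(0.6, 0.9, nbar=2.5e-4)   # ~160 deg^-2
```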
This setting should reduce the sample variance error on the acoustic scale by a factor of four. Observational requirements -------------------------- A mean galaxy density of $3 \times 10^{-4}\; h^3 \; {\rm Mpc}^{-3}$ can be reached by a projected density of 162 galaxies $\mathrm{deg}^{-2}$ with $0.6<z<0.9$, 261 $\mathrm{deg}^{-2}$ with $0.9<z<1.2$, 354 with $1.2<z<1.5$, and 589 with $1.5<z<1.8$. Considering a simple case where a survey is divided into three depths, the shallow one covering 2,500 deg$^2$ should contain 419,000 galaxies; the medium one 421,000 galaxies over 1,600 deg$^{2}$; and the deep one 435,000 galaxies over 1,200 deg$^{2}$. This represents a survey containing 1,350,000 measured redshifts in the redshift range $[0.6,1.5]$. The challenge is to build a selection function that yields these projected densities. Given a ground-based large spectroscopic program that measures $1.5 \times10^6$ spectra (corresponding to about 4 years of dark time operations on the SDSS telescope dedicated to ELGs), the challenge is to define a selection criterion that samples galaxies to measure the BAO over the greatest possible redshift range. We define the selection efficiency as the ratio of the number of spectra in the desired redshift range to the number of measured spectra. The example in the previous paragraph needs a selection with an efficiency of $1.35/1.5\sim$ 90 percent. Previous galaxy target selections ---------------------------------- To reach densities of tracers $\gtrsim10^{-4}\; h^3\; {\rm Mpc}^{-3}$ at $z>0.6$ with high efficiency, a simple magnitude cut is not enough. Such a selection would be largely dominated by low-redshift galaxies. The use of colour selections is necessary to narrow the redshift range of the targets selected for observation. SDSS-I/II galaxies were selected with visible colours in the red end of the colour distribution of galaxies, resulting in a sample of LRGs and not ELGs [@2001AJ....122.2267E]. 
The projected density of LRGs is $\sim120$ deg$^{-2}$ with a peak in the redshift distribution at $z\sim0.35$. With the SDSS-I/II LRG sample, the distance-redshift relation was measured to 2 percent precision at $z=0.35$. BOSS has currently completed about half of its observation plan. The tracers used by BOSS are, like the SDSS-I/II LRGs, selected in the red end of the colour distribution of galaxies; they are called CMASS (for ‘constant mass’ galaxies) and the selection will be detailed in Padmanabhan et al. in prep. (2012). The current BAO detection using the data release 9 (a third of the observation plan) with the CMASS tracers at $z\sim 0.57$ has a $6.7\sigma$ significance (@BOSSDR9BAO2012arXiv1203.6594A). WiggleZ blue galaxies are selected using UV and visible colours: they have a density of 240 galaxies deg$^{-2}$ and a peak in the redshift distribution around $z=0.6$ [@Drinkwater_2010]. The WiggleZ experiment has obtained a $4.9\sigma$ detection of the BAO peak at $z=0.6$ [@Blake_2011]. At their peak density, both of these BAO surveys reach a galaxy density of $3 \times 10^{-4}\; h^3\; {\rm Mpc}^{-3}$, which guarantees a significant detection of the BAO. Galaxy selections beyond $z=0.6$ were already performed by surveys such as the VIMOS-VLT Deep Survey[^3] (VVDS, see @2005Natur.437..519L), DEEP2[^4] (see @Davis_2003) or the Vimos Public Extragalactic Redshift Survey[^5] (VIPERS, see Guzzo et al. 2012, in preparation), but they were not tuned for a BAO analysis. The DEEP2 survey selected galaxies using BRI photometry in the redshift range $0.75-1.4$ over a few square degrees, with a redshift success rate of 75 percent, using the Keck Observatory. It studied the evolution of the properties of galaxies and of galaxy clustering compared to samples at low redshift. In particular, insights into galaxy clustering to $z=1$ provide strong constraints on the bias of these galaxies [@2008ApJ...672..153C]. 
The VVDS wide survey measured 20,000 redshifts over 4 deg$^2$ down to $I_{AB}<22.5$; it studied the properties of the galaxy population out to redshift $1.2$ and the small-scale clustering around $z=1$. The VIPERS survey maps the large-scale distribution of 100,000 galaxies over 24 $\mathrm{deg}^2$ in the redshift range $0.5-1.2$, mainly to study clustering and redshift-space distortions. Their colour selection, based on the *ugri* bands, is described in more detail in Section \[section:discussion\].

Color Selections {#section:color_selection}
================

Our aim is to explore different colour selections that focus on galaxies located in $0.6<z<1.7$ with strong emission lines, so that assigning redshifts to these galaxies is feasible within short exposure times (typically one hour of integration on the 2.5m SDSS telescope). The methodology used here was first explored by @Davis_2003, @Adelberger_2004 and @Drinkwater_2010. @Adelberger_2004 derived different colour selections for faint galaxies (with $23<R<25.5$) at redshifts $1<z<3$ based on the Great Observatories Origins Deep Survey data (GOODS, see @2003ApJ...587...25D). @Drinkwater_2010 selected ELGs using UV photometry from the Medium Imaging Survey of the GAlaxy EVolution EXplorer (MIS-GALEX, see @2005ApJ...619L...1M) combined with SDSS, to obtain a final density of $238$ ELGs per square degree with $0.2< z <0.8$ over $\sim 800$ square degrees. Our motivation is to probe much wider areas than GOODS or GALEX (ultimately a few thousand square degrees) and to concentrate on intrinsically more luminous galaxies (typically with $g<23.5$) with a redshift distribution extending to redshift 1.7. The selection criteria studied in this work are designed for a ground-based survey, and more specifically for the SDSS telescope, a 2.5m telescope located at Apache Point Observatory (New Mexico, USA), which has a [*unique*]{} wide field of view to carry out LSS studies [@Gunn_2006].
The current BOSS spectrographs cover a wavelength range of $3600-10200 \AA$. Their spectral resolution, defined as the wavelength divided by the size of a resolution element, varies from $R\sim 1,600$ at $3,600\AA$ to $R\sim 3,000$ at $10,000\AA$ [@2011AJ....142...72E]. The highest redshift detectable with the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission line doublet $(\lambda 3727,\lambda 3729)$ is thus $z_\mathrm{max}=1.7$. To select ELGs in the redshift range $[0.6,1.7]$ we have explored two different selection schemes: the first using *u,g,r* photometry and the second using *g,r,i* photometry.

Photometric data properties: SDSS, CFHT-LS and COSMOS
-----------------------------------------------------

![Magnitude uncertainties in the four *ugri* bands: in red for SDSS photometry, in blue for CFHTLS photometry. The *u*-band quality limits the precision of the colour selection based on SDSS photometry. Note that the CFHTLS photometric redshift catalog is cut at $i=24$, and that the SDSS data are R-selected with $err_R\leq 0.2$.[]{data-label="mag_errors_SDSS_CFHT"}](logmagErrmag2.pdf){width="88mm"}

![image](selection.pdf){width="150mm"}

The photometric SDSS survey, delivered under the data release 8 (DR8, @SDSS_DR8), covers 14,555 square degrees in the 5 photometric bands *u, g, r, i, z*. It is the largest-volume multi-colour extragalactic photometric survey available today. The 3$\sigma$ magnitude depths are: $u=22.0$, $g=22.2$, $r=22.2$, $i=21.3$; see @1996AJ....111.1748F for the description of the filters and @1998AJ....116.3040G for the characteristics of the camera. The magnitudes we use are corrected for galactic extinction. The Canada France Hawaii Telescope Legacy Survey[^6] (hereafter CFHTLS) covers $\sim155$ deg$^2$ in the *u,g,r,i,z* bands. The transmission curves of the filters differ slightly[^7] from those of SDSS. The data and cataloging methods are described in the T0006 release document[^8].
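The value $z_\mathrm{max}=1.7$ quoted above for the BOSS spectrographs follows directly from the red limit of the wavelength coverage: the redder component of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet must still fall below $10200\,\AA$. A quick check:

```python
# Maximum redshift at which the [OII] doublet stays within the BOSS
# wavelength coverage (3600-10200 Angstrom).
LAMBDA_MAX = 10200.0   # red limit of the spectrograph, Angstrom
LAMBDA_OII = 3728.8    # redder line of the [OII] doublet, rest frame

z_max = LAMBDA_MAX / LAMBDA_OII - 1.0
print(f"z_max = {z_max:.2f}")  # 1.74, quoted as 1.7 in the text
```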
The 3$\sigma$ magnitude depths are: $u=25.3$, $g=25.5$, $r=24.8$, $i=24.5$. The CFHT-LS photometry is ten times (in $r$ and $i$) to thirty times (in $u$) deeper than SDSS DR8; however, the CFHTLS covers a much smaller field of view than SDSS DR8. The magnitudes we use are corrected for galactic extinction. The CFHT-LS photometric redshift catalogs are presented in @Ilbert_06 and @Coupon_2009; the photometric redshift accuracy is estimated to be $\sigma_z < 0.04 (1+z)$ for $g\leq 22.5$. This photometric redshift catalog is cut at $i=24$, beyond which photometric redshifts are highly unreliable. Fig. \[mag\_errors\_SDSS\_CFHT\] displays the relative depth of the SDSS and CFHT-LS wide surveys in the *u,g,r,i* bands. COSMOS is a deep 2 deg$^2$ survey that has been observed at more than 30 different wavelengths [@2007ApJS..172....1S]. The COSMOS photometric catalog is described in @Capak_2007 and the photometric redshifts in @Ilbert_2009. The COSMOS Mock Catalog (hereafter CMC; see[^9]) is a simulated spectro-photometric catalog based on the COSMOS photometric catalog and its photometric redshift catalog. The magnitudes of an object in any filter can be computed using the photometric redshift best-fit spectral templates (@Jouvel_2009, Zoubian et al. 2012, in preparation). The limiting magnitudes of the CMC in each band are the same as in the real COSMOS catalog (detection at $5\sigma$ in a 3" diameter aperture): $\emph{u}<26.4$, $\emph{g}<27$, $\emph{r}<26.8$, $\emph{i}<26.2$. For magnitudes in the range $14<m<26$ in the *g,r,i* bands from the Subaru telescope and in the *u* band from CFHTLS, the CMC contains about 280,000 galaxies in 2 deg$^2$ down to the COSMOS depth. The mock catalog also contains a simulated spectrum for each galaxy. These simulated spectra are generated with the templates used to fit the COSMOS photometric redshifts.
Emission lines are empirically added using Kennicutt calibration laws [@Kennicutt_1998; @Ilbert_2009], and have been calibrated using zCOSMOS [@Lilly_2009] as described in Zoubian et al. 2012, in preparation. The strength of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission lines was confirmed using the DEEP2 and VVDS DEEP luminosity functions [@LeFevre2005; @zhu09]. Finally, a host-galaxy extinction law is applied to each spectrum. The predicted observed magnitudes take into account the presence of emission lines.

Color selections {#color-selections}
----------------

Based on the COSMOS and CFHT-LS photometric redshifts, we explore two simple colour selection functions using the *ugr* and *gri* bands. Fig. \[selection\_figure\] shows the targets available in the *ugr* and *gri* colour planes. We construct a bright and a faint sample based on the photometric depths of SDSS and CFHT-LS.

### [*ugr*]{} selection

The *ugr* colour selection is defined by $-1<u-r<0.5$ and $-1<g-r<1$, which selects galaxies at $z\geq 0.6$ and ensures that these galaxies are strongly star-forming (the $u-r$ cut). The cut $-1<u-g<0.5$ removes all low-redshift galaxies ($z<0.3$). Finally, the magnitude range is $20<g<22.5$ for the bright sample and $g<23.5$ for the faint sample; see Fig. \[selection\_figure\] a) and b).

### [*gri*]{} selection

The bright *gri* colour selection is defined by the range $19<i<21.3$. We select blue galaxies at $z\sim 0.8$ with $0.8<r-i<1.4$ and $-0.2<g-r<1.1$ (Fig. \[selection\_figure\] c). In the faint range $21.3<i<23$, we tilt the selection towards higher redshifts with $-0.4<g-r<0.4$, $-0.2<r-i<1.2$ and $g-r<r-i$ (Fig. \[selection\_figure\] d).
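The *gri* cuts above translate directly into boolean masks on magnitude arrays. The sketch below (NumPy, with made-up magnitudes) applies the bright and faint *gri* boxes exactly as written in the text; it is an illustration of the cuts, not the survey's actual targeting code:

```python
import numpy as np

def gri_selection(g, r, i):
    """Apply the bright and faint gri colour boxes quoted in the text."""
    gr, ri = g - r, r - i
    bright = ((19 < i) & (i < 21.3) & (0.8 < ri) & (ri < 1.4)
              & (-0.2 < gr) & (gr < 1.1))
    faint = ((21.3 < i) & (i < 23) & (-0.4 < gr) & (gr < 0.4)
             & (-0.2 < ri) & (ri < 1.2) & (gr < ri))
    return bright, faint

# Two toy objects: one inside the bright box, one inside the faint box.
g = np.array([21.5, 22.7])
r = np.array([21.0, 22.5])
i = np.array([20.0, 22.0])
bright, faint = gri_selection(g, r, i)
print(bright.tolist(), faint.tolist())  # [True, False] [False, True]
```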
| Sample | $\#\,\mathrm{deg}^{-2}$ | $\bar{u}$ | $\bar{g}$ | $\bar{r}$ | $\bar{i}$ | $\bar{z}$ | $\sigma_z$ | $\bar{f_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}}$ | $Q^1_{f_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}}$ | $Q^3_{f_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| b | 130.0 | 21.98 | 21.87 | 21.69 | - | 1.25 | 0.53 | 61.74 | 46.47 | 88.39 |
| f | 1450.8 | 23.27 | 23.18 | 22.98 | - | 1.19 | 0.38 | 16.60 | 13.06 | 22.26 |
| b | 257.2 | - | 22.69 | 21.87 | 20.93 | 0.80 | 0.21 | 13.85 | 8.65 | 22.21 |
| f | 2170.5 | - | 23.34 | 23.09 | 22.55 | 0.93 | 0.31 | 10.23 | 6.83 | 15.99 |
| b | 193.3 | 21.95 | 21.8 | 21.7 | - | 1.28 | 0.38 | - | - | - |
| f | 1766.8 | 23.37 | 23.19 | 23.07 | - | 1.29 | 0.31 | - | - | - |
| b | 361.4 | - | 22.62 | 21.8 | 20.82 | 0.81 | 0.11 | - | - | - |
| f | 3317.5 | - | 23.34 | 23.11 | 22.55 | 1.03 | 0.35 | - | - | - |
| b | 232.2 | 21.89 | 21.76 | 21.69 | - | 1.27 | 0.37 | - | - | - |
| f | 1679.1 | 23.36 | 23.18 | 23.06 | - | 1.28 | 0.31 | - | - | - |
| b | 391.6 | - | 22.62 | 21.78 | 20.8 | 0.82 | 0.1 | - | - | - |
| f | 3334.2 | - | 23.34 | 23.11 | 22.54 | 1.03 | 0.33 | - | - | - |
| b | 166.96 | 21.76 | 21.77 | 21.52 | - | - | - | - | - | - |
| b | 204.96 | - | 22.57 | 21.75 | 20.76 | - | - | - | - | - |

: Number densities, mean magnitudes, mean redshifts and mean $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ fluxes (with first and third quartiles, when available) of the selected samples (b: bright, f: faint).[]{data-label="mock_selections"}

Predicted properties of the selected samples
--------------------------------------------

The *ugr* colour selection avoids the stellar sequence, but not the quasar sequence. Hence, the contamination of the *ugr* selection by point-source objects is primarily due to quasars; see Fig. \[selection\_figure\] a) and b). The resulting photometric-redshift distribution, as derived from the CFHT-LS photometric redshift catalog, has a wide span in redshift, covering $0.6<z<2$ as shown in Fig. \[selection\_figure\_bis\]. The distribution is centered at $z=1.3$ for both the bright and the faint samples, with a scatter of $0.3$ (see Table \[mock\_selections\]). The expected $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ fluxes are computed from the CMC catalog and are shown in Fig. \[selection\_figure\_bis\].
For 90 percent of the galaxies in the faint sample, the predicted flux is above $10.6 \times 10^{-17}\mathrm{erg\,cm^{-2}\,s^{-1}}$. The bright-sample galaxies show strong emission lines. The *gri* selection avoids both the stellar sequence and the quasar sequence; see Fig. \[selection\_figure\] c) and d). Thus the contamination from point sources should be minimal. Fig. \[selection\_figure\_bis\] shows the photometric redshift distribution of the *gri* selection applied to CFHT photometry. The redshifts are centered at $z=0.8$ for the bright and $1.0$ for the faint sample (see Table \[mock\_selections\]). The expected $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ flux, computed with the CMC catalog, is shown in Fig. \[selection\_figure\_bis\]. As expected, the emission lines are weaker than for the *ugr* selection. The different selections shown in Fig. \[selection\_figure\] and Fig. \[selection\_figure\_bis\] are summarized in Table \[mock\_selections\], which contains the number densities available, mean magnitudes, mean redshifts, and mean $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ fluxes (when available) of the different samples considered. We find lower densities in the CMC than in the CFHT-LS catalog; this is probably due to cosmic variance, as the CMC only covers 2 deg$^2$. The SDSS colour-selected samples are complete for the bright samples at $g<22.5$ and $i<21.3$, but not for the faint samples. The CFHTLS-selected samples are complete for both the bright and faint samples; see Fig. \[cumulative\_samples\], where the total cumulative number counts (solid lines) of the *ugr* and *gri* colour-selected samples are plotted as a function of the $g$ and $i$ bands, respectively. At the bright end of this figure, although both photometric catalogs are complete at the bright limit, we note a discrepancy between the total number of targets selected on CFHT and on SDSS: the CFHT selections are denser than the SDSS ones (difference between the red and blue solid lines).
This is due to the transposition of the colour selection from one photometric system to the other. In fact, we select targets on SDSS with a criterion transposed from CFHT using the calibrations of @Regnault_2009. The transposed criterion is as tight as the original, but since the magnitude errors are larger in the SDSS system, the colour distributions are broader; the SDSS selection is therefore slightly less dense than the CFHT selection. Targeting in the bright range is limited by the galaxy density: in the best case one can reach 300 targets deg$^{-2}$, and the sample contains point sources (stars and quasars) and low-redshift galaxies. In the faint range, the target density is ten times greater, but the exposure time necessary to assign a reliable redshift will be much longer (going one magnitude deeper for a continuum-based redshift roughly corresponds to an exposure five times longer). The stellar, quasar and low-redshift contamination is smaller in the faint range. Fig. \[selection\_figure\_bis\] shows the distributions in redshift and in $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ flux we expect given a magnitude range and a colour criterion within the framework of the CMC simulation. The main trend is that the *ugr* selection identifies strong $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emitters out to $z\sim2$, whereas the *gri* selection peaks at redshift $1$ and extends to $1.4$, with weaker $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emitters.

![image](selection2.pdf){width="180mm"}

We also used a criterion to split targets into compact and extended sources, which is illustrated in Fig. \[cumulative\_samples\]. For CFHT-LS we have used the half-light radius (the $r_2$ value, to be compared to the $r_{2}^{limit}$ value which defines the maximal size of the PSF at the location of the object considered; see Coupon et al. 2009 and the CFHT-LS T0006 release document) to divide the sample into compact and extended objects.
For SDSS we used the “<span style="font-variant:small-caps;">type</span>” flag, which separates compact (<span style="font-variant:small-caps;">type</span>=6) from extended objects (<span style="font-variant:small-caps;">type</span>=3). For the *ugr* colour selection, the number counts are dominated by compact blue objects (quasars) at $g\leq22.2$. At $g\geq 22.2$ the counts are dominated by extended ELGs. For comparison, we show in Fig. \[cumulative\_samples\] the cumulative counts of the XDQSO catalog from @Bovy_2011, who identified quasars in SDSS down to $g<21.5$. We note an excellent match with the bright (compact) *ugr* colour-selected objects. For the *gri* colour selection, there is little contamination by compact objects because the colour box overlaps neither the stellar nor the quasar sequence.

![image](cumul.pdf){width="180mm"}

ELG Observations {#section:Measurements}
================

To test the reliability of both the bright *ugr* ($g<22.5$) and the bright *gri* ($i<21.3$) colour selections, we have conducted a set of dedicated observations as part of the “Emission Line Galaxy SDSS-III/BOSS ancillary program”. The observations were conducted between Autumn 2010 and Spring 2011 using the SDSS telescope with the BOSS spectrograph at Apache Point Observatory. A total of $\sim$2,000 spectra, each observed in four 15-minute exposures, were taken in different fields: namely, in Stripe 82 (using single-epoch SDSS photometry for the colour selection) and in the CFHT-LS W1, W3 and W4 wide fields (using CFHT-LS photometry). This data set was released in the SDSS-III Data Release 9[^10].

Description of SDSS-III/BOSS spectra
------------------------------------

We used the SDSS photometric catalog [@SDSS_DR8] to select 313 objects according to their *ugr* colours in Stripe 82 and 899 objects according to their *gri* colours in the CFHT-LS W3 field.
In addition, we used the CFHT-LS photometry to select for observation 878 *ugr* targets in the CFHT-LS W1 field and 391 *gri* targets in the CFHT-LS W3 field. The spectra are available in SDSS Data Release 9 and flagged ‘ELG’. All of these spectra were manually inspected to confirm or correct the redshifts produced by two different pipelines (<span style="font-variant:small-caps;">zCode</span> and its modified version that we used to fit the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission line doublet). As the BOSS pipeline redshift measurement is designed to fit the LRG continuum, some ELGs with no continuum were assigned wrong redshifts. To classify the observed objects, we have defined seven sub-categories:

### Objects with secure redshifts {#objects-with-secure-redshifts .unnumbered}

- ‘ELG’, Emission-line galaxy (redshift determined with multiple emission lines). These spectra usually have a weak ‘blue’ continuum and lack a ‘red’ continuum. Empirically, using the <span style="font-variant:small-caps;">platefit vimos</span> pipeline output, this class corresponds to a spectrum with more than two emission lines with observed equivalent widths $EW \leq -6 \AA$; see examples in Appendix \[tble\_appendix\].

- ‘RG’, Red Galaxy with continuum in the red part of its spectrum, allowing a secure redshift measurement through multiple absorption lines ([*e.g.*]{} Ca K&H, Balmer lines) and the $4000\AA$ break. Some of these objects also have weak emission lines (E+A galaxies). Empirically, their spectra have a mean $D_n(4000)$ of $1.3$, where $D_n(4000)$ is the ratio of the continuum level after the break to that before the break. These galaxies typically have $i\sim20$, which is fainter than the CMASS galaxies targeted by BOSS.

- ‘QSO’, Quasars, which are identified through multiple broad lines. Examples are given in Fig. \[qsos\].

- Stars.
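The $D_n(4000)$ index quoted for the ‘RG’ class can be computed as the ratio of the mean flux density redward and blueward of the break. The sketch below uses the common narrow-band definition (3850-3950 Å and 4000-4100 Å rest frame); the exact bandpasses and the use of $F_\lambda$ rather than $F_\nu$ are our assumptions, since the text does not specify them:

```python
import numpy as np

def dn4000(wavelength, flux):
    """Narrow-band D_n(4000): mean flux in 4000-4100 A over that in
    3850-3950 A (assumes a rest-frame, de-redshifted spectrum)."""
    blue = (wavelength >= 3850) & (wavelength <= 3950)
    red = (wavelength >= 4000) & (wavelength <= 4100)
    return flux[red].mean() / flux[blue].mean()

# Toy spectrum with a 30 percent step at 4000 A -> D_n(4000) = 1.3,
# the mean value quoted for the 'RG' class.
wl = np.linspace(3700, 4300, 601)
fl = np.where(wl < 4000, 1.0, 1.3)
print(f"D_n(4000) = {dn4000(wl, fl):.2f}")  # 1.30
```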
### Objects with unreliable redshifts {#objects-with-unreliable-redshifts .unnumbered}

- ‘Single emission line’: the spectrum contains only a single emission line, which does not allow a unique redshift determination. For this population, the CFHT T0006 photometric redshifts are compared to the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ redshift (assuming the single emission line is $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$) in Fig. \[sinlgeEmLowContiRedshift\]. The two estimates agree very well: 77.7 percent have $|z_{spec} - z_{phot}|/(1+ z_{spec})<0.1$ for the *gri* selection and 62.7 percent for the *ugr* selection. These galaxies with uncertain redshifts tend to have slightly fainter magnitudes, with a mean CFHT *g* magnitude of 22.6 and a scatter of 0.6, whereas the mean for the whole ELG sample is 22.4 with a scatter of 0.4.

- ‘Low continuum’: spectra that show a $4000 \AA$ break too weak for a secure redshift estimate. The agreement between the photometric and spectroscopic redshift estimates is excellent: 84.6 percent agree within 10 percent errors; see Fig. \[sinlgeEmLowContiRedshift\].

- ‘Bad data’: the spectrum is featureless, extremely noisy, or both.

The detailed physical properties of the ELGs are discussed in section \[properties\] and a number of representative spectra are displayed in Appendix \[tble\_appendix\].

![T0006 CFHT-LS photometric redshifts of single emission line and low continuum galaxies plotted against the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ redshift. A strong correlation is clearly evident. A slight systematic over-estimation of the photometric redshift is visible above redshift 1.2 (these photometric redshifts were calibrated below 1.2).[]{data-label="sinlgeEmLowContiRedshift"}](singleEmLoxConti.pdf){width="88mm"}

Redshift Identification
-----------------------

The results of the observations are summarized by category in Table \[objects\_W3\].
For the targets selected using SDSS photometry with the *ugr* selection, 32 percent are ELGs at a redshift $z>0.6$ (100 spectra). Low-redshift ELGs represent another 32 percent of the observed targets (101 spectra). The other categories are: 65 ‘bad data’ (20 percent), 30 quasars (10 percent), 10 stars (3.5 percent), and 7 red galaxies with $z<0.6$ (2.5 percent). With the *gri* selection, 57 percent of the targets are at $z>0.6$; however, 21 percent of the spectra still fall into the ‘bad data’ class. Using CFHTLS photometry, 46 percent of the targets are ELGs at $z>0.6$ and 14 percent are quasars with the *ugr* selection. With the *gri* selection, 73 percent are galaxies at $z>0.6$, five-sixths of which are ELGs. For both selections, targeting with CFHTLS is more efficient than with SDSS. The complete classification of the observed targets is given in Table \[objects\_W3\]. The redshift distribution of the observed objects is compared to the distributions from the current BAO experiments BOSS and WiggleZ in Fig. \[ELG\_nz\]. The figure shows that the *ugr* and *gri* target selections enable a BAO study at higher redshifts. With a joint selection, we can reach the requirements described in Table \[BAO\_req\] to detect the BAO feature out to redshift 1.

| Type | *gri* SDSS | % | *gri* CFHT-LS | % | *ugr* SDSS | % | *ugr* CFHT-LS | % |
|----------------------|-----|-----|-----|-----|-----|-----|-----|-----|
| ELG ($z>0.6$) | 450 | 50 | 240 | 61 | 100 | 32 | 402 | 46 |
| ELG ($z<0.6$) | 60 | 7 | 3 | 1 | 101 | 32 | 84 | 9 |
| RG ($z>0.6$) | 73 | 8 | 46 | 12 | 0 | 0 | 0 | 0 |
| RG ($z<0.6$) | 30 | 3 | 0 | 0 | 7 | 3 | 0 | 0 |
| single emission line | 36 | 4 | 12 | 3 | 0 | 0 | 102 | 12 |
| low continuum | 13 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| QSO | 8 | 1 | 5 | 1 | 30 | 10 | 126 | 14 |
| stars | 44 | 5 | 12 | 3 | 10 | 3 | 6 | 1 |
| bad data | 185 | 21 | 72 | 18 | 65 | 20 | 158 | 18 |
| total | 899 | 100 | 391 | 100 | 313 | 100 | 878 | 100 |

: Classification of the observed targets, by colour selection and photometric catalog.[]{data-label="objects_W3"}

![Observed redshift distribution for the *ugr* ELGs (blue), the *gri* ELGs (black) compared to the distribution of galaxies from BOSS (red) and WiggleZ (green).
Magenta lines represent constant galaxy densities of 1 and 3 $\times10^{-4}\; h^3 \;{\rm Mpc}^{-3}$, which constitute our density goals.[]{data-label="ELG_nz"}](nZobserved.pdf){width="88mm"}

Comparison of measured ELGs with the CMC forecasts
--------------------------------------------------

![image](nicePlot5.pdf){width="180mm"}

To investigate the expected purity of the ELG samples, we created mock catalogs covering redshifts between 0.6 and 1.7. Continuum spectra of ELGs were generated from the Cosmos Mock Catalog and emission lines were added according to the modeling described in @Jouvel_2009. Two simulated galaxy catalogs were built, one for each colour selection function (*ugr* and *gri*). Each synthetic spectrum was degraded with sky and photon noise, as if observed by the BOSS spectrographs, using the <span style="font-variant:small-caps;">specsim1d</span> software. We simulated a set of four exposures of 900 seconds each. The resulting simulated spectra were then analyzed by the <span style="font-variant:small-caps;">zCode</span> pipeline [@2006MNRAS.372..425C] to extract the spectroscopic redshift. As our targets are mainly emission line galaxies, we only use the redshift estimate based on fitting discrete emission line templates in Fourier space over all redshifts. We then address the flux measurement of the emission lines. This exercise was conducted using the <span style="font-variant:small-caps;">Platefit Vimos</span> software developed by @Lamareille_2009. This software is based on the <span style="font-variant:small-caps;">platefit</span> software that was developed to analyze SDSS spectra [@2004ApJ...613..898T; @2004MNRAS.351.1151B]. The <span style="font-variant:small-caps;">platefit vimos</span> software was developed to measure the flux of all emission lines after removing the stellar continuum and absorption lines from lower-resolution and lower signal-to-noise ratio spectra.
The stellar component of each spectrum is fit by a non-negative linear combination of 30 single stellar population templates with different ages (0.005, 0.025, 0.10, 0.29, 0.64, 0.90, 1.4, 2.5, 5 and 11 Gyr) and metallicities (0.2, 1 and 2.5 $Z_\odot$). These templates have been derived using the @2003MNRAS.344.1000B libraries and have been resampled to the velocity dispersion of VVDS spectra. The dust attenuation in the stellar population model is left as a free parameter. Foreground dust attenuation from the Milky Way has been corrected using the @1998ApJ...500..525S maps. After removal of the stellar component, the emission lines are fit as a single nebular spectrum made of a sum of Gaussians at specified wavelengths. All emission lines are set to have the same width, with the exception of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]\lambda3727$ line, which is a doublet of two lines at 3726 and 3729 $\AA$ and therefore appears broadened compared to other single lines. Detected emission lines may also be removed from the original spectrum in order to obtain the observed stellar spectrum and measure indices, as well as emission-line equivalent widths. The underlying continuum is obtained by smoothing the stellar spectrum. Equivalent widths are then measured via direct integration, over a $5\sigma$ bandpass, of the emission-line Gaussian model divided by the underlying continuum. Emission-line fluxes are then measured for each simulated spectrum using the redshift extracted by <span style="font-variant:small-caps;">zCode</span>, and the true redshift for cross-checks. We consider that a redshift has been successfully measured if $\Delta z/(1+z)<0.001$. We believe that this threshold could be lowered to $10^{-4}$ in the future by using a more advanced redshift solver. Using the current pipeline, we can distinguish two regimes. The first regime is the redshift range $z<1.0$, where many emission lines (\[OII\], H$\beta$, \[OIII\]) are present in the SDSS spectrum.
For $g<23.5$, 91 percent of the redshifts are measured successfully. Among the remaining 9 percent, catastrophic failures represent 3.5 percent (the pipeline outputs a redshift between 0 and 1.6 with $\Delta z/(1+z)>0.01$), inaccurate redshifts represent 3.9 percent (the pipeline outputs a redshift between 0 and 1.6 with $0.001<\Delta z/(1+z)<0.01$), and 1.5 percent are not found by the pipeline ($z=-9$ is output). The second regime is the redshift range $1.0\leq z< 1.7$, where the redshift determination hinges on the identification of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet. For $g<23.5$, 66.8 percent of the redshifts are measured successfully; 19.1 percent are catastrophic failures and 14.1 percent are inaccurate redshifts. Work is ongoing to improve the redshift measurement efficiency at $z>1$. In the second regime, the minimum $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ flux required to compute a reliable redshift depends on the redshift (i.e. on the observed wavelength), because of the strong OH sky lines in the spectrum. We infer from the observed spectra that measuring a reliable redshift requires a $5\sigma$ detection of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ lines, that is, a detection (blended or not) of the two peaks of the doublet, separated by $2(1+z)\,\AA$. The detection significance is defined from the 1d spectrum. From the data, the faintest $5\sigma$ detections are made at a flux of $4\times 10^{-17} \mathrm{erg\,s^{-1}\,cm^{-2}}$, while $5\sigma$ detections on top of sky lines require a flux of $2\times 10^{-16} \mathrm{erg\,s^{-1}\,cm^{-2}}$. The simulation reproduces the same thresholds and thus confirms the detection limit we observe; see Fig. \[OII\_detection\_limit\]. The bottom plot of Fig. \[OII\_detection\_limit\] shows that the temporal variation of the sky has a non-negligible impact on the detection limit of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission doublet for redshifts $z>1.1$.
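In this second regime, the measurement amounts to fitting the two doublet components with a common width and a single redshift. A minimal sketch of such a fit on a synthetic noisy doublet (the rest-frame wavelengths of 3726.0 and 3728.8 Å, the noise level and all other numbers are illustrative assumptions, not the pipeline's actual values):

```python
import numpy as np
from scipy.optimize import curve_fit

LAM_OII = (3726.0, 3728.8)  # rest-frame [OII] doublet wavelengths, Angstrom

def doublet(lam, z, sigma, a1, a2):
    """Two Gaussians with a common width, tied to a single redshift."""
    model = np.zeros_like(lam)
    for lam0, amp in zip(LAM_OII, (a1, a2)):
        model += amp * np.exp(-0.5 * ((lam - lam0 * (1 + z)) / sigma) ** 2)
    return model

# Synthetic noisy doublet at z = 1.2 (observed near 8200 Angstrom).
rng = np.random.default_rng(42)
lam = np.linspace(8170.0, 8230.0, 300)
flux = doublet(lam, 1.2, 1.5, 1.0, 1.4) + rng.normal(0.0, 0.02, lam.size)

popt, _ = curve_fit(doublet, lam, flux, p0=(1.1995, 2.0, 1.0, 1.0))
print(f"fitted z = {popt[0]:.4f}")  # close to the input z = 1.2
```

Tying both components to one redshift and one width is what makes a blended doublet still usable, at the cost of degeneracy with single lines when the split is unresolved.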
This ELG sample is, however, too small to address this issue: it was observed during ten different nights, and the number of ELGs with $z>1.1$ is less than 60. It is thus not possible to derive a robust trend comparing the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ detections to the sky value of each observation. Handling this issue would require a sample of $\sim$500 redshifts in $1.1<z<1.6$ observed many times over many nights. With such a sample in hand, we could quantify exactly how to optimize the observational strategy.

Physical properties of ELGs {#properties}
===========================

All *ugr* and *gri* ELG spectra were analyzed with two different software packages: the PlateFit VIMOS [@Lamareille_2009] and the Portsmouth Spectroscopic Pipeline [@2012arXiv1207.6115T]. In this section we discuss the following physical properties of the observed ELGs: redshift, star-formation rate (SFR), stellar mass, metallicity and the ELG spectral type (Seyfert 2, LINER, SFG, composite). Observations of larger ELG samples are planned to estimate how these quantities vary with cosmic time and environment, and also how the clustering depends on these physical quantities. This is key to placing future BAO tracers within the galaxy formation history. With the current sample, we draw simple trends using the means and standard deviations of the observed quantities, and we place the ELGs in the galaxy classification of @Lamareille_2010 [@Marocco_2011].

Main Properties
---------------

The main properties of the ELGs are shown in Table \[main\_properties\]. The star-formation rate was computed using equation 18 of @Argence_2009. The stellar mass was estimated using the CFHTLS *ugriz* photometry. (The errors on the stellar mass using only SDSS photometry were too large to be meaningful, hence the empty cells in the table.) The metallicity is estimated using the calibration of @Tremonti_2004.
The main trends are:

- The *gri*-selected galaxies of CFHTLS are the most massive in terms of stellar mass.

- The *ugr* selection picks out stronger star-forming galaxies than *gri* (due to the *u*-band selection). There is a factor-of-two variation in the strength of the measured oxygen lines.

- The *ugr* selection picks galaxies with $12 + \log(\mathrm{O/H}) \in [8,9]$, whereas *gri* focuses slightly more on the higher values, $12 + \log(\mathrm{O/H}) \approx 9$.

- The SFR appears to be independent of the colour selection scheme.

| | *gri*: mean | $\sigma$ | *gri*: mean | $\sigma$ | *ugr*: mean | $\sigma$ | *ugr*: mean | $\sigma$ |
|---|---|---|---|---|---|---|---|---|
| EW$_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}$ | -14.86 | 9.01 | -16.75 | 10.13 | -50.58 | 27.24 | -30.75 | 23.04 |
| Flux$_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}$ | 16.85 | 9.65 | 18.58 | 10.37 | 30.36 | 30.1 | 24.23 | 39.27 |
| EW$_{H_\beta}$ | -10.28 | 10.8 | -10.72 | 8.65 | -24.27 | 22.88 | -17.18 | 19.34 |
| Flux$_{H_\beta}$ | 15.44 | 8.6 | 14.63 | 7.72 | 12.97 | 15.16 | 12.57 | 23.91 |
| EW$_{\left[\mathrm{O\textrm{\textsc{iii}}}\right]}$ | -10.09 | 10.98 | -11.33 | 10.76 | -65.3 | 91.56 | -16.89 | 30.49 |
| Flux$_{\left[\mathrm{O\textrm{\textsc{iii}}}\right]}$ | 17.74 | 20.15 | 17.43 | 21.59 | 35.13 | 53.49 | 13.39 | 37.79 |
| $12 + \log(\mathrm{O/H})$ | 8.94 | 0.20 | 8.92 | 0.19 | 8.69 | 0.21 | 8.69 | 0.25 |
| $\log$SFR$_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]}$ | 0.97 | 0.35 | 0.92 | 0.45 | 0.96 | 1.24 | 0.76 | 0.84 |
| $\log(M^*/M_\odot)$ | 10.85 | 0.3 | 10.23 | 6.87 | 9.33 | 0.80 | - | - |

: Mean values and standard deviations of the emission-line and physical properties of the four ELG sub-samples (two *gri*-selected, two *ugr*-selected).[]{data-label="main_properties"}

Classification
--------------

![image](classification.pdf){width="180mm"}

We use a recent classification [@Lamareille_2010; @Marocco_2011] for the ELG sample.
The classification is made using $\log(\left[\mathrm{O\textrm{\textsc{iii}}}\right]/H_\beta)$, $\log(\left[\mathrm{O\textrm{\textsc{ii}}}\right]/H_\beta)$, $D_n(4000)$, and $\log(\max(EW_{\left[\mathrm{O\textrm{\textsc{ii}}}\right]},EW_{\left[\mathrm{Ne\textrm{\textsc{iii}}}\right]}))$. We compare the ELG sample to zCOSMOS, as zCOSMOS contains numerous star-forming galaxies in the redshift range we are observing. Fig. \[classification\_ELG\] a) shows that the zCOSMOS and the *ugr* ELG samples are located in three of the five areas delimited by the classification: Seyfert 2 (‘Sy2’), Star Forming Galaxies (‘SFG’), and a third region where both mix (‘Sy2/SFG’). There are a few LINERs and composites in either sample. Fig. \[classification\_ELG\] b) separates the galaxies of the ‘Sy2/SFG’ area into ‘SFG’ or ‘Sy2’, and shows that the zCOSMOS galaxies from this area are both ‘Sy2’ and ‘SFG’, whereas the *ugr* ELGs in this area are mostly ‘SFG’. The observed *gri* sample is located in the Star Forming Galaxy (‘SFG’) area, whether selected on CFHT or on SDSS photometry. Finally, the selected ELGs, whether *ugr* or *gri*, lie in the ‘SFG’ part of the classification.

Discussion {#section:discussion}
==========

Redshift identification rates in [*ugr*]{} and [*gri*]{}
--------------------------------------------------------

We summarize the redshift measurement efficiency of the *gri* and *ugr* colour-selected galaxies presented in this paper in Tables \[objects\_W3\] and \[redshift\_efficiency\], and we compare the results with those of WiggleZ [@Drinkwater_2010], BOSS and VIPERS (the VIPERS percentages are based on a preliminary subset including only $\sim$ 20 percent of the survey). The original VIPERS selection flag (J. Coupon and O. Ilbert, private communication) is defined so that an object has colours compatible with $z > 0.5$ if it has ($r-i \geq 0.7$ and $u-g\geq1.4$) or ($r-i \geq 0.5(u-g)$ and $u-g<1.4$) (Guzzo et al.
(2012), in preparation). The efficiencies in Table \[redshift\_efficiency\] show that better photometry, and thus more precise colours, yields a higher efficiency in terms of obtaining objects in the targeted redshift range. They also show that the colour selections proposed in this paper are competitive for building an LSS sample. To determine the photometric precision necessary to maintain the observed efficiencies, we degrade the photometry of the observed ELGs, then reselect them and recompute the efficiencies. Using a photometry less precise than the CFHTLS by a factor of 2.5 in the errors (the ratio of the median values of the mag errors in bins of 0.1 in magnitude equals 2.5) does not significantly change either the efficiency or the redshift distribution implied by the colour selection. This change also corresponds to loosening the colour criterion by 0.1 mag. For the *eBOSS* survey a photometry 2.5 times less precise than CFHTLS should be sufficient to maintain a high targeting efficiency (for comparison, SDSS is 10 times less precise than CFHTLS); Fig. \[degradedPhotometry:fig\] shows the smearing of the galaxy positions in the colour-colour plane for a degraded photometry.

| selection scheme | spectroscopic redshift | object in z window | quasars |
|------------------|------------------------|--------------------|---------|
| *gri* SDSS       | 80                     | 62                 | 1       |
| *gri* CFHTLS     | 82                     | 73                 | 1       |
| *ugr* SDSS       | 80                     | 32                 | 10      |
| *ugr* CFHTLS     | 78                     | 56                 | 13      |
| WiggleZ          | 60                     | 35                 | -       |
| BOSS             | 95                     | 95                 | -       |
| VIPERS           | 80                     | 70                 | -       |

: Redshift efficiency in percent. The second column, ‘spectroscopic redshift’, gives the percentage of targets for which a spectroscopic redshift was obtained with the selection. The third column, ‘object in z window’, gives the percentage of spectroscopic redshifts that fall in the range the survey is aiming at; it is the efficiency of the target selection. The redshift window for ELG selection is $z>0.6$.[]{data-label="redshift_efficiency"}

![U-R vs.
G-I coloured according to the photometric redshift. Left: CFHTLS photometry; right: CFHTLS photometry degraded by a factor of 2.5. This comparison shows how the degradation of the photometry smears the clean separations between galaxy populations in redshift.[]{data-label="degradedPhotometry:fig"}](degradedPhotometry.pdf){width="88mm"}

Measurement of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet, single emission line spectra
------------------------------------------------------------------------------------------------------

For ground-based spectroscopic surveys observing ELGs with $1<z<1.7$, the only emission line remaining in the spectrum to assign the spectroscopic redshift is the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublet. For the redshift to be certain, the doublet must be split ([*i.e.*]{}, we do not want the target to be classified as a ‘single emission line’ ELG). Fig. \[OiiRedshifts:Fig\] shows a subsample of the observed bright *ugr* ELGs where the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ doublets are well resolved.

![image](OiiResolution.pdf){width="180mm"}

We can circumvent the ‘single emission line’ ELG issue (Fig. \[sinlgeEmLowContiRedshift\]) by increasing the resolution of the spectrograph. This modification will enable a better split of $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$, and will increase the room available to observe the doublet by rendering sky lines ‘thinner’. The sky acts as an observational window and prevents some narrow redshift ranges from being sampled by the spectrograph; see Fig. \[OII\_detection\_limit\]. Increasing the resolution dilutes the signal, and thus the exposure time has to be increased to reconstruct the doublet properly above the mean sky level. We performed a simulation of the $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ emission line fit to quantify by how much the resolution must be increased to have no ‘single emission line’ ELGs.
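Such a simulation can be sketched in a few lines. This is a minimal illustration, not the authors' code: the total flux, noise level and reference wavelength are the values quoted in the text, while the pixel grid, the doublet separation and the component flux ratio are assumptions made here for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fluxes are expressed in units of 1e-17 erg/s/cm2 to keep the optimizer
# well conditioned. LAM1/LAM2 and RATIO are illustrative assumptions.
LAM1, LAM2 = 7452.1, 7457.6    # assumed doublet component wavelengths (A)
TOTAL_FLUX, NOISE = 10.0, 3.0  # 1e-16 and 3e-17 erg/s/cm2, as in the text
RATIO = 0.8                    # assumed flux ratio of the two components

def gauss(lam, flux, mu, sigma):
    """Gaussian emission line with integrated flux `flux`."""
    return flux * np.exp(-0.5 * ((lam - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def single(lam, flux, mu, sigma):
    return gauss(lam, flux, mu, sigma)

def doublet(lam, f1, f2, sigma):
    return gauss(lam, f1, LAM1, sigma) + gauss(lam, f2, LAM2, sigma)

def simulate(resolution, rng):
    """Noisy doublet observed at spectral resolution R = lambda/dlambda."""
    lam = np.arange(7430.0, 7480.0, 0.7)   # assumed pixel grid
    sigma = 7454.2 / resolution / 2.355    # instrumental FWHM -> Gaussian sigma
    f1 = TOTAL_FLUX * RATIO / (1.0 + RATIO)
    return lam, doublet(lam, f1, TOTAL_FLUX - f1, sigma) + rng.normal(0.0, NOISE, lam.size)

def chi2(model, p0, lam, spec):
    """Best-fit chi-squared of `model` against the noisy spectrum."""
    popt, _ = curve_fit(model, lam, spec, p0=p0, maxfev=20000)
    return float(np.sum(((spec - model(lam, *popt)) / NOISE) ** 2))

rng = np.random.default_rng(0)
for R in (2000, 3000, 5000):
    lam, spec = simulate(R, rng)
    s = 7454.2 / R / 2.355
    c1 = chi2(single, [TOTAL_FLUX, 7454.2, 2.0 * s], lam, spec)
    c2 = chi2(doublet, [TOTAL_FLUX / 2, TOTAL_FLUX / 2, s], lam, spec)
    print(f"R = {R}: chi2(one Gaussian) = {c1:.1f}, chi2(two Gaussians) = {c2:.1f}")
```

Averaging the two chi-squared values over many noise realizations, as a function of resolution, reproduces the kind of comparison described below.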
We fit one or two Gaussians on a doublet with a total flux of $10^{-16} \mathrm{erg\,s^{-1}\,cm^{-2}}$ (the lowest ‘single emission line’ flux observed) contaminated by a noise of $3 \times 10^{-17} \mathrm{erg\,s^{-1}\,cm^{-2}}$ (typical BOSS dark sky). The $\chi^{2}$ of the two fits are equal at low resolution and become disjoint in favor of the fit with 2 Gaussians for a resolution above 3000 at $7454.2\,\AA$ ([*i.e.*]{} $\left[\mathrm{O\textrm{\textsc{ii}}}\right]$ at redshift 1). Such an increase in resolution could help assign proper redshifts to ‘single emission line’ ELGs.

How / why redshift measurements went wrong
------------------------------------------

The difference in redshift measurement efficiency between the SDSS and CFHT-LS colour selections is mainly due to the difference in photometric depth. Using calibrations made by @Regnault_2009, it is possible to translate the colour selection criteria from CFHT-LS magnitudes to SDSS magnitudes. The colour difference can be as large as 1 magnitude, as the SDSS magnitude cut is close to the detection limit of the SDSS survey; see Fig. \[griComparison\], where SDSS *gri* colour-selected galaxies are represented with their CFHTLS magnitudes.

![*gri* selection based on colours from SDSS (black box) represented on CFHTLS magnitudes. The scatter is quite large: about half the targets would not have been selected if we used CFHTLS photometry. The ‘wanted’ objects are galaxies at $z>0.6$ or quasars and the ‘unwanted’ objects are the rest. []{data-label="griComparison"}](griComparison_both.pdf){width="80mm"}

How to improve ELG selection for future surveys
-----------------------------------------------

We suggest a few ways to increase the redshift measurement efficiency and reach the requirements set in the second section. For the *ugr* selection: lowering the *u-g* cut to 0.3 diminishes the contamination by low-redshift galaxies. Additional low-redshift galaxies can be removed from the selection through an inspection of the images.
Some of the low-redshift galaxies are quite extended, and one could mistake a high-redshift merger for an extended low-redshift galaxy. Visual inspection reduces the low-redshift share from 9 percent to 4 percent. The compact and extended selection on the CFHT data is very efficient at identifying quasars. There is also room for improving the spectroscopic redshift determination and thus re-classifying ‘single emission line’ galaxies: they represent a 12 percent share, among which 10 percent are at $z>0.6$. It seems reasonable to assume an efficiency improvement from 46 percent ELG($z>0.6$) + 14 percent quasars to 61 percent ELG($z>0.6$) + 14 percent quasars, i.e., a total efficiency of $\sim75$ percent. For the *gri* selection: improving the spectroscopic redshift determination pipeline can gain up to 5 percent in efficiency, thus increasing from 73 to 78 percent of ELG($z>0.6$). We have also optimized target selections for BAO sampling density using the four bands *ugri*. We find that the optimum selections have a redshift distribution close to the smooth combination of the *gri* and *ugr* selections discussed here; see Fig. \[ugriSelections\].

Conclusion
==========

We present an efficient emission-line galaxy selection that can provide a sample from which one can measure the BAO feature in the 2-point correlation function at $z>0.6$. With the photometry available today we can plan for a BAO measurement out to redshift 1 with the BOSS spectrograph. A representative set of photometric surveys that might be available for target selection in the near future over more than 2,000 square degrees is:

- The Kilo Degree Survey (KIDS)[^11] aims at observing 1500 square degrees in the *ugri* bands with $3\sigma$ depths of 24.8, 25.4, 25.2, 24.2 using the VST.
- The South Galactic Cap U-band Sky Survey[^12] (SCUSS) aims at a $5 \sigma$ limiting magnitude of $23.0$.
- The Dark Energy Survey (DES) aims at observing 5,000 square degrees in *griz* bands with 10 $\sigma$ depths of 24.6, 24.1, 24.3, 23.9.
This survey does not include the *u* band [@Abbott_2005; @2008MNRAS.386.1219B].

- The Large Synoptic Survey Telescope (LSST) [@Ivezic_2008] plans to observe 20,000 square degrees in *ugrizy* bands with 5 $\sigma$ depths of 26.1, 27.4, 27.5, 26.8, 26.1, 24.9.

Using such deeper photometric surveys and improved pipelines, it should be possible to probe BAO to redshift $z=1.2$ in the next 6 years, [*e.g.*]{} by the *eBOSS* experiment, and to $z=1.7$ in the next 10 years, [*e.g.*]{} by the PFS-SuMIRE or *BigBOSS* experiments.

Acknowledgements {#acknowledgements .unnumbered}
================

Johan Comparat especially thanks Carlo Schimd and Olivier Ilbert for insightful discussions about this observational program and its interpretation. We thank the SDSS-III/BOSS collaboration for granting us this ancillary program. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l’Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The BOSS French Participation Group is supported by Agence Nationale de la Recherche under grant ANR-08-BLAN-0222. ![Photometric redshift distributions obtained using the *ugri* bands. The dashed black lines are the low and high density goals mentioned in Section \[section:ELGs\_BAO\], $\bar{n}=10^{-4}$ and $3 \times10^{-4}\; h^3\mathrm{Mpc}^{-3}$. The dashed red line is the BOSS CMASS sample. 
The solid blue line is the distribution enhanced by the *ugri* selection; it has a projected sky density of $\sim340$ deg$^{-2}$. The solid red line is the *gri* selection (projected sky density $\sim350$ deg$^{-2}$). The solid green line is the *ugr* selection (projected sky density $\sim400$ deg$^{-2}$). It shows the possibility of making a selection able to sample $[0.6,1.2]$ for a BAO experiment.[]{data-label="ugriSelections"}](allSelectionsCompared2.pdf){width="88mm"}

Table of a subsample of observed galaxies at $z>0.6$ {#tble_appendix}
====================================================

\[gri\_table\]

![image](elgLoz.pdf){width="180mm"}

\[Em\_loz\]

![image](elgMez.pdf){width="180mm"}

\[Em\_midz\]

![image](elgHiz.pdf){width="180mm"}

\[Em\_hiz\]

![image](qso.pdf){width="180mm"}

\[qsos\]

\[lastpage\]

[^1]: http://sumire.ipmu.jp/en/

[^2]: http://sci.esa.int/euclid

[^3]: http://cesam.oamp.fr/vvdsproject/

[^4]: http://deep.berkeley.edu/index.html

[^5]: http://vipers.inaf.it/project.html

[^6]: http://www.cfht.hawaii.edu/Science/CFHLS/

[^7]: http://cadcwww.dao.nrc.ca/megapipe/docs/filters.html

[^8]: http://terapix.iap.fr/cplt/T0006-doc.pdf

[^9]: http://lamwws.oamp.fr/cosmowiki/RealisticSpectroPhotCat

[^10]: http://dr9.sdss3.org/

[^11]: http://kids.strw.leidenuniv.nl/

[^12]: http://batc.bao.ac.cn/Uband/
---
author:
- 'Grisha Perelman[^1]'
title: |
    The entropy formula for the Ricci flow\
    and its geometric applications
---

Introduction {#introduction .unnumbered}
============

The Ricci flow equation, introduced by Richard Hamilton \[H 1\], is the evolution equation $\frac{d}{dt}g_{ij}(t)=-2R_{ij}$ for a riemannian metric $g_{ij}(t).$ In his seminal paper, Hamilton proved that this equation has a unique solution for a short time for an arbitrary (smooth) metric on a closed manifold. The evolution equation for the metric tensor implies the evolution equation for the curvature tensor of the form $Rm_t=\triangle Rm +Q,$ where $Q$ is a certain quadratic expression of the curvatures. In particular, the scalar curvature $R$ satisfies $R_t=\triangle R+2|\mbox{Ric}|^2,$ so by the maximum principle its minimum is non-decreasing along the flow. By developing a maximum principle for tensors, Hamilton \[H 1,H 2\] proved that Ricci flow preserves the positivity of the Ricci tensor in dimension three and of the curvature operator in all dimensions; moreover, the eigenvalues of the Ricci tensor in dimension three and of the curvature operator in dimension four are getting pinched pointwise as the curvature is getting large. This observation allowed him to prove the convergence results: the evolving metrics (on a closed manifold) of positive Ricci curvature in dimension three, or positive curvature operator in dimension four, converge, modulo scaling, to metrics of constant positive curvature. Without assumptions on curvature the long time behavior of the metric evolving by Ricci flow may be more complicated. In particular, as $t$ approaches some finite time $T,$ the curvatures may become arbitrarily large in some region while staying bounded in its complement. In such a case, it is useful to look at the blow-up of the solution for $t$ close to $T$ at a point where curvature is large (the time is scaled with the same factor as the metric tensor).
Hamilton \[H 9\] proved a convergence theorem, which implies that a subsequence of such scalings smoothly converges (modulo diffeomorphisms) to a complete solution to the Ricci flow whenever the curvatures of the scaled metrics are uniformly bounded (on some time interval), and their injectivity radii at the origin are bounded away from zero; moreover, if the size of the scaled time interval goes to infinity, then the limit solution is ancient, that is, defined on a time interval of the form $(-\infty , T).$ In general it may be hard to analyze an arbitrary ancient solution. However, Ivey \[I\] and Hamilton \[H 4\] proved that in dimension three, at the points where scalar curvature is large, the negative part of the curvature tensor is small compared to the scalar curvature, and therefore the blow-up limits have necessarily nonnegative sectional curvature. On the other hand, Hamilton \[H 3\] discovered a remarkable property of solutions with nonnegative curvature operator in arbitrary dimension, called a differential Harnack inequality, which allows, in particular, to compare the curvatures of the solution at different points and different times. These results led Hamilton to certain conjectures on the structure of the blow-up limits in dimension three, see \[H 4,$ \S 26$\]; the present work confirms them. The most natural way of forming a singularity in finite time is by pinching an (almost) round cylindrical neck. In this case it is natural to make a surgery by cutting open the neck and gluing small caps to each of the boundaries, and then to continue running the Ricci flow. The exact procedure was described by Hamilton \[H 5\] in the case of four-manifolds, satisfying certain curvature assumptions.
He also expressed the hope that a similar procedure would work in the three dimensional case, without any a priori assumptions, and that after a finite number of surgeries, the Ricci flow would exist for all time $t\to\infty,$ and be nonsingular, in the sense that the normalized curvatures $\tilde{Rm}(x,t)=tRm(x,t)$ would stay bounded. The topology of such nonsingular solutions was described by Hamilton \[H 6\] to the extent sufficient to make sure that no counterexample to the Thurston geometrization conjecture can occur among them. Thus, the implementation of Hamilton's program would imply the geometrization conjecture for closed three-manifolds. In this paper we carry out some details of Hamilton's program. The more technically complicated arguments, related to the surgery, will be discussed elsewhere. We have not been able to confirm Hamilton’s hope that the solution that exists for all time $t\to\infty$ necessarily has bounded normalized curvature; still we are able to show that the region where this does not hold is locally collapsed with curvature bounded below; by our earlier (partly unpublished) work this is enough for topological conclusions. Our present work has also some applications to the Hamilton-Tian conjecture concerning $\mbox{K\"{a}hler-Ricci}$ flow on $\mbox{K\"{a}hler}$ manifolds with positive first Chern class; these will be discussed in a separate paper. The Ricci flow has also been discussed in quantum field theory, as an approximation to the renormalization group (RG) flow for the two-dimensional nonlinear $\sigma$-model, see \[Gaw,$\S 3$\] and references therein. While my background in quantum physics is insufficient to discuss this on a technical level, I would like to speculate on the Wilsonian picture of the RG flow.
In this picture, $t$ corresponds to the scale parameter; the larger is $t,$ the larger is the distance scale and the smaller is the energy scale; to compute something on a lower energy scale one has to average the contributions of the degrees of freedom, corresponding to the higher energy scale. In other words, decreasing $t$ should correspond to looking at our Space through a microscope with higher resolution, where Space is now described not by some (riemannian or any other) metric, but by a hierarchy of riemannian metrics, connected by the Ricci flow equation. Note that we have a paradox here: the regions that appear to be far from each other at a larger distance scale may become close at a smaller distance scale; moreover, if we allow Ricci flow through singularities, the regions that are in different connected components at a larger distance scale may become neighboring when viewed through a microscope. Anyway, this connection between the Ricci flow and the RG flow suggests that Ricci flow must be gradient-like; the present work confirms this expectation. The paper is organized as follows. In $\S 1$ we explain why Ricci flow can be regarded as a gradient flow. In $\S 2,3$ we prove that Ricci flow, considered as a dynamical system on the space of riemannian metrics modulo diffeomorphisms and scaling, has no nontrivial periodic orbits. The easy (and known) case of metrics with negative minimum of scalar curvature is treated in $\S 2;$ the other case is dealt with in $\S 3,$ using our main monotonicity formula (3.4) and the Gaussian logarithmic Sobolev inequality, due to L.Gross. In $\S 4$ we apply our monotonicity formula to prove that for a smooth solution on a finite time interval, the injectivity radius at each point is controlled by the curvatures at nearby points. This result removes the major stumbling block in Hamilton’s approach to geometrization.
In $\S 5$ we give an interpretation of our monotonicity formula in terms of the entropy for certain canonical ensemble. In $\S 6$ we try to interpret the formal expressions, arising in the study of the Ricci flow, as the natural geometric quantities for a certain Riemannian manifold of potentially infinite dimension. The Bishop-Gromov relative volume comparison theorem for this particular manifold can in turn be interpreted as another monotonicity formula for the Ricci flow. This formula is rigorously proved in $\S 7;$ it may be more useful than the first one in local considerations. In $\S 8$ it is applied to obtain the injectivity radius control under somewhat different assumptions than in $\S 4.$ In $\S 9$ we consider one more way to localize the original monotonicity formula, this time using the differential Harnack inequality for the solutions of the conjugate heat equation, in the spirit of Li-Yau and Hamilton. The technique of $\S 9$ and the logarithmic Sobolev inequality are then used in $\S 10$ to show that Ricci flow can not quickly turn an almost euclidean region into a very curved one, no matter what happens far away. The results of sections 1 through 10 require no dimensional or curvature restrictions, and are not immediately related to Hamilton's program for geometrization of three manifolds. The work on details of this program starts in $\S 11,$ where we describe the ancient solutions with nonnegative curvature that may occur as blow-up limits of finite time singularities (they must satisfy a certain noncollapsing assumption, which, in the interpretation of $\S 5,$ corresponds to having bounded entropy). Then in $\S 12$ we describe the regions of high curvature under the assumption of almost nonnegative curvature, which is guaranteed to hold by the Hamilton and Ivey result, mentioned above.
We also prove, under the same assumption, some results on the control of the curvatures forward and backward in time in terms of the curvature and volume at a given time in a given ball. Finally, in $\S 13$ we give a brief sketch of the proof of the geometrization conjecture. The subsections marked by \* contain historical remarks and references. See also \[Cao-C\] for a relatively recent survey on the Ricci flow.

Ricci flow as a gradient flow
=============================

Consider the functional ${\mathcal {F}}=\int_M{(R+|\nabla f|^2)e^{-f}dV}$ for a riemannian metric $g_{ij}$ and a function $f$ on a closed manifold $M$. Its first variation can be expressed as follows: $$\delta {\mathcal {F}}(v_{ij},h)=\int_M e^{-f}[-\triangle v+\nabla_i\nabla_jv_{ij}-R_{ij}v_{ij}$$ $$-v_{ij}\nabla_i f\nabla_j f+2<\nabla f,\nabla h>+(R+|\nabla f|^2)(v/2-h)]$$ $$=\int_M{e^{-f}[-v_{ij}(R_{ij}+\nabla_i\nabla_j f)+(v/2-h)(2\triangle f-|\nabla f|^2+R)]},$$where $\delta g_{ij}=v_{ij}$, $\delta f=h$, $v=g^{ij}v_{ij}$. Notice that $v/2-h$ vanishes identically iff the measure $dm=e^{-f}dV$ is kept fixed. Therefore, the symmetric tensor $-(R_{ij}+\nabla_i\nabla_j f)$ is the $L^2$ gradient of the functional ${\mathcal {F}}^m =\int_M{(R+|\nabla f|^2)dm}$, where now $f$ denotes $\log(dV/dm)$. Thus given a measure $m$, we may consider the gradient flow $(g_{ij})_t=-2(R_{ij}+\nabla_i\nabla_j f)$ for ${\mathcal {F}}^m$. For general $m$ this flow may not exist even for short time; however, when it exists, it is just the Ricci flow, modified by a diffeomorphism. The remarkable fact here is that different choices of $m$ lead to the same flow, up to a diffeomorphism; that is, the choice of $m$ is analogous to the choice of gauge.
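One standard way to see the last two statements (a sketch; this computation is not spelled out in the text): the Hessian term is half a Lie derivative, $2\nabla_i\nabla_j f=(\mathcal{L}_{\nabla f}\,g)_{ij},$ so the gradient flow reads $$(g_{ij})_t=-2(R_{ij}+\nabla_i\nabla_j f)=-2R_{ij}-(\mathcal{L}_{\nabla f}\,g)_{ij} .$$ Hence, if $\varphi_t$ is the one-parameter family of diffeomorphisms generated by $\nabla f(t),$ the pulled-back metrics $\tilde{g}(t)=\varphi_t^*g(t)$ satisfy $\tilde{g}_t=\varphi_t^*(g_t+\mathcal{L}_{\nabla f}\,g)=-2\mbox{Ric}(\tilde{g}),$ which is the unmodified Ricci flow; changing $m$ only changes $f,$ and hence the diffeomorphisms $\varphi_t,$ not $\tilde{g}(t).$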
[**1.2 Proposition.**]{} [*Suppose that the gradient flow for ${\mathcal {F}}^m$ exists for $t\in[0,T].$ Then at $t=0$ we have ${\mathcal {F}}^m\le \frac{n}{2T}\int_M{dm}.$*]{}

[*Proof.*]{} We may assume $\int_M{dm}=1.$ The evolution equations for the gradient flow of ${\mathcal {F}}^m$ are $$(g_{ij})_t=-2(R_{ij}+\nabla_i\nabla_j f) ,\ \ f_t=-R-\triangle f ,$$ and ${\mathcal {F}}^m$ satisfies $${\mathcal {F}}^m_t=2\int{|R_{ij}+\nabla_i\nabla_j f|^2 dm}$$ Modifying by an appropriate diffeomorphism, we get evolution equations $$(g_{ij})_t=-2R_{ij} , f_t=-\triangle f + |\nabla f|^2 - R ,$$ and retain (1.2) in the form $${\mathcal {F}}_t=2\int{|R_{ij}+\nabla_i\nabla_j f|^2e^{-f}dV}$$ Now we compute $${\mathcal {F}}_t\ge\frac{2}{n}\int{(R+\triangle f)^2e^{-f}dV}\ge\frac{2}{n}(\int{(R+\triangle f)e^{-f}dV})^2=\frac{2}{n}{\mathcal {F}}^2,$$ and the proposition follows: we may assume ${\mathcal {F}}(0)>0,$ and then integrating $(-{\mathcal {F}}^{-1})_t\ge\frac{2}{n}$ gives ${\mathcal {F}}(0)^{-1}\ge{\mathcal {F}}(T)^{-1}+\frac{2T}{n}>\frac{2T}{n}.$ [**1.3**]{} [*Remark.*]{} The functional ${\mathcal {F}}^m $ has a natural interpretation in terms of Bochner-Lichnerowicz formulas. The classical formulas of Bochner (for one-forms) and Lichnerowicz (for spinors) are $\nabla^*\nabla u_i=(d^*d+dd^*)u_i-R_{ij}u_j$ and $\nabla^*\nabla \psi=\delta^2\psi-1/4 R\psi.$ Here the operators $\nabla^*$ , $d^*$ are defined using the riemannian volume form; this volume form is also implicitly used in the definition of the Dirac operator $\delta$ via the requirement $\delta^*=\delta.$ A routine computation shows that if we substitute $dm=e^{-f}dV$ for $dV$ , we get modified Bochner-Lichnerowicz formulas $\nabla^{*m}\nabla u_i=(d^{*m}d+dd^{*m})u_i-R_{ij}^m u_j $ and $\nabla^{*m}\nabla\psi=(\delta^m)^2\psi-1/4R^m\psi,$ where $\delta^m\psi=\delta\psi-1/2(\nabla f)\cdot\psi$ , $R_{ij}^m=R_{ij}+\nabla_i\nabla_j f$ , $R^m=2\triangle f-|\nabla f|^2 +R .$ Note that $g^{ij}R_{ij}^m= R + \triangle f \ne R^m .$ However, we do have the Bianchi identity $\nabla_i^{*m}R_{ij}^m=\nabla_iR_{ij}^m-R_{ij}\nabla_i f=1/2\nabla_jR^m .$ Now ${\mathcal {F}}^m=\int_M{{R^m}dm}=\int_M{g^{ij}R_{ij}^m dm}.$ [**1.4\***]{} The
Ricci flow modified by a diffeomorphism was considered by DeTurck, who observed that by an appropriate choice of diffeomorphism one can turn the equation from weakly parabolic into strongly parabolic, thus considerably simplifying the proof of short time existence and uniqueness; a nice version of DeTurck trick can be found in \[H 4,$\S 6$\]. The functional ${\mathcal {F}}$ and its first variation formula can be found in the literature on string theory, where it describes the low energy effective action; the function $f$ is called the dilaton field; see \[D,$\S 6$\] for instance. The Ricci tensor $R_{ij}^m$ for a riemannian manifold with a smooth measure has been used by Bakry and Emery \[B-Em\]. See also a very recent paper \[Lott\].

No breathers theorem I
======================

A metric $g_{ij}(t)$ evolving by the Ricci flow is called a breather, if for some $t_1<t_2 $ and $\alpha>0$ the metrics $\alpha g_{ij}(t_1)$ and $g_{ij}(t_2)$ differ only by a diffeomorphism; the cases $\alpha=1 , \alpha<1 , \alpha>1 $ correspond to steady, shrinking and expanding breathers, respectively. Trivial breathers, for which the metrics $g_{ij}(t_1)$ and $g_{ij}(t_2)$ differ only by diffeomorphism and scaling for each pair of $t_1$ and $t_2$, are called Ricci solitons. (Thus, if one considers Ricci flow as a dynamical system on the space of riemannian metrics modulo diffeomorphism and scaling, then breathers and solitons correspond to periodic orbits and fixed points, respectively.) At each time the Ricci soliton metric satisfies an equation of the form $R_{ij}+cg_{ij}+\nabla_i b_j +\nabla_j b_i=0,$ where $c$ is a number and $b_i$ is a one-form; in particular, when $b_i=\frac{1}{2}\nabla_i a$ for some function $a$ on $M,$ we get a gradient Ricci soliton.
An important example of a gradient shrinking soliton is the Gaussian soliton, for which the metric $g_{ij}$ is just the euclidean metric on $\mathbb{R}^n$, $c=1$ and $a=-|x|^2/2.$ In this and the next section we use the gradient interpretation of the Ricci flow to rule out nontrivial breathers (on closed $M$). The argument in the steady case is pretty straightforward; the expanding case is a little bit more subtle, because our functional ${\mathcal {F}}$ is not scale invariant. The more difficult shrinking case is discussed in section 3. Define $\lambda(g_{ij})=\mbox{inf}\ {\mathcal {F}}(g_{ij},f) ,$ where infimum is taken over all smooth $f,$ satisfying $ \int_M{e^{-f}dV}=1 .$ Clearly, $\lambda(g_{ij})$ is just the lowest eigenvalue of the operator $-4\triangle+R.$ Then formula (1.4) implies that $\lambda(g_{ij}(t))$ is nondecreasing in $t,$ and moreover, if $\lambda(t_1)=\lambda(t_2),$ then for $t\in [t_1,t_2]$ we have $R_{ij}+\nabla_i\nabla_j f=0$ for $f$ which minimizes ${\mathcal {F}}.$ Thus a steady breather is necessarily a steady soliton. To deal with the expanding case consider a scale invariant version $\bar{\lambda}(g_{ij})=\lambda(g_{ij})V^{2/n}(g_{ij}).$ The nontrivial expanding breathers will be ruled out once we prove the following [**Claim**]{} [*$\bar{\lambda}$ is nondecreasing along the Ricci flow whenever it is nonpositive; moreover, the monotonicity is strict unless we are on a gradient soliton.*]{} (Indeed, on an expanding breather we would necessarily have $dV/dt>0$ for some $t {\in } [t_1,t_2].$ On the other hand, for every $t$, $-\frac{d}{dt}\mbox{log}V=\frac{1}{V}\int{RdV}\ge\lambda(t),$ so $\bar{\lambda}$ can not be nonnegative everywhere on $[t_1,t_2], $ and the claim applies.) 
[*Proof of the claim.*]{} $${\small\begin{array}{cc}d\bar{\lambda}(t)/dt\ge2V^{2/n}\int{|R_{ij}+\nabla_i\nabla_j f|^2e^{-f}dV}+\frac{2}{n}V^{(2-n)/n}\lambda\int{-RdV}\ge\\\\2V^{2/n}[\int{|R_{ij}+\nabla_i\nabla_j f-\frac{1}{n}(R+\triangle f)g_{ij}|^2e^{-f}dV}+\\\\\frac{1}{n}(\int{(R+\triangle f)^2e^{-f}dV}-(\int{(R+\triangle f)e^{-f}dV})^2)]\ge0,\end{array}}$$ where $f$ is the minimizer for ${\mathcal {F}}.$ The arguments above also show that there are no nontrivial (that is, with non-constant Ricci curvature) steady or expanding Ricci solitons (on closed $M$). Indeed, the equality case in the chain of inequalities above requires that $R+\triangle f$ be constant on $M$; on the other hand, the Euler-Lagrange equation for the minimizer $f$ is $2\triangle f-|\nabla f|^2+R=const.$ Thus, $\triangle f-|\nabla f|^2=const=0$, because $\int{(\triangle f-|\nabla f|^2)e^{-f}dV}=0.$ Therefore, $f$ is constant by the maximum principle. A similar but simpler proof of the results in this section follows immediately from \[H 6,$\S 2$\], where Hamilton checks that the minimum of $RV^{\frac{2}{n}}$ is nondecreasing whenever it is nonpositive, and monotonicity is strict unless the metric has constant Ricci curvature.
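For completeness, here is the substitution behind the statement that $\lambda(g_{ij})$ is the lowest eigenvalue of $-4\triangle+R$ (the text only asserts it): putting $\phi=e^{-f/2},$ so that $e^{-f}=\phi^2$ and $|\nabla f|^2e^{-f}=4|\nabla\phi|^2,$ the functional becomes $${\mathcal {F}}(g_{ij},f)=\int_M{(R+|\nabla f|^2)e^{-f}dV}=\int_M{(R\phi^2+4|\nabla\phi|^2)dV}=\int_M{\phi(-4\triangle\phi+R\phi)dV},$$ with the constraint $\int_M{\phi^2dV}=1;$ this is the Rayleigh quotient of $-4\triangle+R,$ whose infimum is the lowest eigenvalue.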
No breathers theorem II
========================

In order to handle the shrinking case when $\lambda>0,$ we need to replace our functional ${\mathcal {F}}$ by its generalization, which contains explicit insertions of the scale parameter, to be denoted by $\tau.$ Thus consider the functional $${{\cal W}}(g_{ij},f,\tau)=\int_M{[\tau(|\nabla f|^2+R)+f-n](4\pi\tau)^{-\frac{n}{2}}e^{-f}dV},$$ restricted to $f$ satisfying $$\int_M{(4\pi\tau)^{-\frac{n}{2}}e^{-f}dV}=1,$$ $\tau>0.$ Clearly ${{\cal W}}$ is invariant under simultaneous scaling of $\tau$ and $g_{ij}.$ The evolution equations, generalizing (1.3), are $$(g_{ij})_t=-2R_{ij} , f_t=-\triangle f+|\nabla f|^2-R+\frac{n}{2\tau} , \tau_t=-1$$ The evolution equation for $f$ can also be written as follows: $\Box^*u=0,$ where $u=(4\pi\tau)^{-\frac{n}{2}}e^{-f},$ and $\Box^*=-\partial/\partial t -\triangle +R$ is the conjugate heat operator. Now a routine computation gives $$d{{\cal W}}/dt=\int_M{2\tau|R_{ij}+\nabla_i\nabla_j f-\frac{1}{2\tau}g_{ij}|^2(4\pi\tau)^{-\frac{n}{2}}e^{-f}dV} .$$ Therefore, if we let $\mu(g_{ij},\tau)=\mbox{inf}\ {{\cal W}}(g_{ij},f,\tau)$ over smooth $f$ satisfying (3.2), and $\nu(g_{ij})=\mbox{inf}\ \mu(g_{ij},\tau) $ over all positive $\tau,$ then $\nu(g_{ij}(t))$ is nondecreasing along the Ricci flow. It is not hard to show that in the definition of $\mu$ there always exists a smooth minimizer $f$ (on a closed $M$). It is also clear that $\lim_{\tau\to\infty}\mu(g_{ij},\tau)=+\infty$ whenever the first eigenvalue of $-4\triangle +R$ is positive. Thus, our statement that there are no shrinking breathers other than gradient solitons is implied by the following

[**Claim**]{} [*For an arbitrary metric $g_{ij}$ on a closed manifold M, the function $\mu(g_{ij},\tau)$ is negative for small $\tau>0$ and tends to zero as $\tau$ tends to zero.*]{}

[*Proof of the Claim.
(sketch)*]{} Assume that $\bar{\tau}>0$ is so small that Ricci flow starting from $g_{ij}$ exists on $[0,\bar{\tau}].$ Let $u=(4\pi\tau)^{-\frac{n}{2}}e^{-f}$ be the solution of the conjugate heat equation, starting from a $\delta$-function at $t=\bar{\tau}, \tau(t)=\bar{\tau}-t.$ Then ${{\cal W}}(g_{ij}(t),f(t),\tau(t))$ tends to zero as $t$ tends to $\bar{\tau},$ and therefore $\mu(g_{ij},\bar{\tau})\le{{\cal W}}(g_{ij}(0),f(0),\tau(0))<0 $ by (3.4). Now let $\tau\to 0$ and assume that $f^{\tau}$ are the minimizers, such that $${{\cal W}}(\frac{1}{2}\tau^{-1}g_{ij},f^{\tau},\frac{1}{2}) ={{\cal W}}(g_{ij},f^{\tau},\tau)=\mu(g_{ij},\tau)\le c<0.$$ The metrics $\frac{1}{2}\tau^{-1}g_{ij} $ “converge” to the euclidean metric, and if we could extract a converging subsequence from $f^{\tau},$ we would get a function $f$ on $\mathbb{R}^n$, such that $\int_{\mathbb{R}^n}{(2\pi)^{-\frac{n}{2}}e^{-f}dx}=1$ and $$\int_{\mathbb{R}^n}{[\frac{1}{2}|\nabla f|^2+f-n](2\pi)^{-\frac{n}{2}}e^{-f}dx}<0$$ The latter inequality contradicts the Gaussian logarithmic Sobolev inequality, due to L.Gross. (To pass to its standard form, take $f=|x|^2/2-2\log\phi$ and integrate by parts.) This argument is not hard to make rigorous; the details are left to the reader.

[**3.2**]{} [*Remark.*]{} Our monotonicity formula (3.4) can in fact be used to prove a version of the logarithmic Sobolev inequality (with description of the equality cases) on shrinking Ricci solitons.
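Here is the substitution indicated in the proof, worked out (a routine computation, included for the reader's convenience): with $f=|x|^2/2-2\log\phi$ we have $(2\pi)^{-\frac{n}{2}}e^{-f}dx=\phi^2d\gamma,$ where $d\gamma$ is the standard Gaussian measure, and $\nabla f=x-2\nabla\phi/\phi.$ Expanding $|\nabla f|^2$ and integrating the cross term by parts against $d\gamma,$ $-\int{x\cdot\nabla(\phi^2)d\gamma}=\int{\phi^2(n-|x|^2)d\gamma},$ one finds $$\int_{\mathbb{R}^n}{[\frac{1}{2}|\nabla f|^2+f-n]\phi^2d\gamma}=\int_{\mathbb{R}^n}{(2|\nabla\phi|^2-\phi^2\log\phi^2)d\gamma},$$ so negativity of the left-hand side would contradict Gross's inequality $\int{\phi^2\log\phi^2d\gamma}\le2\int{|\nabla\phi|^2d\gamma}$ for $\int{\phi^2d\gamma}=1.$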
Indeed, assume that a metric $g_{ij}$ satisfies $R_{ij}-g_{ij}-\nabla_i b_j-\nabla_j b_i=0.$ Then under Ricci flow, $g_{ij}(t)$ is isometric to $(1-2t)g_{ij}(0),$   $ \mu(g_{ij}(t),\frac{1}{2}-t)=\mu(g_{ij}(0),\frac{1}{2}),$ and therefore the monotonicity formula (3.4) implies that the minimizer $f$ for $\mu(g_{ij},\frac{1}{2})$ satisfies $R_{ij}+\nabla_i\nabla_j f-g_{ij}=0.$ Of course, this argument requires the existence of a minimizer, and justification of the integration by parts; this is easy if $M$ is closed, but can also be done with more effort on some complete $M$, for instance when $M$ is the Gaussian soliton. [**3.3\***]{} The no breathers theorem in dimension three was proved by Ivey \[I\]; in fact, he also ruled out nontrivial Ricci solitons; his proof uses the almost nonnegative curvature estimate, mentioned in the introduction. Logarithmic Sobolev inequalities are a vast area of research; see \[G\] for a survey and bibliography up to the year 1992; the influence of the curvature was discussed by Bakry-Emery \[B-Em\]. In the context of geometric evolution equations, the logarithmic Sobolev inequality occurs in Ecker \[E 1\]. No local collapsing theorem I ============================= In this section we present an application of the monotonicity formula (3.4) to the analysis of singularities of the Ricci flow.
Let $g_{ij}(t)$ be a smooth solution to the Ricci flow $(g_{ij})_t=-2R_{ij}$ on $[0,T).$ We say that $g_{ij}(t)$ is locally collapsing at $T,$ if there is a sequence of times $t_k\to T$ and a sequence of metric balls $B_k=B(p_k,r_k)$ at times $t_k,$ such that $r_k^2/t_k$ is bounded, $|Rm|(g_{ij}(t_k))\le r_k^{-2}$ in $B_k$ and $r_k^{-n}Vol(B_k)\to 0.$ [**Theorem.**]{} [*If $M$ is closed and $T<\infty,$ then $g_{ij}(t)$ is not locally collapsing at $T.$*]{} [*Proof.*]{} Assume that there is a sequence of collapsing balls $B_k=B(p_k,r_k)$ at times $t_k\to T.$ Then we claim that $\mu(g_{ij}(t_k),r_k^2)\to -\infty.$ Indeed one can take $f_k(x)=-\log\phi(\mbox{dist}_{t_k}(x,p_k)r_k^{-1})+c_k,$ where $\phi$ is a function of one variable, equal 1 on $[0,1/2],$ decreasing on $[1/2,1],$ and very close to 0 on $[1,\infty),$ and $c_k$ is a constant; clearly $c_k\to -\infty$ as $r_k^{-n}Vol(B_k)\to 0.$ Therefore, applying the monotonicity formula (3.4), we get $\mu(g_{ij}(0),t_k+r_k^2)\to -\infty.$ However this is impossible, since $t_k+r_k^2$ is bounded. [**Definition**]{} [*We say that a metric $g_{ij}$ is $\kappa$-noncollapsed on the scale $\rho,$ if every metric ball $B$ of radius $r<\rho,$ which satisfies $|Rm|(x)\le r^{-2}$ for every $x\in B,$ has volume at least $\kappa r^n.$*]{} It is clear that a limit of $\kappa$-noncollapsed metrics on the scale $\rho$ is also $\kappa$-noncollapsed on the scale $\rho;$ it is also clear that $\alpha^2g_{ij}$ is $\kappa$-noncollapsed on the scale $\alpha\rho$ whenever $g_{ij}$ is $\kappa$-noncollapsed on the scale $\rho.$ The theorem above essentially says that given a metric $g_{ij}$ on a closed manifold $M$ and $T<\infty,$ one can find $\kappa=\kappa(g_{ij},T)>0,$ such that the solution $g_{ij}(t)$ to the Ricci flow starting at $g_{ij}$ is $\kappa$-noncollapsed on the scale $T^{1/2}$ for all $t\in [0,T),$ provided it exists on this interval. 
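The second of these scaling statements is a routine verification; the following computation (a supplementary check, using only the definition) makes it explicit.

```latex
\text{Let } \tilde{g}_{ij}=\alpha^2g_{ij}, \text{ and let } B \text{ be a ball of } \tilde{g}\text{-radius }
\tilde{r}=\alpha r<\alpha\rho \text{ with } |\widetilde{Rm}|\le\tilde{r}^{-2} \text{ on } B.
\text{Then } |Rm|=\alpha^2|\widetilde{Rm}|\le\alpha^2\tilde{r}^{-2}=r^{-2} \text{ on } B,
\text{ so } \widetilde{Vol}(B)=\alpha^n Vol(B)\ge\alpha^n\kappa r^n=\kappa\tilde{r}^n.
```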
Therefore, using the convergence theorem of Hamilton, we obtain the following [**Corollary.**]{} [*Let $g_{ij}(t), t\in [0,T)$ be a solution to the Ricci flow on a closed manifold $M,$ $T<\infty.$ Assume that for some sequences $t_k\to T, p_k\in M$ and some constant $C$ we have $Q_k=|Rm|(p_k,t_k)\to\infty$ and $ |Rm|(x,t)\le CQ_k,$ whenever $t<t_k.$ Then (a subsequence of) the scalings of $g_{ij}(t_k)$ at $p_k$ with factors $Q_k$ converges to a complete ancient solution to the Ricci flow, which is $\kappa$-noncollapsed on all scales for some $\kappa>0.$*]{} A statistical analogy ===================== In this section we show that the functional ${{\cal W}},$ introduced in section 3, is in a sense analogous to minus entropy. [**5.1**]{} Recall that the partition function for the canonical ensemble at temperature $\beta^{-1}$ is given by $Z=\int{exp(-\beta E)d\omega(E)},$ where $\omega(E)$ is a “density of states” measure, which does not depend on $\beta.$ Then one computes the average energy $<E>=-\frac{\partial}{\partial\beta}\log Z,$ the entropy $S=\beta<E>+\log Z,$ and the fluctuation $\sigma=<(E-<E>)^2>=\frac{\partial^2}{(\partial\beta)^2}\log Z.$ Now fix a closed manifold $M$ with a probability measure $m$, and suppose that our system is described by a metric $g_{ij}(\tau),$ which depends on the temperature $\tau$ according to the equation $(g_{ij})_\tau=2(R_{ij}+\nabla_i\nabla_j f),$ where $dm=udV, u=(4\pi\tau)^{-\frac{n}{2}}e^{-f},$ and the partition function is given by $\log Z=\int{(-f+\frac{n}{2})dm}.$ (We do not discuss here what assumptions on $g_{ij}$ guarantee that the corresponding “density of states” measure can be found.) Then we compute $$<E>=-\tau^2\int_M{(R+|\nabla f|^2-\frac{n}{2\tau})dm},$$ $$S=-\int_M{(\tau(R+|\nabla f|^2)+f-n)dm},$$ $$\sigma=2\tau^4\int_M{|R_{ij}+\nabla_i\nabla_j f-\frac{1}{2\tau}g_{ij}|^2dm}$$ Alternatively, we could prescribe the evolution equations by replacing the $t$-derivatives by minus $\tau$-derivatives in (3.3), and get
the same formulas for $Z, <E>, S, \sigma,$ with $dm$ replaced by $udV.$ Clearly, $\sigma$ is nonnegative; it vanishes only on a gradient shrinking soliton. $<E>$ is nonnegative as well, whenever the flow exists for all sufficiently small $\tau>0$ (by proposition 1.2). Furthermore, if (a) $u$ tends to a $\delta$-function as $\tau\to 0,$ or (b) $u$ is a limit of a sequence of functions $u_i,$ such that each $u_i$ tends to a $\delta$-function as $\tau\to\tau_i>0,$ and $\tau_i\to 0,$ then $S$ is also nonnegative. In case (a) all the quantities $<E>, S, \sigma$ tend to zero as $\tau\to 0,$ while in case (b), which may be interesting if $g_{ij}(\tau)$ goes singular at $\tau=0,$ the entropy $S$ may tend to a positive limit. If the flow is defined for all sufficiently large $\tau$ (that is, we have an ancient solution to the Ricci flow, in Hamilton’s terminology), we may be interested in the behavior of the entropy $S$ as $\tau\to\infty.$ A natural question is whether we have a gradient shrinking soliton whenever $S$ stays bounded. [**5.2**]{} [*Remark.*]{} Heuristically, this statistical analogy is related to the description of the renormalization group flow, mentioned in the introduction: in the latter one obtains various quantities by averaging over higher energy states, whereas in the former those states are suppressed by the exponential factor. [**5.3\***]{} An entropy formula for the Ricci flow in dimension two was found by Chow \[C\]; there seems to be no relation between his formula and ours. The interplay of statistical physics and (pseudo)-riemannian geometry occurs in the subject of Black Hole Thermodynamics, developed by Hawking et al. Unfortunately, this subject is beyond my understanding at the moment. 
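As a consistency check on the formulas of 5.1 (a supplementary computation, not used in the sequel), the thermodynamic identity $S=\beta<E>+\log Z$ with $\beta=\tau^{-1}$ can be verified directly, since $m$ is a probability measure:

```latex
\beta<E>+\log Z
=-\tau\int_M{(R+|\nabla f|^2-\frac{n}{2\tau})dm}+\int_M{(-f+\frac{n}{2})dm}
=-\int_M{(\tau(R+|\nabla f|^2)+f-n)dm}=S.
```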
Riemannian formalism in potentially infinite dimensions ======================================================== When one is talking of the canonical ensemble, one is usually considering an embedding of the system of interest into a much larger standard system of fixed temperature (thermostat). In this section we attempt to describe such an embedding using the formalism of Riemannian geometry. [**6.1**]{} Consider the manifold $\tilde{M}=M\times\mathbb{S}^N\times\mathbb{R}^+$ with the following metric: $$\tilde{g}_{ij}=g_{ij}, \tilde{g}_{\alpha\beta}=\tau g_{\alpha\beta}, \tilde{g}_{00}=\frac{N}{2\tau}+R, \tilde{g}_{i\alpha}=\tilde{g}_{i 0}=\tilde{g}_{\alpha 0}=0,$$ where $i,j$ denote coordinate indices on the $M$ factor, $\alpha,\beta$ denote those on the $\mathbb{S}^N$ factor, and the coordinate $\tau$ on $\mathbb{R}^+$ has index $0$; $g_{ij}$ evolves with $\tau$ by the backward Ricci flow $(g_{ij})_\tau=2R_{ij},$ $g_{\alpha\beta}$ is the metric on $\mathbb{S}^N$ of constant curvature $\frac{1}{2N}.$ It turns out that the components of the curvature tensor of this metric coincide (modulo $N^{-1}$) with the components of the matrix Harnack expression (and its traces), discovered by Hamilton \[H 3\]. One can also compute that all the components of the Ricci tensor are equal to zero (mod $N^{-1}$).
The heat equation and the conjugate heat equation on $M$ can be interpreted via Laplace equation on $\tilde{M}$ for functions and volume forms respectively: $u$ satisfies the heat equation on $M$ iff $\tilde{u}$ (the extension of $u$ to $\tilde{M}$ constant along the $\mathbb{S}^N$ fibres) satisfies $\tilde{\triangle}\tilde{u}=0\ \mbox{mod}\ N^{-1};$ similarly, $u$ satisfies the conjugate heat equation on $M$ iff $\tilde{u}^*=\tau^{-\frac{N-1}{2}}\tilde{u}$ satisfies $\tilde{\triangle}\tilde{u}^*=0\ \ \mbox{mod}\ N^{-1}$ on $\tilde{M}.$ [**6.2**]{} Starting from $\tilde{g},$ we can also construct a metric $g^m$ on $\tilde{M},$ isometric to $\tilde{g}$ (mod $N^{-1}$), which corresponds to the backward $m$-preserving Ricci flow ( given by equations (1.1) with $t$-derivatives replaced by minus $\tau$-derivatives, $dm=(4\pi\tau)^{-\frac{n}{2}}e^{-f}dV$). To achieve this, first apply to $\tilde{g}$ a (small) diffeomorphism, mapping each point $(x^{i},y^{\alpha},\tau)$ into $ (x^{i},y^{\alpha},\tau(1-\frac{2f}{N}));$ we would get a metric $\tilde{g}^m,$ with components (mod $N^{-1}$) $$\tilde{g}^m_{ij}=\tilde{g}_{ij}, \tilde{g}^m_{\alpha\beta}=(1-\frac{2f}{N})\tilde{g}_{\alpha\beta}, \tilde{g}^m_{00}=\tilde{g}_{00}-2f_{\tau}-\frac{f}{\tau}, \tilde{g}^m_{i 0}=-\nabla_i f, \tilde{g}^m_{i \alpha}=\tilde{g}^m_{\alpha 0}=0;$$ then apply a horizontal (that is, along the $M$ factor) diffeomorphism to get $g^m$ satisfying $(g^m_{ij})_\tau=2(R_{ij}+\nabla_i\nabla_j f);$ the other components of $g^m$ become (mod $N^{-1}$) $$g^m_{\alpha\beta}=(1-\frac{2f}{N})\tilde{g}_{\alpha\beta}, g^m_{00}=\tilde{g}^m_{00}-|\nabla f|^2=\frac{1}{\tau}(\frac{N}{2}-[\tau(2\triangle f-|\nabla f|^2 +R)+f-n]),$$ $$g^m_{i 0}=g^m_{\alpha 0}=g^m_{i \alpha}=0$$ Note that the hypersurface $\tau=$const in the metric $g^m$ has the volume form $\tau^{N/2}e^{-f}$ times the canonical form on $M$ and $\mathbb{S}^N,$ and the scalar curvature of this hypersurface is $\frac{1}{\tau}(\frac{N}{2}+\tau(2\triangle 
f-|\nabla f|^2+R)+f)$ mod $N^{-1}.$ Thus the entropy $S$ multiplied by the inverse temperature $\beta$ is essentially minus the total scalar curvature of this hypersurface. [**6.3**]{} Now we return to the metric $\tilde{g}$ and try to use its Ricci-flatness by interpreting the Bishop-Gromov relative volume comparison theorem. Consider a metric ball in $(\tilde{M},\tilde{g})$ centered at some point $p$ where $\tau=0.$ Then clearly the shortest geodesic between $p$ and an arbitrary point $q$ is always orthogonal to the $\mathbb{S}^N$ fibre. The length of such a curve $\gamma(\tau)$ can be computed as $$\int_0^{\tau(q)}{\sqrt{\frac{N}{2\tau}+R+|\dot{\gamma}_M(\tau)|^2}d\tau}$$ $$=\sqrt{2N\tau(q)}+\frac{1}{\sqrt{2N}}\int_0^{\tau(q)}{\sqrt{\tau}(R+|\dot{\gamma}_M(\tau)|^2)d\tau}+ O(N^{-\frac{3}{2}})$$ Thus a shortest geodesic should minimize $\mathcal{L}(\gamma)=\int_0^{\tau(q)}{\sqrt{\tau}(R+|\dot{\gamma}_M(\tau)|^2)d\tau},$ an expression defined entirely in terms of $M$. Let $L(q_M)$ denote the corresponding infimum. It follows that a metric sphere in $\tilde{M}$ of radius $\sqrt{2N\tau(q)}$ centered at $p$ is $O(N^{-1})$-close to the hypersurface $\tau=\tau(q),$ and its volume can be computed as $V(\mathbb{S}^N)\int_M{(\sqrt{\tau(q)}-\frac{1}{2N}L(x)+O(N^{-2}))^Ndx},$ so the ratio of this volume to $\sqrt{2N\tau(q)}^{N+n}$ is just constant times $N^{-\frac{n}{2}}$ times $$\int_M{\tau(q)^{-\frac{n}{2}}\mbox{exp}(-\frac{1}{2\sqrt{\tau(q)}}L(x))dx}+O(N^{-1})$$ The computation suggests that this integral, which we will call the reduced volume and denote by $\tilde{V}(\tau(q)),$ should be increasing as $\tau$ decreases. A rigorous proof of this monotonicity is given in the next section.
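The exponential factor in the reduced volume comes from the standard limit $(1-x/N)^N\to e^{-x};$ explicitly (a supplementary step, writing $L=L(x)$ and $\tau=\tau(q)$),

```latex
(\sqrt{\tau}-\frac{1}{2N}L+O(N^{-2}))^N
=\tau^{\frac{N}{2}}(1-\frac{L}{2N\sqrt{\tau}}+O(N^{-2}))^N
\to\tau^{\frac{N}{2}}\mbox{exp}(-\frac{1}{2\sqrt{\tau}}L)\ \ \ (N\to\infty),
```

which is consistent with the reduced distance $l=\frac{1}{2\sqrt{\tau}}L$ of section 7.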
[**6.4\***]{} The first geometric interpretation of Hamilton’s Harnack expressions was found by Chow and Chu \[C-Chu 1,2\]; they construct a potentially degenerate riemannian metric on $M\times \mathbb{R},$ which potentially satisfies the Ricci soliton equation; our construction is, in a certain sense, dual to theirs. Our formula for the reduced volume resembles the expression in Huisken’s monotonicity formula for the mean curvature flow \[Hu\]; however, in our case the monotonicity is in the opposite direction. A comparison geometry approach to the Ricci flow ================================================= [**7.1**]{} In this section we consider an evolving metric $(g_{ij})_{\tau}=2R_{ij}$ on a manifold $M;$ we assume that either $M$ is closed, or $g_{ij}(\tau)$ are complete and have uniformly bounded curvatures. To each curve $\gamma(\tau), 0<\tau_1\le\tau\le\tau_2,$ we associate its $\mathcal{L}$-length $$\mathcal{L}(\gamma)=\int_{\tau_1}^{\tau_2}{\sqrt{\tau}(R(\gamma(\tau))+|\dot{\gamma}(\tau)|^2)d\tau}$$ (of course, $ R(\gamma(\tau))$ and $|\dot{\gamma}(\tau)|^2$ are computed using $g_{ij}(\tau)$.) Let $X(\tau)=\dot{\gamma}(\tau),$ and let $Y(\tau)$ be any vector field along $\gamma(\tau).$ Then the first variation formula can be derived as follows: $$\delta_Y(\mathcal{L}) =$$ $$\int_{\tau_1}^{\tau_2}{\sqrt{\tau}(<Y,\nabla R>+2<\nabla_Y X,X>)d\tau} =\int_{\tau_1}^{\tau_2}{\sqrt{\tau}(<Y,\nabla R>+2<\nabla_X Y,X>)d\tau}$$ $$=\int_{\tau_1}^{\tau_2}{\sqrt{\tau}(<Y,\nabla R>+2\frac{d}{d\tau}<Y,X>-2<Y,\nabla_X X>-4\mbox{Ric}(Y,X))d\tau}$$ $$=\left.2\sqrt{\tau}<X,Y>\right|_{\tau_1}^{\tau_2}+\int_{\tau_1}^{\tau_2}{\sqrt{\tau}<Y,\nabla R-2\nabla_X X-4\mbox{Ric}(X,\cdot)-\frac{1}{\tau}X>d\tau}$$ Thus $\mathcal{L}$-geodesics must satisfy $$\nabla_X X-\frac{1}{2}\nabla R+ \frac{1}{2\tau}X+2\mbox{Ric}(X,\cdot)=0$$ Given two points $p,q$ and $\tau_2>\tau_1>0,$ we can always find an $\mathcal{L}$-shortest curve $\gamma(\tau), \tau\in[\tau_1,\tau_2]$ between them, and every
such $\mathcal{L}$-shortest curve is an $\mathcal{L}$-geodesic. It is easy to extend this to the case $\tau_1=0;$ in this case $\sqrt{\tau}X(\tau)$ has a limit as $\tau\to 0.$ From now on we fix $p$ and $\tau_1=0$ and denote by $L(q,\bar{\tau})$ the $\mathcal{L}$-length of the $\mathcal{L}$-shortest curve $\gamma(\tau), 0\le\tau\le\bar{\tau},$ connecting $p$ and $q.$ In the computations below we pretend that shortest $\mathcal{L}$-geodesics between $p$ and $q$ are unique for all pairs $(q,\bar{\tau}); $ if this is not the case, the inequalities that we obtain are still valid when understood in the barrier sense, or in the sense of distributions. The first variation formula (7.1) implies that $\nabla L(q,\bar{\tau})=2\sqrt{\bar{\tau}}X(\bar{\tau}),$ so that $|\nabla L|^2=4\bar{\tau}|X|^2=-4\bar{\tau}R+4\bar{\tau}(R+|X|^2).$ We can also compute $$L_{\bar{\tau}}(q,\bar{\tau})=\sqrt{\bar{\tau}}(R+|X|^2)-<X,\nabla L>=2\sqrt{\bar{\tau}}R-\sqrt{\bar{\tau}}(R+|X|^2)$$ To evaluate $R+|X|^2$ we compute (using (7.2)) $$\frac{d}{d\tau}(R(\gamma(\tau))+|X(\tau)|^2)=R_{\tau}+<\nabla R,X>+2<\nabla_X X,X>+2\mbox{Ric}(X,X)$$ $$=R_{\tau}+\frac{1}{\tau}R+2<\nabla R,X>-2\mbox{Ric}(X,X)-\frac{1}{\tau}(R+|X|^2)$$ $$=-H(X)-\frac{1}{\tau}(R+|X|^2),$$ where $H(X)$ is Hamilton’s expression for the trace Harnack inequality (with $t=-\tau$). Hence, $$\bar{\tau}^{\frac{3}{2}}(R+|X|^2)(\bar{\tau})=-K+\frac{1}{2}L(q,\bar{\tau}),$$ where $K=K(\gamma,\bar{\tau}) $ denotes the integral $\int_0^{\bar{\tau}}{\tau^{\frac{3}{2}}H(X)d\tau},$ which we’ll encounter a few times below.
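For the reader's convenience, here is the integration behind the last identity (a supplementary computation): by the formula for $\frac{d}{d\tau}(R+|X|^2)$ above,

```latex
\frac{d}{d\tau}(\tau^{\frac{3}{2}}(R+|X|^2))
=\frac{3}{2}\sqrt{\tau}(R+|X|^2)+\tau^{\frac{3}{2}}\frac{d}{d\tau}(R+|X|^2)
=\frac{1}{2}\sqrt{\tau}(R+|X|^2)-\tau^{\frac{3}{2}}H(X),
```

and integrating from $0$ to $\bar{\tau}$ along the $\mathcal{L}$-shortest geodesic, where $\int_0^{\bar{\tau}}{\sqrt{\tau}(R+|X|^2)d\tau}=L(q,\bar{\tau}),$ gives $\bar{\tau}^{\frac{3}{2}}(R+|X|^2)(\bar{\tau})=\frac{1}{2}L(q,\bar{\tau})-K.$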
Thus we get $$L_{\bar{\tau}}=2\sqrt{\bar{\tau}}R-\frac{1}{2\bar{\tau}}L+\frac{1}{\bar{\tau}}K$$ $$|\nabla L|^2=-4\bar{\tau}R+\frac{2}{\sqrt{\bar{\tau}}}L-\frac{4}{\sqrt{\bar{\tau}}}K$$ Finally we need to estimate the second variation of $L.$ We compute $${\delta}^2_Y(\mathcal{L})=\int_0^{\bar{\tau}}{\sqrt{\tau}(Y\cdot Y\cdot R+2<\nabla_Y \nabla_Y X,X>+2|\nabla_Y X|^2)d\tau}$$ $$=\int_0^{\bar{\tau}}{\sqrt{\tau}(Y\cdot Y\cdot R+2<\nabla_X \nabla_Y Y,X>+2<R(Y,X)Y,X>+2|\nabla_X Y|^2)d\tau}$$ Now $$\frac{d}{d\tau}<\nabla_Y Y,X>=<\nabla_X \nabla_Y Y,X>+<\nabla_Y Y,\nabla_X X>+2Y\cdot\mbox{Ric}(Y,X)-X\cdot\mbox{Ric}(Y,Y),$$ so, if $Y(0)=0$ then $${\delta}^2_Y(\mathcal{L})=2<\nabla_Y Y,X>\sqrt{\bar{\tau}}+$$ $$\begin{gathered} \int_0^{\bar{\tau}}\sqrt{\tau}(\nabla_Y \nabla_Y R+2<R(Y,X)Y,X>+2|\nabla_X Y|^2\\+2\nabla_X\mbox{Ric}(Y,Y)-4\nabla_Y\mbox{Ric}(Y,X))d\tau,\end{gathered}$$ where we discarded the scalar product of $-2\nabla_Y Y$ with the left hand side of (7.2). Now fix the value of $Y$ at $\tau=\bar{\tau}$, assuming $|Y(\bar{\tau})|=1,$ and construct $Y$ on $[0,\bar{\tau}] $ by solving the ODE $$\nabla_X Y=-\mbox{Ric}(Y,\cdot)+\frac{1}{2\tau}Y$$ We compute $$\frac{d}{d\tau}<Y,Y>=2\mbox{Ric}(Y,Y)+2<\nabla_X Y,Y>=\frac{1}{\tau}<Y,Y>,$$ so $|Y(\tau)|^2=\frac{\tau}{\bar{\tau}},$ and in particular, $Y(0)=0.$ Making a substitution into (7.7), we get $$\mbox{Hess}_L(Y,Y)\le$$ $$\int_0^{\bar{\tau}}\sqrt{\tau}(\nabla_Y \nabla_Y R+2<R(Y,X)Y,X>+2\nabla_X\mbox{Ric}(Y,Y)-4\nabla_Y\mbox{Ric}(Y,X)$$ $$+2|\mbox{Ric}(Y,\cdot)|^2- \frac{2}{\tau}\mbox{Ric}(Y,Y)+\frac{1}{2\tau\bar{\tau}})d\tau$$ To put this in a more convenient form, observe that $$\frac{d}{d\tau}\mbox{Ric}(Y(\tau),Y(\tau))=\mbox{Ric}_{\tau}(Y,Y)+\nabla_X\mbox{Ric}(Y,Y)+ 2\mbox{Ric}(\nabla_X Y,Y)$$ $$=\mbox{Ric}_{\tau}(Y,Y)+\nabla_X\mbox{Ric}(Y,Y)+\frac{1}{\tau}\mbox{Ric}(Y,Y)- 2|\mbox{Ric}(Y,\cdot)|^2,$$ so $$\mbox{Hess}_L(Y,Y)\le\frac{1}{\sqrt{\bar{\tau}}}-2\sqrt{\bar{\tau}}\mbox{Ric}(Y,Y)-\int_0^{\bar{\tau}}
{\sqrt{\tau}H(X,Y)d\tau},$$ where $$H(X,Y)=-\nabla_Y \nabla_Y R-2<R(Y,X)Y,X>-4(\nabla_X\mbox{Ric}(Y,Y)-\nabla_Y\mbox{Ric}(Y,X))$$ $$-2\mbox{Ric}_{\tau}(Y,Y)+ 2|\mbox{Ric}(Y,\cdot)|^2-\frac{1}{\tau}\mbox{Ric}(Y,Y)$$ is Hamilton’s expression for the matrix Harnack inequality (with $t=-\tau$). Thus $$\triangle L\le-2\sqrt{\bar{\tau}}R+\frac{n}{\sqrt{\bar{\tau}}}-\frac{1}{\bar{\tau}}K$$ A field $Y(\tau)$ along an $\mathcal{L}$-geodesic $\gamma(\tau)$ is called $\mathcal{L}$-Jacobi, if it is the derivative of a variation of $\gamma$ among $\mathcal{L}$-geodesics. For an $\mathcal{L}$-Jacobi field $Y$ with $|Y(\bar{\tau})|=1$ we have $$\frac{d}{d\tau}|Y|^2=2\mbox{Ric}(Y,Y)+2<\nabla_X Y,Y>=2\mbox{Ric}(Y,Y)+2<\nabla_Y X,Y>$$ $$=2\mbox{Ric}(Y,Y)+\frac{1}{\sqrt{\bar{\tau}}}\mbox{Hess}_L(Y,Y)\le\frac{1}{\bar{\tau}}- \frac{1}{\sqrt{\bar{\tau}}}\int_0^{\bar{\tau}}{\tau^{\frac{1}{2}}H(X,\tilde{Y})d\tau},$$ where $\tilde{Y}$ is obtained by solving ODE (7.8) with initial data $\tilde{Y}(\bar{\tau})=Y(\bar{\tau}).$ Moreover, the equality in (7.11) holds only if $\tilde{Y}$ is $\mathcal{L}$-Jacobi and hence $\frac{d}{d\tau}|Y|^2=2\mbox{Ric}(Y,Y)+\frac{1}{\sqrt{\bar{\tau}}}\mbox{Hess}_L(Y,Y) =\frac{1}{\bar{\tau}}.$ Now we can deduce an estimate for the jacobian $J$ of the $\mathcal{L}$-exponential map, given by $\mathcal{L}\mbox{exp}_X(\bar{\tau})=\gamma(\bar{\tau}),$ where $\gamma(\tau)$ is the $\mathcal{L}$-geodesic, starting at $p$ and having $X$ as the limit of $\sqrt{\tau}\dot{\gamma}(\tau)$ as $\tau\to 0.$ We obtain $$\frac{d}{d\tau}\mbox{log}J(\tau)\le\frac{n}{2\bar{\tau}}-\frac{1}{2}\bar{\tau}^{-\frac{3}{2}}K,$$ with equality only if $2\mbox{Ric}+\frac{1}{\sqrt{\bar{\tau}}}\mbox{Hess}_L=\frac{1}{\bar{\tau}}g.$ Let $l(q,\tau)=\frac{1}{2\sqrt{\tau}}L(q,\tau)$ be the reduced distance.
Then along an $\mathcal{L}$-geodesic $\gamma(\tau)$ we have (by (7.4)) $$\frac{d}{d\tau}l(\tau)=-\frac{1}{2\bar{\tau}}l+\frac{1}{2}(R+|X|^2) =-\frac{1}{2}\bar{\tau}^{-\frac{3}{2}}K,$$ so (7.12) implies that $\tau^{-\frac{n}{2}}\mbox{exp}(-l(\tau))J(\tau)$ is nonincreasing in $\tau$ along $\gamma$, and monotonicity is strict unless we are on a gradient shrinking soliton. Integrating over $M$, we get monotonicity of the reduced volume function $\tilde{V}(\tau)=\int_M{\tau^{-\frac{n}{2}}\mbox{exp}(-l(q,\tau))dq}.$ (Alternatively, one could obtain the same monotonicity by integrating the differential inequality $$l_{\bar{\tau}}-\triangle l+|\nabla l|^2-R+\frac{n}{2\bar{\tau}}\ge0,$$ which follows immediately from (7.5), (7.6) and (7.10). Note also a useful inequality $$2\triangle l-|\nabla l|^2 +R+\frac{l-n}{\bar{\tau}}\le 0,$$ which follows from (7.6), (7.10).) On the other hand, if we denote $\bar{L}(q,\tau)=2\sqrt{\tau}L(q,\tau),$ then from (7.5), (7.10) we obtain $$\bar{L}_{\bar{\tau}}+\triangle \bar{L}\le2n$$ Therefore, the minimum of $\bar{L}(\cdot,\bar{\tau})-2n\bar{\tau}$ is nonincreasing, so in particular, the minimum of $l(\cdot,\bar{\tau})$ does not exceed $\frac{n}{2}$ for each $\bar{\tau}>0.$ (The lower bound for $l$ is much easier to obtain since the evolution equation $R_{\tau}=-\triangle R-2|\mbox{Ric}|^2$ implies $R(\cdot,\tau)\ge-\frac{n}{2(\tau_0-\tau)},$ whenever the flow exists for $\tau\in[0,\tau_0].$) [**7.2**]{} If the metrics $g_{ij}(\tau) $ have nonnegative curvature operator, then Hamilton’s differential Harnack inequalities hold, and one can say more about the behavior of $l.$ Indeed, in this case, if the solution is defined for $\tau\in[0,\tau_0],$ then $ H(X,Y)\ge-\mbox{Ric}(Y,Y)(\frac{1}{\tau}+\frac{1}{\tau_0-\tau})\ge -R(\frac{1}{\tau}+\frac{1}{\tau_0-\tau})|Y|^2$ and $H(X)\ge-R(\frac{1}{\tau}+\frac{1}{\tau_0-\tau}).$ Therefore, whenever $\tau$ is bounded away from $\tau_0$ (say, $\tau\le(1-c)\tau_0, c>0$), we get (using (7.6), (7.11))
$$|\nabla l|^2+R\le\frac{Cl}{\tau},$$ and for $\mathcal{L}$-Jacobi fields $Y$ $$\frac{d}{d\tau}\mbox{log}|Y|^2\le\frac{1}{\tau}(Cl+1)$$ [**7.3**]{} As the first application of the comparison inequalities above, let us give an alternative proof of a weakened version of the no local collapsing theorem 4.1. Namely, rather than assuming $|Rm|(x,t_k)\le r_k^{-2}$ for $x\in B_k,$ we require $|Rm|(x,t)\le r_k^{-2}$ whenever $x\in B_k, t_k-r_k^2\le t\le t_k.$ Then the proof can go as follows: let $\tau_k(t)=t_k-t, p=p_k,\epsilon_k=r_k^{-1}Vol(B_k)^{\frac{1}{n}}.$ We claim that $\tilde{V}_k(\epsilon_k r_k^2) < 3\epsilon_k^{\frac{n}{2}}$ when $k$ is large. Indeed, using the $\mathcal{L}$-exponential map we can integrate over $T_pM$ rather than $M;$ the vectors in $T_pM$ of length at most $\frac{1}{2}\epsilon_k^{-\frac{1}{2}}$ give rise to $\mathcal{L}$-geodesics, which can not escape from $B_k$ in time $\epsilon_k r_k^2,$ so their contribution to the reduced volume does not exceed $2\epsilon_k^{\frac{n}{2}};$ on the other hand, the contribution of the longer vectors does not exceed $\mbox{exp}(-\frac{1}{2}\epsilon_k^{-\frac{1}{2}})$ by the jacobian comparison theorem. However, $\tilde{V}_k(t_k)$ (that is, at $t=0$) stays bounded away from zero. Indeed, since $\mbox{min}\ l_k(\cdot,t_k-\frac{1}{2}T)\le\frac{n}{2},$ we can pick a point $q_k,$ where it is attained, and obtain a universal upper bound on $l_k(\cdot, t_k)$ by considering only curves $\gamma$ with $\gamma(t_k-\frac{1}{2}T)=q_k,$ and using the fact that all geometric quantities in $g_{ij}(t)$ are uniformly bounded when $t\in[0,\frac{1}{2}T].$ Since the monotonicity of the reduced volume requires $\tilde{V}_k(t_k)\le\tilde{V}_k(\epsilon_k r_k^2),$ this is a contradiction. A similar argument shows that the statement of the corollary in 4.2 can be strengthened by adding another property of the ancient solution, obtained as a blow-up limit.
Namely, we may claim that if, say, this solution is defined for $t\in(-\infty,0),$ then for any point $p$ and any $t_0>0,$ the reduced volume function $\tilde{V}(\tau),$ constructed using $p$ and $\tau(t)=t_0-t,$ is bounded below by $\kappa.$ [**7.4\***]{} The computations in this section are just natural modifications of those in the classical variational theory of geodesics that can be found in any textbook on Riemannian geometry; an even closer reference is \[L-Y\], where they use “length”, associated to a linear parabolic equation, which is pretty much the same as in our case. No local collapsing theorem II ============================== [**8.1**]{} Let us first formalize the notion of local collapsing, that was used in 7.3. [**Definition.**]{} [*A solution to the Ricci flow $(g_{ij})_t=-2R_{ij}$ is said to be $\kappa$-collapsed at $(x_0,t_0)$ on the scale $r>0$ if $|Rm|(x,t)\le r^{-2}$ for all $(x,t)$ satisfying $\mbox{dist}_{t_0}(x,x_0)<r$ and $t_0-r^2\le t\le t_0,$ and the volume of the metric ball $B(x_0,r)$ at time $t_0$ is less than $\kappa r^n.$*]{} [**8.2**]{} [**Theorem.**]{} [*For any $A>0$ there exists $\kappa=\kappa(A)>0$ with the following property.
If $g_{ij}(t)$ is a smooth solution to the Ricci flow $(g_{ij})_t=-2R_{ij}, 0\le t\le r_0^2,$ which has $|Rm|(x,t)\le r_0^{-2}$ for all $(x,t),$ satisfying $\mbox{dist}_0(x,x_0)<r_0,$ and the volume of the metric ball $B(x_0,r_0)$ at time zero is at least $A^{-1}r_0^n,$ then $g_{ij}(t) $ can not be $\kappa$-collapsed on the scales less than $r_0$ at a point $(x,r_0^2)$ with $\mbox{dist}_{r_0^2}(x,x_0)\le Ar_0.$*]{} [*Proof.*]{} By scaling we may assume $r_0=1;$ we may also assume $\mbox{dist}_1(x,x_0)=A.$ Let us apply the constructions of 7.1 choosing $p=x, \tau(t)=1-t.$ Arguing as in 7.3, we see that if our solution is collapsed at $x$ on the scale $r\le 1,$ then the reduced volume $\tilde{V}(r^2)$ must be very small; on the other hand, $\tilde{V}(1)$ can not be small unless $\mbox{min}\ l(x,\frac{1}{2})$ over $x$ satisfying $\mbox{dist}_{\frac{1}{2}}(x,x_0)\le\frac{1}{10}$ is large. Thus all we need is to estimate $l,$ or equivalently $\bar{L},$ in that ball. Recall that $\bar{L}$ satisfies the differential inequality (7.15). In order to use it efficiently in a maximum principle argument, we need first to check the following simple assertion. [**8.3 Lemma.**]{} *Suppose we have a solution to the Ricci flow $(g_{ij})_t=-2R_{ij}.$* \(a) Suppose $\mbox{Ric}(x,t_0)\le (n-1)K$ when $ \mbox{dist}_{t_0}(x,x_0)<r_0.$ Then the distance function $d(x,t)=\mbox{dist}_t(x,x_0)$ satisfies at $t=t_0$ outside $B(x_0,r_0)$ the differential inequality $$d_t-\triangle d\ge -(n-1)(\frac{2}{3}Kr_0+r_0^{-1})$$ (the inequality must be understood in the barrier sense, when necessary) (b) (cf. 
\[H 4,$\S 17$\]) [*Suppose $\mbox{Ric}(x,t_0)\le (n-1)K$ when $\mbox{dist}_{t_0}(x,x_0)<r_0,$ or $\mbox{dist}_{t_0}(x,x_1)<r_0.$ Then $$\frac{d}{dt}\mbox{dist}_t(x_0,x_1)\ge -2(n-1)(\frac{2}{3}Kr_0+r_0^{-1}) \ \mbox{at}\ \ \ t=t_0$$*]{} [*Proof of Lemma.*]{} (a) Clearly, $d_t(x)=\int_{\gamma}{-\mbox{Ric}(X,X)},$ where $\gamma$ is the shortest geodesic between $x$ and $x_0$ and $X$ is its unit tangent vector. On the other hand, $\triangle d\le \sum_{k=1}^{n-1}{s_{Y_k}''(\gamma)},$ where $Y_k$ are vector fields along $\gamma,$ vanishing at $x_0$ and forming an orthonormal basis at $x$ when complemented by $X,$ and $s_{Y_k}''(\gamma)$ denotes the second variation along $Y_k$ of the length of $\gamma.$ Take $Y_k$ to be parallel between $x$ and $x_1,$ and linear between $x_1$ and $x_0,$ where $d(x_1,t_0)=r_0.$ Then $$\triangle d\le\sum_{k=1}^{n-1}s_{Y_k}''(\gamma)=\int_{r_0}^{d(x,t_0)}{-\mbox{Ric}(X,X)ds}+\int_0^{r_0} {(\frac{s^2}{r_0^2}(-\mbox{Ric}(X,X))+\frac{n-1}{r_0^2})ds}$$ $$=\int_{\gamma}{-\mbox{Ric}(X,X)} +\int_0^{r_0}{(\mbox{Ric}(X,X)(1-\frac{s^2}{r_0^2})+\frac{n-1}{r_0^2})ds}\le d_t+(n-1)(\frac{2}{3}Kr_0+r_0^{-1})$$ The proof of (b) is similar. Continuing the proof of the theorem, apply the maximum principle to the function $ h(y,t)=\phi(d(y,t)-A(2t-1))(\bar{L}(y,1-t)+2n+1),$ where $d(y,t)=\mbox{dist}_t(x,x_0),$ and $\phi$ is a function of one variable, equal $1$ on $(-\infty,\frac{1}{20}),$ and rapidly increasing to infinity on $(\frac{1}{20},\frac{1}{10}),$ in such a way that $$2(\phi ')^2/\phi-\phi ''\ge (2A+100n)\phi '-C(A)\phi,$$ for some constant $C(A)<\infty.$ Note that $\bar{L}+2n+1\ge 1$ for $t\ge\frac{1}{2}$ by the remark in the very end of 7.1.
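Parenthetically, the constant in the lemma comes from an elementary estimate (a supplementary check, using $\mbox{Ric}(X,X)\le(n-1)K$ on the segment $[0,r_0]$):

```latex
\int_0^{r_0}{(\mbox{Ric}(X,X)(1-\frac{s^2}{r_0^2})+\frac{n-1}{r_0^2})ds}
\le(n-1)K\int_0^{r_0}{(1-\frac{s^2}{r_0^2})ds}+\frac{n-1}{r_0}
=(n-1)(\frac{2}{3}Kr_0+r_0^{-1}).
```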
Clearly, $\mbox{min}\ h(y,1)\le h(x,1)=2n+1.$ On the other hand, $\mbox{min}\ h(y,\frac{1}{2}) $ is achieved for some $y$ satisfying $d(y,\frac{1}{2})\le \frac{1}{10}.$ Now we compute $$\Box h=(\bar{L}+2n+1)(-\phi ''+(d_t-\triangle d-2A)\phi ')-2<\nabla\phi,\nabla\bar{L}>+(\bar{L}_t-\triangle\bar{L})\phi$$ $$\nabla h=(\bar{L}+2n+1)\nabla\phi+\phi\nabla\bar{L}$$ At a minimum point of $h$ we have $\nabla h=0,$ so (8.2) becomes $$\Box h=(\bar{L}+2n+1)(-\phi ''+(d_t-\triangle d-2A)\phi '+2(\phi ')^2/\phi)+(\bar{L}_t-\triangle\bar{L})\phi$$ Now since $d(y,t)\ge\frac{1}{20}$ whenever $\phi '\neq 0,$ and since $\mbox{Ric}\le n-1$ in $B(x_0,\frac{1}{20}),$ we can apply our lemma (a) to get $d_t-\triangle d\ge-100(n-1)$ on the set where $\phi '\neq 0.$ Thus, using (8.1) and (7.15), we get $$\Box h\ge-(\bar{L}+2n+1)C(A)\phi-2n\phi\ge-(2n+C(A))h$$ This implies that $\mbox{min}\ h$ can not decrease too fast, and we get the required estimate. Differential Harnack inequality for solutions of the conjugate heat equation ============================================================================ [**9.1 Proposition.**]{} Let $g_{ij}(t)$ be a solution to the Ricci flow $(g_{ij})_t=-2R_{ij}, 0\le t\le T,$ and let $u=(4\pi(T-t))^{-\frac{n}{2}}e^{-f}$ satisfy the conjugate heat equation $\Box^*u=-u_t-\triangle u+Ru=0.$ Then $v=[(T-t)(2\triangle f-|\nabla f|^2+R)+f-n]u$ satisfies $$\Box^*v=-2(T-t)|R_{ij}+\nabla_i\nabla_j f-\frac{1}{2(T-t)}g_{ij}|^2u$$ [*Proof.*]{} Routine computation. Clearly, this proposition immediately implies the monotonicity formula (3.4); its advantage over (3.4) shows up when one has to work locally.
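As an illustration of the proposition (a supplementary example, not used below), consider the Gaussian soliton: flat $\mathbb{R}^n$ with $u$ the standard heat kernel concentrated at the origin at $t=T,$ so that $f=\frac{|x|^2}{4(T-t)}.$ Then

```latex
\triangle f=\frac{n}{2(T-t)},\qquad |\nabla f|^2=\frac{|x|^2}{4(T-t)^2},\qquad R=0,
```

so $v=[(T-t)(2\triangle f-|\nabla f|^2)+f-n]u=[n-\frac{|x|^2}{4(T-t)}+\frac{|x|^2}{4(T-t)}-n]u=0,$ in agreement with the proposition, since here $R_{ij}+\nabla_i\nabla_j f=\frac{1}{2(T-t)}g_{ij}.$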
[**9.2 Corollary.**]{} [*Under the same assumptions, on a closed manifold $M$, or whenever the application of the maximum principle can be justified, $\mbox{min}\ v/u$ is nondecreasing in $t.$*]{} [**9.3 Corollary.**]{} [*Under the same assumptions, if $u$ tends to a $\delta$-function as $t\to T,$ then $v\le 0 $ for all $t<T.$*]{} [*Proof.*]{} If $h$ satisfies the ordinary heat equation $h_t=\triangle h$ with respect to the evolving metric $g_{ij}(t),$ then we have $\frac{d}{dt}\int{hu}=0$ and $\frac{d}{dt}\int{hv}\ge 0.$ Thus we only need to check that for everywhere positive $h$ the limit of $\int{hv}$ as $t\to T$ is nonpositive. But it is easy to see that this limit is in fact zero. [**9.4 Corollary.**]{} [*Under assumptions of the previous corollary, for any smooth curve $\gamma(t)$ in $M$ holds $$-\frac{d}{dt}f(\gamma(t),t)\le\frac{1}{2}(R(\gamma(t),t)+|\dot{\gamma}(t)|^2) -\frac{1}{2(T-t)}f(\gamma(t),t)$$*]{} [*Proof.*]{} From the evolution equation $f_t=-\triangle f+|\nabla f|^2-R+\frac{n}{2(T-t)}$ and $v\le 0$ we get $f_t+\frac{1}{2}R-\frac{1}{2}|\nabla f|^2-\frac{f}{2(T-t)}\ge 0.$ On the other hand, $-\frac{d}{dt}f(\gamma(t),t)=-f_t-<\nabla f,\dot{\gamma}(t)>\le -f_t+\frac{1}{2}|\nabla f|^2+\frac{1}{2}|\dot{\gamma}|^2.$ Summing these two inequalities, we get (9.2). [**9.5 Corollary.**]{} [*If under assumptions of the previous corollary, $p$ is the point where the limit $\delta$-function is concentrated, then $f(q,t)\le l(q,T-t),$ where $l$ is the reduced distance, defined in 7.1, using $p$ and $\tau(t)=T-t.$*]{} [*Proof.*]{} Use (7.13) in the form $\Box^*\mbox{exp}(-l)\le 0.$ [**9.6**]{} [*Remark.*]{} Ricci flow can be characterized among all other evolution equations by the infinitesimal behavior of the fundamental solutions of the conjugate heat equation.
Namely, suppose we have a riemannian metric $g_{ij}(t)$ evolving with time according to an equation $(g_{ij})_t=A_{ij}(t).$ Then we have the heat operator $\Box=\frac{\partial}{\partial t}-\triangle$ and its conjugate $\Box^*=-\frac{\partial}{\partial t}-\triangle-\frac{1}{2}A,$ so that $\frac{d}{dt}\int{uv}=\int{((\Box u)v-u(\Box^* v))}.$ (Here $A=g^{ij}A_{ij}.$) Consider the fundamental solution $u=(-4\pi t)^{-\frac{n}{2}}e^{-f}$ for $\Box^*,$ starting as $\delta$-function at some point $(p,0).$ Then for general $A_{ij}$ the function $(\Box\bar{ f}+\frac{\bar{f}}{t})(q,t),$ where $\bar{f}=f-\int{fu},$ is of the order $O(1)$ for $(q,t)$ near $(p,0).$ The Ricci flow $A_{ij}=-2R_{ij}$ is characterized by the condition $(\Box\bar{ f}+\frac{\bar{f}}{t})(q,t)=o(1);$ in fact, it is $O(|pq|^2+|t|)$ in this case. [**9.7\***]{} Inequalities of the type of (9.2) are known as differential Harnack inequalities; such an inequality was proved by Li and Yau \[L-Y\] for the solutions of linear parabolic equations on riemannian manifolds. Hamilton \[H 7,8\] used differential Harnack inequalities for the solutions of backward heat equation on a manifold to prove monotonicity formulas for certain parabolic flows. A local monotonicity formula for mean curvature flow making use of solutions of backward heat equation was obtained by Ecker \[E 2\]. Pseudolocality theorem ======================= [**10.1 Theorem.**]{} For every $\alpha>0$ there exist $\delta>0,\epsilon>0$ with the following property. Suppose we have a smooth solution to the Ricci flow $(g_{ij})_t=-2R_{ij}, 0\le t\le (\epsilon r_0)^2,$ and assume that at $t=0$ we have $R(x)\ge -r_0^{-2}$ and $Vol(\partial\Omega)^n\ge(1-\delta)c_nVol(\Omega)^{n-1}$ for any $x,\Omega\subset B(x_0,r_0),$ where $c_n$ is the euclidean isoperimetric constant.
Then we have an estimate $|Rm|(x,t)\le \alpha t^{-1}+({\epsilon}r_0)^{-2}$ whenever $0<t\le (\epsilon r_0)^2, d(x,t)=\mbox{dist}_t(x,x_0)<\epsilon r_0.$ Thus, under the Ricci flow, the almost singular regions (where curvature is large) can not instantly significantly influence the almost euclidean regions. Or, using the interpretation via renormalization group flow, if a region looks trivial (almost euclidean) on a higher energy scale, then it can not suddenly become highly nontrivial on a slightly lower energy scale. [*Proof.*]{} It is an argument by contradiction. The idea is to pick a point $(\bar{x},\bar{t})$ not far from $(x_0,0)$ and consider the solution $u$ to the conjugate heat equation, starting as a $\delta$-function at $(\bar{x},\bar{t}),$ and the corresponding nonpositive function $v$ as in 9.3. If the curvatures at $(\bar{x},\bar{t})$ are not small compared to $\bar{t}^{-1}$ and are larger than at nearby points, then one can show that $\int{v}$ at time $t$ is bounded away from zero for (small) time intervals $\bar{t}-t$ of the order of $|Rm|^{-1}(\bar{x},\bar{t}).$ By monotonicity we conclude that $\int{v}$ is bounded away from zero at $t=0.$ In fact, using (9.1) and an appropriate cut-off function, we can show that at $t=0$ already the integral of $v$ over $B(x_0,r)$ is bounded away from zero, whereas the integral of $u$ over this ball is close to $1,$ where $r$ can be made as small as we like compared to $r_0.$ Now, using the control over the scalar curvature and isoperimetric constant in $B(x_0,r_0),$ we can obtain a contradiction to the logarithmic Sobolev inequality. Now let us go into details. 
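Before going into the details, we recall (this restatement is ours, in the normalization suggested by the scaling at the end of the proof) the form of the Gaussian logarithmic Sobolev inequality that the final step of the argument appeals to: whenever $u=(2\pi)^{-\frac{n}{2}}e^{-f}$ is a nonnegative function on $\mathbb{R}^n$ with $\int u=1,$

$$\int_{\mathbb{R}^n}\Big[\frac{1}{2}|\nabla f|^2+f-n\Big]u\ \ge\ 0,$$

with equality exactly for the standard Gaussian $f=\frac{|x|^2}{2}.$ Thus a sequence of almost euclidean balls carrying functions $u$ with $\int{[-\frac{1}{2}|\nabla f|^2-f+n]u}$ bounded away from zero by a positive constant is impossible.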
By scaling assume that $r_0=1.$ We may also assume that $\alpha$ is small, say $\alpha<\frac{1}{100n}.$ From now on we fix $\alpha$ and denote by $M_{\alpha}$ the set of pairs $(x,t),$ such that $|Rm|(x,t)\ge\alpha t^{-1}.$ [**Claim 1.**]{} [*For any $A>0,$ if $g_{ij}(t)$ solves the Ricci flow equation on $0\le t\le \epsilon^2, A\epsilon<\frac{1}{100n},$ and $|Rm|(x,t)>\alpha t^{-1}+\epsilon^{-2}$ for some $(x,t),$ satisfying $0\le t\le \epsilon^2, d(x,t)<\epsilon,$ then one can find $({\bar{x}},{\bar{t}})\in M_{{\alpha}},$ with $0<{\bar{t}}\le {\epsilon}^2, d({\bar{x}},{\bar{t}})<(2A+1){\epsilon},$ such that $$|Rm|(x,t)\le 4|Rm|({\bar{x}},{\bar{t}}),$$ whenever $$(x,t)\in M_{{\alpha}}, 0<t\le {\bar{t}}, d(x,t)\le d({\bar{x}},{\bar{t}})+A|Rm|^{-\frac{1}{2}}({\bar{x}},{\bar{t}})$$*]{} [*Proof of Claim 1.*]{} We construct $({\bar{x}},{\bar{t}})$ as a limit of a (finite) sequence $(x_k,t_k),$ defined in the following way. Let $(x_1,t_1)$ be an arbitrary point, satisfying $0<t_1\le {\epsilon}^2, d(x_1,t_1)<{\epsilon}, |Rm|(x_1,t_1)\ge {\alpha}t_1^{-1}+{\epsilon}^{-2}.$ Now if $(x_k,t_k)$ is already constructed, and if it can not be taken for $({\bar{x}},{\bar{t}}),$ because there is some $(x,t)$ satisfying (10.2), but not (10.1), then take any such $(x,t)$ for $(x_{k+1},t_{k+1}).$ Clearly, the sequence, constructed in such a way, satisfies $|Rm|(x_k,t_k)\ge 4^{k-1}|Rm|(x_1,t_1)\ge 4^{k-1}{\epsilon}^{-2},$ and therefore, $d(x_k,t_k)\le (2A+1){\epsilon}.$ Since the solution is smooth, the sequence is finite, and its last element fits. [**Claim 2.**]{} [*For $({\bar{x}},{\bar{t}}),$ constructed above, (10.1) holds whenever $${\bar{t}}-\frac{1}{2}{\alpha}Q^{-1}\le t\le {\bar{t}}, \mbox{dist}_{{\bar{t}}}(x,{\bar{x}})\le \frac{1}{10}AQ^{-\frac{1}{2}},$$ where $Q=|Rm|({\bar{x}},{\bar{t}}).$*]{} [*Proof of Claim 2.*]{} We only need to show that if $(x,t)$ satisfies (10.3), then it must satisfy (10.1) or (10.2). 
Since $({\bar{x}},{\bar{t}})\in M_{{\alpha}},$ we have $Q\ge {\alpha}{\bar{t}}^{-1},$ so ${\bar{t}}-\frac{1}{2}{\alpha}Q^{-1}\ge \frac{1}{2}{\bar{t}}.$ Hence, if $(x,t)$ does not satisfy (10.1), it definitely belongs to $M_{{\alpha}}.$ Now by the triangle inequality, $d(x,{\bar{t}})\le d({\bar{x}},{\bar{t}})+\frac{1}{10}AQ^{-\frac{1}{2}}.$ On the other hand, using lemma 8.3(b) we see that, as $t$ decreases from ${\bar{t}}$ to ${\bar{t}}-\frac{1}{2}{\alpha}Q^{-1},$ the point $x$ can not escape from the ball of radius $d({\bar{x}},{\bar{t}})+AQ^{-\frac{1}{2}}$ centered at $x_0.$ Continuing the proof of the theorem, and arguing by contradiction, take sequences ${\epsilon}\to 0,{\delta}\to 0$ and solutions $g_{ij}(t),$ violating the statement; by reducing ${\epsilon},$ we’ll assume that $$|Rm|(x,t)\le \alpha t^{-1}+2{\epsilon}^{-2}\ \mbox{whenever}\ 0\le t\le {\epsilon}^2\ \mbox{and}\ d(x,t)\le {\epsilon}$$ Take $A=\frac{1}{100n{\epsilon}}\to\infty ,$ construct $({\bar{x}},{\bar{t}}),$ and consider solutions $u=(4\pi({\bar{t}}-t))^{-\frac{n}{2}}e^{-f}$ of the conjugate heat equation, starting from ${\delta}$-functions at $({\bar{x}},{\bar{t}}),$ and corresponding nonpositive functions $v.$ [**Claim 3.**]{}[*As ${\epsilon},{\delta}\to 0,$ one can find times $\tilde{t}\in[{\bar{t}}-\frac{1}{2}{\alpha}Q^{-1},{\bar{t}}],$ such that the integral $\int_B{v}$ stays bounded away from zero, where $B$ is the ball at time $\tilde{t}$ of radius $\sqrt{{\bar{t}}-\tilde{t}}$ centered at ${\bar{x}}.$* ]{} [*Proof of Claim 3(sketch).*]{} The statement is invariant under scaling, so we can try to take a limit of scalings of $g_{ij}(t)$ at points $({\bar{x}},{\bar{t}})$ with factors $Q.$ If the injectivity radii of the scaled metrics at $({\bar{x}},{\bar{t}})$ are bounded away from zero, then a smooth limit exists, it is complete and has $|Rm|({\bar{x}},{\bar{t}})=1$ and $|Rm|(x,t)\le 4$ when ${\bar{t}}-\frac{1}{2}{\alpha}\le t\le {\bar{t}}.$ It is not hard to show that the fundamental 
solutions $u$ of the conjugate heat equation converge to such a solution on the limit manifold. But on the limit manifold, $\int_B{v}$ can not be zero for $\tilde{t}={\bar{t}}-\frac{1}{2}{\alpha},$ since the evolution equation (9.1) would imply in this case that the limit is a gradient shrinking soliton, and this is incompatible with $|Rm|({\bar{x}},{\bar{t}})=1.$ If the injectivity radii of the scaled metrics tend to zero, then we can change the scaling factor to make the scaled metrics converge to a flat manifold with finite injectivity radius; in this case it is not hard to choose $\tilde{t}$ in such a way that $\int_B{v}\to -\infty.$ The positive lower bound for $-\int_B{v}$ will be denoted by ${\beta}.$ Our next goal is to construct an appropriate cut-off function. We choose it in the form $h(y,t)=\phi(\frac{\tilde{d}(y,t)}{10A{\epsilon}}),$ where $\tilde{d}(y,t)=d(y,t)+200n\sqrt{t},$ and $\phi$ is a smooth function of one variable, equal to one on $(-\infty,1]$ and decreasing to zero on $[1,2].$ Clearly, $h$ vanishes at $t=0$ outside $B(x_0,20A{\epsilon});$ on the other hand, it is equal to one near $({\bar{x}},{\bar{t}}).$ Now $\Box h=\frac{1}{10A{\epsilon}}(d_t-\triangle d+\frac{100n}{\sqrt{t}})\phi '-\frac{1}{(10A{\epsilon})^2}\phi ''.$ Note that $d_t-\triangle d+\frac{100n}{\sqrt{t}}\ge 0$ on the set where $\phi '\neq 0;$ this follows from lemma 8.3(a) and our assumption (10.4). We may also choose $\phi$ so that $\phi ''\ge -10\phi, (\phi ')^2\le 10\phi.$ Now we can compute $(\int_M{hu})_t=\int_M{(\Box h)u}\le \frac{1}{(A{\epsilon})^2},$ so $\int_M{hu}\mid_{t=0}\ge \int_M{hu}\mid_{t={\bar{t}}}-\frac{{\bar{t}}}{(A{\epsilon})^2}\ge 1-A^{-2}.$ Also, by (9.1), $(\int_M{-hv})_t\le \int_M{-(\Box h)v}\le \frac{1}{(A{\epsilon})^2}\int_M{-hv},$ so by Claim 3, $ -\int_M{hv}\mid_{t=0}\ge {\beta}\mbox{exp}(-\frac{{\bar{t}}}{(A{\epsilon})^2})\ge {\beta}(1-A^{-2}).$ From now on we'll work at $t=0$ only. 
Let $\tilde{u}=hu$ and correspondingly $\tilde{f}=f-\mbox{log}h.$ Then $${\beta}(1-A^{-2})\le -\int_M{hv}=\int_M{[(-2\triangle f+|\nabla f|^2-R){\bar{t}}-f+n]hu}$$ $$=\int_M{[-{\bar{t}}|\nabla \tilde{f}|^2-\tilde{f}+n]\tilde{u}}+ \int_M{[{\bar{t}}(|\nabla h|^2/h-Rh)-h\mbox{log}h]u}$$ $$\le\int_M{[-{\bar{t}}|\nabla\tilde{f}|^2-\tilde{f}+n]\tilde{u}}+A^{-2}+100{\epsilon}^2$$ (Note that $\int_M{-uh \log h}$ does not exceed the integral of $u$ over $B(x_0,20A{\epsilon})\backslash B(x_0,10A{\epsilon}),$ and $\int_{B(x_0,10A{\epsilon})}{u}\ge \int_M{\bar{h}u}\ge 1-A^{-2},$ where $\bar{h}=\phi(\frac{\tilde{d}}{5A{\epsilon}}).$) Now scaling the metric by the factor $\frac{1}{2}{\bar{t}}^{-1}$ and sending ${\epsilon},{\delta}$ to zero, we get a sequence of metric balls with radii going to infinity, and a sequence of compactly supported nonnegative functions $u=(2\pi)^{-\frac{n}{2}}e^{-f}$ with $\int{u}\to 1$ and $\int{[-\frac{1}{2}|\nabla f|^2-f+n]u}$ bounded away from zero by a positive constant. We also have isoperimetric inequalities with the constants tending to the euclidean one. This setup is in conflict with the Gaussian logarithmic Sobolev inequality, as can be seen by using spherical symmetrization. [**10.2 Corollary**]{} (from the proof) [*Under the same assumptions, we also have at time $t, 0<t\le ({\epsilon}r_0)^2,$ an estimate $Vol B(x,\sqrt{t})\ge c\sqrt{t}^n$ for $x\in B(x_0,{\epsilon}r_0),$ where $c=c(n)$ is a universal constant.*]{} [**10.3 Theorem.**]{} [*There exist ${\epsilon},{\delta}> 0$ with the following property. 
Suppose $g_{ij}(t)$ is a smooth solution to the Ricci flow on $[0,({\epsilon}r_0)^2],$ and assume that at $t=0$ we have $|Rm|(x)\le r_0^{-2}$ in $B(x_0,r_0),$ and $VolB(x_0,r_0)\ge (1-{\delta})\omega_n r_0^n,$ where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n.$ Then the estimate $|Rm|(x,t)\le ({\epsilon}r_0)^{-2}$ holds whenever $0\le t\le ({\epsilon}r_0)^2, \mbox{dist}_t(x,x_0)<{\epsilon}r_0.$*]{} The proof is a slight modification of the proof of theorem 10.1, and is left to the reader. A natural question is whether the assumption on the volume of the ball is superfluous. [**10.4 Corollary**]{} (from 8.2, 10.1, 10.2) [*There exist ${\epsilon},{\delta}> 0$ and for any $A>0$ there exists $\kappa(A)>0$ with the following property. If $g_{ij}(t)$ is a smooth solution to the Ricci flow on $[0,({\epsilon}r_0)^2],$ such that at $t=0$ we have $R(x)\ge -r_0^{-2}, Vol(\partial\Omega)^n\ge (1-{\delta})c_nVol(\Omega)^{n-1}$ for any $x,\Omega\subset B(x_0,r_0),$ and $(x,t)$ satisfies $A^{-1}({\epsilon}r_0)^2\le t\le ({\epsilon}r_0)^2, \mbox{dist}_t(x,x_0)\le Ar_0,$ then $g_{ij}(t)$ can not be $\kappa$-collapsed at $(x,t)$ on the scales less than $\sqrt{t}.$*]{} [**10.5**]{} [*Remark.*]{} It is straightforward to get from 10.1 a version of the Cheeger diffeo finiteness theorem for manifolds, satisfying our assumptions on scalar curvature and isoperimetric constant on each ball of some fixed radius $r_0>0.$ In particular, these assumptions are satisfied (for some controllably smaller $r_0$), if we assume a lower bound for $\mbox{Ric}$ and an almost euclidean lower bound for the volume of the balls of radius $r_0$ (this follows from the Lévy-Gromov isoperimetric inequality); thus we get one of the results of Cheeger and Colding \[Ch-Co\] under somewhat weaker assumptions. [**10.6\***]{} Our pseudolocality theorem is similar in some respects to the results of Ecker-Huisken \[E-Hu\] on the mean curvature flow. 
Ancient solutions with nonnegative curvature operator and bounded entropy ========================================================================= In this section we consider smooth solutions to the Ricci flow $(g_{ij})_t=-2R_{ij}, -\infty<t\le 0,$ such that for each $t$ the metric $g_{ij}(t)$ is a complete non-flat metric of bounded curvature and nonnegative curvature operator. Hamilton discovered a remarkable differential Harnack inequality for such solutions; we need only its trace version $$R_t+2<X,\nabla R>+2\mbox{Ric}(X,X)\ge 0$$ and its corollary, $R_t\ge 0.$ In particular, the scalar curvature at some time $t_0\le 0$ controls the curvatures for all $t\le t_0.$ We impose one more requirement on the solutions; namely, we fix some $\kappa >0$ and require that $g_{ij}(t)$ be $\kappa$-noncollapsed on all scales (the definitions 4.2 and 8.1 are essentially equivalent in this case). It is not hard to show that this requirement is equivalent to a uniform bound on the entropy $S,$ defined as in 5.1 using an arbitrary fundamental solution to the conjugate heat equation. 
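A consequence of (11.1) that is used repeatedly below (for instance in the proof of 11.7) is obtained by integrating the trace Harnack inequality along paths in space-time; we record it here as our own sketch, hedging the constant, since only the qualitative statement is needed: for $t_1<t_2\le 0,$

$$R(x_2,t_2)\ \ge\ R(x_1,t_1)\,\exp\Big(-C\,\frac{\mbox{dist}_{t_1}^2(x_1,x_2)}{t_2-t_1}\Big),$$

for some universal constant $C>0.$ Indeed, taking $X=\frac{1}{2}\dot{\gamma}$ in (11.1) along a path $\gamma$ gives $\frac{d}{dt}R(\gamma(t),t)\ge -\frac{1}{2}\mbox{Ric}(\dot{\gamma},\dot{\gamma}),$ and nonnegative curvature operator implies $\mbox{Ric}\le \frac{R}{2}g,$ so $\frac{d}{dt}\log R(\gamma(t),t)\ge -\frac{1}{4}|\dot{\gamma}|^2;$ one then takes $\gamma$ to be a minimal geodesic with respect to $g(t_1),$ parametrized with constant speed, and uses the fact that $|\dot{\gamma}|_{g(t)}$ does not increase in $t$ when the Ricci curvature is nonnegative.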
Pick an arbitrary point $(p,t_0)$ and define $\tilde{V}(\tau), l(q,\tau)$ as in 7.1, for $\tau(t)=t_0-t.$ Recall that for each $\tau>0$ we can find $q=q(\tau),$ such that $l(q,\tau)\le \frac{n}{2}.$ [**Proposition.**]{}[*The scalings of $g_{ij}(t_0-\tau)$ at $q(\tau)$ with factors $\tau^{-1}$ converge along a subsequence of $\tau\to\infty$ to a non-flat gradient shrinking soliton.* ]{} [*Proof (sketch).*]{} It is not hard to deduce from (7.16) that for any ${\epsilon}>0$ one can find ${\delta}>0$ such that both $ l(q,\tau)$ and $\tau R(q,t_0-\tau)$ do not exceed ${\delta}^{-1}$ whenever $\frac{1}{2}\bar{\tau}\le \tau\le \bar{\tau}$ and $\mbox{dist}_{t_0-\bar{\tau}}^2(q,q(\bar{\tau}))\le {\epsilon}^{-1}\bar{\tau}$ for some $\bar{\tau}>0.$ Therefore, taking into account the $\kappa$-noncollapsing assumption, we can take a blow-down limit, say $\bar{g}_{ij}(\tau),$ defined for $\tau\in(\frac{1}{2},1), (\bar{g}_{ij})_{\tau}=2\bar{R}_{ij}.$ We may assume also that functions $l$ tend to a locally Lipschitz function $\bar{l},$ satisfying (7.13),(7.14) in the sense of distributions. Now, since $\tilde{V}(\tau)$ is nonincreasing and bounded away from zero (because the scaled metrics are not collapsed near $q(\tau)$) the limit function $\bar{V}(\tau)$ must be a positive constant; this constant is strictly less than $\mbox{lim}_{\tau\to 0}\tilde{V}(\tau)=(4\pi)^{\frac{n}{2}},$ since $g_{ij}(t)$ is not flat. Therefore, on the one hand, (7.14) must become an equality, hence $\bar{l}$ is smooth, and on the other hand, by the description of the equality case in (7.12), $\bar{g}_{ij}(\tau)$ must be a gradient shrinking soliton with $\bar{R}_{ij}+\bar{\nabla}_i\bar{\nabla}_j \bar{l}-\frac{1}{2\tau}\bar{g}_{ij}=0.$ If this soliton is flat, then $\bar{l}$ is uniquely determined by the equality in (7.14), and it turns out that the value of $\bar{V}$ is exactly $(4\pi)^{\frac{n}{2}},$ which was ruled out. 
There is only one oriented two-dimensional solution, satisfying the assumptions stated in 11.1: the round sphere. [*Proof.*]{} Hamilton \[H 10\] proved that the round sphere is the only non-flat oriented nonnegatively curved gradient shrinking soliton in dimension two. Thus, the scalings of our ancient solution must converge to a round sphere. However, Hamilton \[H 10\] has also shown that an almost round sphere becomes rounder under Ricci flow; therefore our ancient solution must be round. Recall that for any non-compact complete riemannian manifold $M$ of nonnegative Ricci curvature and a point $p\in M,$ the function $VolB(p,r)r^{-n}$ is nonincreasing in $r>0;$ therefore, one can define an asymptotic volume ratio ${\mathcal{V}}$ as the limit of this function as $r\to\infty.$ [**Proposition.**]{} [*Under the assumptions of 11.1, ${\mathcal{V}}=0$ for each $t.$*]{} [*Proof.*]{} Induction on dimension. In dimension two the statement is vacuous, as we have just shown. Now let $n\ge 3,$ suppose that ${\mathcal{V}}>0$ for some $t=t_0,$ and consider the asymptotic scalar curvature ratio ${\mathcal{R}}=\mbox{lim sup}R(x,t_0)d^2(x)$ as $d(x)\to\infty.$ ($d(x)$ denotes the distance, at time $t_0,$ from $x$ to some fixed point $x_0$.) If ${\mathcal{R}}=\infty,$ then we can find a sequence of points $x_k$ and radii $r_k>0,$ such that $r_k/d(x_k)\to 0, R(x_k)r_k^2\to\infty,$ and $R(x)\le 2R(x_k)$ whenever $x\in B(x_k,r_k).$ Taking a blow-up limit of $g_{ij}(t)$ at $(x_k,t_0)$ with factors $R(x_k),$ we get a smooth non-flat ancient solution, satisfying the assumptions of 11.1, which splits off a line (this follows from a standard argument based on the Aleksandrov-Toponogov concavity). Thus, we can do dimension reduction in this case (cf. \[H 4,$\S 22$\]). If $0<{\mathcal{R}}<\infty,$ then a similar argument gives a blow-up limit in a ball of finite radius; this limit has the structure of a non-flat metric cone. 
This is ruled out by Hamilton’s strong maximum principle for nonnegative curvature operator. Finally, if ${\mathcal{R}}=0,$ then (in dimensions three and up) it is easy to see that the metric is flat. For every ${\epsilon}>0$ there exists $A<\infty$ with the following property. Suppose we have a sequence of (not necessarily complete) solutions $(g_k)_{ij}(t)$ with nonnegative curvature operator, defined on $M_k\times[t_k,0],$ such that for each $k$ the ball $B(x_k,r_k)$ at time $t=0$ is compactly contained in $M_k,$ $\frac{1}{2}R(x,t)\le R(x_k,0)=Q_k$ for all $(x,t), t_kQ_k\to -\infty, r_k^2Q_k\to\infty$ as $k\to\infty.$ Then $VolB(x_k,A/\sqrt{Q_k})\le{\epsilon}(A/\sqrt{Q_k})^n$ at $t=0$ if $k$ is large enough. [*Proof.*]{} Assuming the contrary, we may take a blow-up limit (at $(x_k,0)$ with factors $Q_k$) and get a non-flat ancient solution with positive asymptotic volume ratio at $t=0,$ satisfying the assumptions in 11.1, except, maybe, the $\kappa$-noncollapsing assumption. But if that assumption is violated for each $\kappa>0,$ then ${\mathcal{V}}(t)$ is not bounded away from zero as $t\to -\infty.$ However, this is impossible, because it is easy to see that ${\mathcal{V}}(t)$ is nonincreasing in $t.$ (Indeed, Ricci flow decreases the volume and does not decrease the distances faster than $C\sqrt{R}$ per time unit, by lemma 8.3(b).) Thus, $\kappa$-noncollapsing holds for some $\kappa>0,$ and we can apply the previous proposition to obtain a contradiction. For every $w>0$ there exist $B=B(w)<\infty, C=C(w)<\infty, \tau_0=\tau_0(w)>0,$ with the following properties. 
(a) Suppose we have a (not necessarily complete) solution $g_{ij}(t)$ to the Ricci flow, defined on $M\times [t_0,0],$ so that at time $t=0$ the metric ball $B(x_0,r_0)$ is compactly contained in $M.$ Suppose that at each time $t, t_0\le t\le 0,$ the metric $g_{ij}(t)$ has nonnegative curvature operator, and $VolB(x_0,r_0)\ge wr_0^n.$ Then we have an estimate $R(x,t)\le Cr_0^{-2}+B(t-t_0)^{-1}$ whenever $\mbox{dist}_t(x,x_0)\le \frac{1}{4}r_0.$ (b) If, rather than assuming a lower bound on volume for all $t,$ we assume it only for $t=0,$ then the same conclusion holds with $-\tau_0r_0^2$ in place of $t_0,$ provided that $-t_0\ge \tau_0r_0^2.$ [*Proof.*]{} By scaling assume $r_0=1.$ (a) Arguing by contradiction, consider sequences $B,C\to \infty,$ solutions $g_{ij}(t),$ and points $(x,t),$ such that $\mbox{dist}_t(x,x_0)\le \frac{1}{4}$ and $ R(x,t)> C+ B(t-t_0)^{-1}.$ Then, arguing as in the proof of claims 1,2 in 10.1, we can find a point $({\bar{x}},{\bar{t}}),$ satisfying $\mbox{dist}_{{\bar{t}}}({\bar{x}},x_0)<\frac{1}{3}, Q=R({\bar{x}},{\bar{t}})>C+B({\bar{t}}-t_0)^{-1},$ and such that $R(x',t')\le 2Q$ whenever ${\bar{t}}-AQ^{-1}\le t'\le {\bar{t}}, \mbox{dist}_{{\bar{t}}}(x',{\bar{x}})<AQ^{-\frac{1}{2}},$ where $A$ tends to infinity with $B,C.$ Applying the previous corollary at $({\bar{x}},{\bar{t}})$ and using the relative volume comparison, we get a contradiction with the assumption involving $w.$ (b) Let $B(w),C(w)$ be good for (a). 
We claim that $B=B(5^{-n}w),C=C(5^{-n}w)$ are good for (b), for an appropriate $\tau_0(w)>0.$ Indeed, let $g_{ij}(t)$ be a solution with nonnegative curvature operator, such that $VolB(x_0,1)\ge w$ at $t=0,$ and let $[-\tau,0]$ be the maximal time interval, where the assumption of (a) still holds, with $5^{-n}w$ in place of $w$ and with $-\tau$ in place of $t_0.$ Then at time $t=-\tau$ we must have $VolB(x_0,1)\le 5^{-n}w.$ On the other hand, from lemma 8.3 (b) we see that the ball $B(x_0,\frac{1}{4})$ at time $t=-\tau$ contains the ball $B(x_0,\frac{1}{4}-10(n-1)(\tau\sqrt{C}+2\sqrt{B\tau})) $ at time $t=0,$ and the volume of the former is at least as large as the volume of the latter. Thus, it is enough to choose $\tau_0=\tau_0(w)$ in such a way that the radius of the latter ball is $>\frac{1}{5}.$ Clearly, the proof also works if instead of assuming that curvature operator is nonnegative, we assumed that it is bounded below by $-r_0^{-2}$ in the (time-dependent) metric ball of radius $r_0,$ centered at $x_0.$ From now on we restrict our attention to oriented manifolds of dimension three. Under the assumptions in 11.1, the solutions on closed manifolds must be quotients of the round $\mathbb{S}^3$ or $\mathbb{S}^2\times\mathbb{R};$ this is proved in the same way as in two dimensions, since the gradient shrinking solitons are known from the work of Hamilton \[H 1,10\]. The noncompact solutions are described below. [**Theorem.**]{} *The set of non-compact ancient solutions, satisfying the assumptions of 11.1, is compact modulo scaling. That is, from any sequence of such solutions and points $(x_k,0)$ with $R(x_k,0)=1,$ we can extract a smoothly converging subsequence, and the limit satisfies the same conditions.* [*Proof.*]{} To ensure a converging subsequence it is enough to show that whenever $R(y_k,0)\to\infty,$ the distances at $t=0$ between $x_k$ and $y_k$ go to infinity as well. Assume the contrary. 
Define a sequence $z_k$ by the requirement that $z_k$ be the closest point to $x_k$ (at $t=0$), satisfying $R(z_k,0)\mbox{dist}_0^2(x_k,z_k)=1.$ We claim that $R(z,0)/R(z_k,0)$ is uniformly bounded for $z\in B(z_k,2R(z_k,0)^{-\frac{1}{2}}).$ Indeed, otherwise we could show, using 11.5 and relative volume comparison in nonnegative curvature, that the balls $B(z_k,R(z_k,0)^{-\frac{1}{2}})$ are collapsing on the scale of their radii. Therefore, using the local derivative estimate, due to W.-X. Shi (see \[H 4,$\S 13$\]), we get a bound on $R_t(z_k,t)$ of the order of $R^2(z_k,0).$ Then we can compare $1=R(x_k,0)\ge cR(z_k,-cR^{-1}(z_k,0))\ge cR(z_k,0)$ for some small $c>0,$ where the first inequality comes from the Harnack inequality, obtained by integrating (11.1). Thus, $R(z_k,0)$ are bounded. But now the existence of the sequence $y_k$ at bounded distance from $x_k$ implies, via 11.5 and relative volume comparison, that the balls $B(x_k,c)$ are collapsing, a contradiction. It remains to show that the limit has bounded curvature at $t=0.$ If this were not the case, then we could find a sequence $y_i$ going to infinity, such that $R(y_i,0)\to\infty$ and $R(y,0)\le 2R(y_i,0)$ for $y\in B(y_i,A_iR(y_i,0)^{-\frac{1}{2}}), A_i\to\infty.$ Then the limit of scalings at $(y_i,0)$ with factors $R(y_i,0)$ satisfies the assumptions in 11.1 and splits off a line. Thus by 11.3 it must be a round infinite cylinder. It follows that for large $i$ each $y_i$ is contained in a round cylindrical “neck” of radius $(\frac{1}{2}R(y_i,0))^{-\frac{1}{2}}\to 0,$ which can not happen in an open manifold of nonnegative curvature. Fix ${\epsilon}>0.$ Let $g_{ij}(t)$ be an ancient solution on a noncompact oriented three-manifold $M,$ satisfying the assumptions in 11.1. 
We say that a point $x_0\in M$ is the center of an ${\epsilon}$-neck, if the solution $g_{ij}(t)$ in the set $\{(x,t): -({\epsilon}Q)^{-1}<t\le 0, \mbox{dist}_0^2(x,x_0)<({\epsilon}Q)^{-1}\},$ where $Q=R(x_0,0),$ is, after scaling with factor $Q,$ ${\epsilon}$-close (in some fixed smooth topology) to the corresponding subset of the evolving round cylinder, having scalar curvature one at $t=0.$ [**Corollary**]{} (from theorem 11.7 and its proof) [*For any ${\epsilon}>0$ there exists $C=C({\epsilon},\kappa)>0,$ such that if $g_{ij}(t)$ satisfies the assumptions in 11.1, and $M_{\epsilon}$ denotes the set of points in $M,$ which are not centers of ${\epsilon}$-necks, then $M_{{\epsilon}}$ is compact and moreover, $\mbox{diam}M_{{\epsilon}} \le CQ^{-\frac{1}{2}},$ and $C^{-1}Q\le R(x,0)\le CQ$ whenever $x\in M_{{\epsilon}},$ where $Q=R(x_0,0)$ for some $x_0\in \partial M_{{\epsilon}}.$* ]{} [**11.9**]{} [*Remark.*]{} It can be shown that there exists $\kappa_0>0,$ such that if an ancient solution on a noncompact three-manifold satisfies the assumptions in 11.1 with some $\kappa>0,$ then it would satisfy these assumptions with $\kappa=\kappa_0.$ This follows from the arguments in 7.3, 11.2, and the statement (which is not hard to prove) that there are no noncompact three-dimensional gradient shrinking solitons, satisfying 11.1, other than the round cylinder and its $\mathbb{Z}_2$-quotients. Furthermore, I believe that there is only one (up to scaling) noncompact three-dimensional $\kappa$-noncollapsed ancient solution with bounded positive curvature - the rotationally symmetric gradient steady soliton, studied by R.Bryant. In this direction, I have a plausible, but not quite rigorous argument, showing that any such ancient solution can be made eternal, that is, can be extended for $t\in (-\infty ,+\infty);$ also I can prove uniqueness in the class of gradient steady solitons. 
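Returning to the definition of an ${\epsilon}$-neck above, it may help to note (our remark) that the definition is invariant under parabolic rescaling. If $\tilde{g}(\tilde{t})=Qg(t)$ with $\tilde{t}=Qt,$ then

$$\mbox{dist}_{\tilde{g}}=\sqrt{Q}\,\mbox{dist}_g,\qquad R_{\tilde{g}}=Q^{-1}R_g,\qquad \tilde{R}(x_0,0)=1,$$

so the set $\{(x,t): -({\epsilon}Q)^{-1}<t\le 0, \mbox{dist}_0^2(x,x_0)<({\epsilon}Q)^{-1}\}$ becomes $\{(x,\tilde{t}): -{\epsilon}^{-1}<\tilde{t}\le 0, \mbox{dist}_0^2(x,x_0)<{\epsilon}^{-1}\}$ after rescaling; being the center of an ${\epsilon}$-neck thus depends only on the geometry of this unit-scale parabolic neighborhood, not on the absolute size of the curvature.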
[**11.10\***]{} The earlier work on ancient solutions and all that can be found in \[H 4, $\S 16-22,25,26$\]. Almost nonnegative curvature in dimension three =============================================== [**12.1**]{} Let $\phi$ be a decreasing function of one variable, tending to zero at infinity. A solution to the Ricci flow is said to have $\phi$-almost nonnegative curvature if it satisfies $Rm(x,t)\ge -\phi(R(x,t))R(x,t)$ for each $(x,t).$ [**Theorem.**]{} [*Given ${\epsilon}>0,\kappa>0$ and a function $\phi$ as above, one can find $r_0>0$ with the following property. If $g_{ij}(t), 0\le t\le T$ is a solution to the Ricci flow on a closed three-manifold $M,$ which has $\phi$-almost nonnegative curvature and is $\kappa$-noncollapsed on scales $<r_0,$ then for any point $(x_0,t_0)$ with $t_0\ge 1$ and $Q=R(x_0,t_0)\ge r_0^{-2},$ the solution in $\{(x,t):\mbox{dist}^2_{t_0}(x,x_0)<({\epsilon}Q)^{-1}, t_0-({\epsilon}Q)^{-1}\le t\le t_0\}$ is, after scaling by the factor $Q,$ ${\epsilon}$-close to the corresponding subset of some ancient solution, satisfying the assumptions in 11.1.*]{} [*Proof.*]{} An argument by contradiction. Take a sequence of $r_0$ converging to zero, and consider the solutions $g_{ij}(t),$ such that the conclusion does not hold for some $(x_0,t_0);$ moreover, by tampering with the condition $t_0\ge 1$ a little bit, choose among all such $(x_0,t_0),$ in the solution under consideration, the one with nearly the smallest curvature $Q.$ (More precisely, we can choose $(x_0,t_0)$ in such a way that the conclusion of the theorem holds for all $(x,t),$ satisfying $R(x,t)>2Q, t_0-HQ^{-1}\le t\le t_0,$ where $H\to\infty$ as $r_0\to 0$.) Our goal is to show that the sequence of blow-ups of such solutions at such points with factors $Q$ would converge, along some subsequence of $r_0\to 0,$ to an ancient solution, satisfying 11.1. 
[**Claim 1.**]{} [*For each $({\bar{x}},{\bar{t}})$ with $t_0-HQ^{-1}\le {\bar{t}}\le t_0$ we have $R(x,t)\le 4\bar{Q}$ whenever ${\bar{t}}-c\bar{Q}^{-1}\le t\le {\bar{t}}$ and $\mbox{dist}_{{\bar{t}}}(x,{\bar{x}})\le c\bar{Q}^{-\frac{1}{2}},$ where $\bar{Q}=Q+R({\bar{x}},{\bar{t}})$ and $c=c(\kappa)>0$ is a small constant.*]{} [*Proof of Claim 1.*]{} Use the fact (following from the choice of $(x_0,t_0)$ and the description of the ancient solutions) that for each $(x,t)$ with $R(x,t)>2Q$ and $t_0-HQ^{-1}\le t\le t_0$ we have the estimates $|R_t(x,t)|\le CR^2(x,t),$ $|\nabla R|(x,t)\le CR^{\frac{3}{2}}(x,t).$ [**Claim 2.**]{} [*There exists $c=c(\kappa)>0$ and for any $A>0$ there exist $D=D(A)<\infty, \rho_0=\rho_0(A)>0,$ with the following property. Suppose that $r_0<\rho_0,$ and let $\gamma$ be a shortest geodesic with endpoints ${\bar{x}},x$ in $g_{ij}({\bar{t}}),$ for some ${\bar{t}}\in[t_0-HQ^{-1},t_0],$ such that $R(y,{\bar{t}})>2Q$ for each $y\in\gamma.$ Let $z\in\gamma$ satisfy $cR(z,{\bar{t}})>R({\bar{x}},{\bar{t}})=\bar{Q}.$ Then $\mbox{dist}_{{\bar{t}}}({\bar{x}},z)\ge A\bar{Q}^{-\frac{1}{2}}$ whenever $R(x,{\bar{t}})\ge D\bar{Q}.$*]{} [*Proof of Claim 2.*]{} Note that from the choice of $(x_0,t_0)$ and the description of the ancient solutions it follows that an appropriate parabolic (backward in time) neighborhood of a point $y\in\gamma$ at $t={\bar{t}}$ is ${\epsilon}$-close to the evolving round cylinder, provided $c^{-1}\bar{Q}\le R(y,{\bar{t}})\le cR(x,{\bar{t}})$ for an appropriate $c=c(\kappa).$ Now assume that the conclusion of the claim does not hold, take $r_0$ to zero, $R(x,{\bar{t}})$ to infinity, and consider the scalings around $({\bar{x}},{\bar{t}})$ with factors $\bar{Q}.$ We can imagine two possibilities for the behavior of the curvature along $\gamma$ in the scaled metric: either it stays bounded at bounded distances from ${\bar{x}},$ or not. 
In the first case we can take a limit (for a subsequence) of the scaled metrics along $\gamma$ and get a nonnegatively curved almost cylindrical metric, with $\gamma$ going to infinity. Clearly, in this case the curvature at any point of the limit does not exceed $c^{-1};$ therefore, the point $z$ must have escaped to infinity, and the conclusion of the claim stands. In the second case, we can also take a limit along $\gamma;$ it is a smooth nonnegatively curved manifold near ${\bar{x}}$ and has cylindrical shape where curvature is large; the radius of the cylinder goes to zero as we approach the (first) singular point, which is located at finite distance from ${\bar{x}};$ the region beyond the first singular point will be ignored. Thus, at $t={\bar{t}}$ we have a metric, which is a smooth metric of nonnegative curvature away from a single singular point $o$. Since the metric is cylindrical at points close to $o,$ and the radius of the cylinder is at most ${\epsilon}$ times the distance from $o,$ the curvature at $o$ is nonnegative in Aleksandrov sense. Thus, the metric near $o$ must be cone-like. In other words, the scalings of our metric at points $x_i\to o$ with factors $R(x_i,{\bar{t}})$ converge to a piece of nonnegatively curved non-flat metric cone. Moreover, using claim 1, we see that we actually have the convergence of the solutions to the Ricci flow on some time interval, and not just metrics at $t={\bar{t}}.$ Therefore, we get a contradiction with the strong maximum principle of Hamilton \[H 2\]. Now continue the proof of theorem, and recall that we are considering scalings at $(x_0,t_0)$ with factor $Q.$ It follows from claim 2 that at $t=t_0$ the curvature of the scaled metric is bounded at bounded distances from $x_0.$ This allows us to extract a smooth limit at $t=t_0$ (of course, we use the $\kappa$-noncollapsing assumption here). 
The limit has bounded nonnegative curvature (if the curvatures were unbounded, we would have a sequence of cylindrical necks with radii going to zero in a complete manifold of nonnegative curvature). Therefore, by claim 1, we have a limit not only at $t=t_0,$ but also in some interval of times smaller than $t_0.$ We want to show that the limit actually exists for all $t<t_0.$ Assume that this is not the case, and let $t'$ be the smallest value of time, such that the blow-up limit can be taken on $(t',t_0].$ From the differential Harnack inequality of Hamilton \[H 3\] we have an estimate $R_t(x,t)\ge -R(x,t)(t-t')^{-1},$ therefore, if $\tilde{Q}$ denotes the maximum of scalar curvature at $t=t_0,$ then $R(x,t)\le \tilde{Q}\frac{t_0-t'}{t-t'}.$ Hence by lemma 8.3(b) $\mbox{dist}_t(x,y)\le \mbox{dist}_{t_0}(x,y)+C$ for all $t,$ where $C=10n(t_0-t')\sqrt{\tilde{Q}}.$ The next step is needed only if our limit is noncompact. In this case there exists $D>0,$ such that for any $y$ satisfying $d=\mbox{dist}_{t_0}(x_0,y)>D,$ one can find $x$ satisfying $\mbox{dist}_{t_0}(x,y)=d, \mbox{dist}_{t_0}(x,x_0)>\frac{3}{2}d.$ We claim that the scalar curvature $R(y,t)$ is uniformly bounded for all such $y$ and all $t\in (t',t_0].$ Indeed, if $R(y,t)$ is large, then the neighborhood of $(y,t)$ is like in an ancient solution; therefore, (long) shortest geodesics $\gamma$ and $\gamma_0,$ connecting at time $t$ the point $y$ to $x$ and $x_0$ respectively, make the angle close to $0$ or $\pi$ at $y;$ the former case is ruled out by the assumptions on distances, if $D>10C;$ in the latter case, $x$ and $x_0$ are separated at time $t$ by a small neighborhood of $y,$ with diameter of order $R(y,t)^{-\frac{1}{2}},$ hence the same must be true at time $t_0,$ which is impossible if $R(y,t)$ is too large. 
Thus we have a uniform bound on curvature outside a certain compact set, which has uniformly bounded diameter for all $t\in (t',t_0].$ Then claim 2 gives a uniform bound on curvature everywhere. Hence, by claim 1, we can extend our blow-up limit past $t'$ - a contradiction. [**12.2 Theorem.**]{} [ *Given a function $\phi$ as above, for any $A>0$ there exists $K=K(A)<\infty $ with the following property. Suppose in dimension three we have a solution to the Ricci flow with $\phi$-almost nonnegative curvature, which satisfies the assumptions of theorem 8.2 with $r_0=1.$ Then $R(x,1)\le K$ whenever $\mbox{dist}_1(x,x_0)<A.$*]{} [*Proof.*]{} In the first step of the proof we check the following [**Claim.**]{} [*There exists $K=K(A)<\infty ,$ such that a point $(x,1)$ satisfies the conclusion of the previous theorem 12.1 (for some fixed small ${\epsilon}>0$), whenever $R(x,1)>K$ and $\mbox{dist}_1(x,x_0)<A.$*]{} The proof of this statement essentially repeats the proof of the previous theorem (the $\kappa$-noncollapsing assumption is ensured by theorem 8.2). The only difference is in the beginning. So let us argue by contradiction, and suppose we have a sequence of solutions and points $x$ with $\mbox{dist}_1(x,x_0)<A$ and $R(x,1)\to\infty,$ which do not satisfy the conclusion of 12.1. 
Then an argument, similar to the one proving claims 1,2 in 10.1, delivers points $({\bar{x}},{\bar{t}})$ with $\frac{1}{2}\le {\bar{t}}\le 1, \mbox{dist}_{{\bar{t}}}({\bar{x}},x_0)<2A,$ with $Q=R({\bar{x}},{\bar{t}})\to\infty ,$ and such that $(x,t)$ satisfies the conclusion of 12.1 whenever $R(x,t)>2Q, {\bar{t}}-DQ^{-1}\le t\le{\bar{t}}, \mbox{dist}_{{\bar{t}}}({\bar{x}},x)<DQ^{-\frac{1}{2}},$ where $D\to\infty.$ (There is a little subtlety here in the application of lemma 8.3(b); nevertheless, it works, since we need to apply it only when the endpoint other than $x_0$ either satisfies the conclusion of 12.1, or has scalar curvature at most $2Q$) After such $({\bar{x}},{\bar{t}})$ are found, the proof of 12.1 applies. Now, having checked the claim, we can prove the theorem by applying the claim 2 of the previous theorem to the appropriate segment of the shortest geodesic, connecting $x$ and $x_0.$ [**12.3 Theorem.**]{} [*For any $w>0$ there exist $\tau=\tau(w)>0, K=K(w)<\infty, \rho=\rho(w)>0$ with the following property. Suppose we have a solution $g_{ij}(t)$ to the Ricci flow, defined on $M\times [0,T),$ where $M$ is a closed three-manifold, and a point $(x_0,t_0),$ such that the ball $B(x_0,r_0)$ at $t=t_0$ has volume $\ge wr_0^n,$ and sectional curvatures $\ge -r_0^{-2}$ at each point. Suppose that $g_{ij}(t)$ is $\phi$-almost nonnegatively curved for some function $\phi$ as above. 
Then we have an estimate $R(x,t)<Kr_0^{-2}$ whenever $t_0\ge 4\tau r_0^2, t\in [t_0-\tau r_0^2,t_0], \mbox{dist}_t(x,x_0)\le \frac{1}{4}r_0,$ provided that $\phi(r_0^{-2})<\rho.$*]{} [*Proof.*]{} If we knew that sectional curvatures are $\ge -r_0^{-2}$ for all $t,$ then we could just apply corollary 11.6(b) (with the remark after its proof) and take $\tau(w)=\tau_0(w)/2, K(w)=C(w)+2B(w)/\tau_0(w).$ Now fix these values of $\tau ,K,$ consider a $\phi$-almost nonnegatively curved solution $g_{ij}(t),$ a point $(x_0,t_0)$ and a radius $r_0>0,$ such that the assumptions of the theorem do hold whereas the conclusion does not. We may assume that any other point $(x',t')$ and radius $r'>0$ with that property has either $t'>t_0$ or $t'<t_0 -2\tau r_0^2,$ or $2r'>r_0.$ Our goal is to show that $\phi(r_0^{-2})$ is bounded away from zero. Let $\tau '>0 $ be the largest time interval such that $Rm(x,t)\ge -r_0^{-2}$ whenever $t\in[t_0-\tau 'r_0^2,t_0], \mbox{dist}_t(x,x_0)\le r_0.$ If $\tau '\ge 2\tau,$ we are done by corollary 11.6(b). 
Otherwise, by elementary Aleksandrov space theory, we can find at time $t'=t_0-\tau 'r_0^2$ a ball $B(x',r')\subset B(x_0,r_0)$ with $VolB(x',r')\ge \frac{1}{2}\omega_n(r')^n,$ and with radius $r'\ge cr_0$ for some small constant $c=c(w)>0.$ By the choice of $(x_0,t_0)$ and $r_0,$ the conclusion of our theorem holds for $(x',t'),r'.$ Thus we have an estimate $R(x,t)\le K(r')^{-2}$ whenever $t\in [t'-\tau (r')^2,t'], \mbox{dist}_t(x,x')\le \frac{1}{4}r'.$ Now we can apply the previous theorem (or rather its scaled version) and get an estimate on $R(x,t)$ whenever $t\in [t'-\frac{1}{2}\tau (r')^2,t'], \mbox{dist}_t(x',x)\le 10r_0.$ Therefore, if $r_0>0$ is small enough, we have $Rm(x,t)\ge -r_0^{-2}$ for those $(x,t),$ which is a contradiction to the choice of $\tau '.$ [**12.4 Corollary**]{} (from 12.2 and 12.3) [*Given a function $\phi$ as above, for any $w>0$ one can find $\rho>0$ such that if $g_{ij}(t)$ is a $\phi$-almost nonnegatively curved solution to the Ricci flow, defined on $M\times [0,T),$ where $M$ is a closed three-manifold, and if $B(x_0,r_0)$ is a metric ball at time $t_0\ge 1,$ with $r_0<\rho,$ and such that $\min Rm(x,t_0)$ over $x\in B(x_0,r_0)$ is equal to $-r_0^{-2},$ then $VolB(x_0,r_0)\le wr_0^n.$*]{} The global picture of the Ricci flow in dimension three ======================================================= [**13.1**]{} Let $g_{ij}(t)$ be a smooth solution to the Ricci flow on $M\times [1,\infty),$ where $M$ is a closed oriented three-manifold. 
Then, according to \[H 6, theorem 4.1\], the normalized curvatures $\tilde{Rm}(x,t)=tRm(x,t)$ satisfy an estimate of the form $\tilde{Rm}(x,t)\ge -\phi(\tilde{R}(x,t))\tilde{R}(x,t),$ where $\phi$ behaves at infinity as $\frac{1}{\mbox{log}}.$ This estimate allows us to apply the results 12.3 and 12.4, and obtain the following [**Theorem.**]{} [*For any $w>0$ there exist $K=K(w)<\infty , \rho=\rho(w)>0,$ such that for sufficiently large times $t$ the manifold $M$ admits a thick-thin decomposition $M=M_{thick}\bigcup M_{thin}$ with the following properties. (a) For every $x\in M_{thick}$ we have an estimate $|\tilde{Rm}|\le K$ in the ball $B(x,\rho(w)\sqrt{t}),$ and the volume of this ball is at least $\frac{1}{10}w(\rho(w)\sqrt{t})^n.$ (b) For every $y\in M_{thin}$ there exists $r=r(y), 0<r<\rho(w)\sqrt{t},$ such that for all points in the ball $B(y,r)$ we have $Rm\ge -r^{-2},$ and the volume of this ball is $<wr^n.$*]{} Now the arguments in \[H 6\] show that either $M_{thick}$ is empty for large $t,$ or, for an appropriate sequence of $t\to \infty$ and $w\to 0,$ it converges to a (possibly, disconnected) complete hyperbolic manifold of finite volume, whose cusps (if there are any) are incompressible in $M.$ On the other hand, collapsing with lower curvature bound in dimension three is understood well enough to claim that, for sufficiently small $w>0,$ $\ M_{thin}$ is homeomorphic to a graph manifold. The natural questions that remain open are whether the normalized curvatures must stay bounded as $t\to \infty,$ and whether reducible manifolds and manifolds with finite fundamental group can have metrics which evolve smoothly by the Ricci flow on the infinite time interval.
[**13.2**]{} Now suppose that $g_{ij}(t)$ is defined on $M\times [1,T), T<\infty ,$ and goes singular as $t\to T.$ Then using 12.1 we see that, as $t\to T,$ either the curvature goes to infinity everywhere, and then $M$ is a quotient of either $\mathbb{S}^3$ or $\mathbb{S}^2\times \mathbb{R},$ or the region of high curvature in $g_{ij}(t)$ is the union of several necks and capped necks, which in the limit turn into horns (the horns most likely have finite diameter, but at the moment I don’t have a proof of that). Then at the time $T$ we can replace the tips of the horns by smooth caps and continue running the Ricci flow until the solution goes singular for the next time, etc. It turns out that those tips can be chosen in such a way that the need for the surgery will arise only a finite number of times on every finite time interval. The proof of this is in the same spirit, as our proof of 12.1; it is technically quite complicated, but requires no essentially new ideas. It is likely that by passing to the limit in this construction one would get a canonically defined Ricci flow through singularities, but at the moment I don’t have a proof of that. (The positive answer to the conjecture in 11.9 on the uniqueness of ancient solutions would help here.) Moreover, it can be shown, using an argument based on 12.2, that every maximal horn at any time $T,$ when the solution goes singular, has volume at least $cT^n;$ this easily implies that the solution is smooth (if nonempty) from some finite time on.
Thus the topology of the original manifold can be reconstructed as a connected sum of manifolds, admitting a thick-thin decomposition as in 13.1, and quotients of $\mathbb{S}^3$ and $\mathbb{S}^2\times\mathbb{R}.$ [**13.3\***]{} Another differential-geometric approach to the geometrization conjecture is being developed by Anderson \[A\]; he studies the elliptic equations, arising as Euler-Lagrange equations for certain functionals of the riemannian metric, perturbing the total scalar curvature functional, and one can observe certain parallelism between his work and that of Hamilton, especially taking into account that, as we have shown in 1.1, Ricci flow is the gradient flow for a functional, that closely resembles the total scalar curvature. References {#references .unnumbered} ========== \[A\] M.T.Anderson Scalar curvature and geometrization conjecture for three-manifolds. Comparison Geometry (Berkeley, 1993-94), MSRI Publ. 30 (1997), 49-82. \[B-Em\] D.Bakry, M.Emery Diffusions hypercontractives. Seminaire de Probabilites XIX, 1983-84, Lecture Notes in Math. 1123 (1985), 177-206. \[Cao-C\] H.-D. Cao, B.Chow Recent developments on the Ricci flow. Bull. AMS 36 (1999), 59-74. \[Ch-Co\] J.Cheeger, T.H.Colding On the structure of spaces with Ricci curvature bounded below I. Jour. Diff. Geom. 46 (1997), 406-480. \[C\] B.Chow Entropy estimate for Ricci flow on compact two-orbifolds. Jour. Diff. Geom. 33 (1991), 597-600. \[C-Chu 1\] B.Chow, S.-C. Chu A geometric interpretation of Hamilton’s Harnack inequality for the Ricci flow. Math. Res. Let. 2 (1995), 701-718. \[C-Chu 2\] B.Chow, S.-C. Chu A geometric approach to the linear trace Harnack inequality for the Ricci flow. Math. Res. Let. 3 (1996), 549-568. \[D\] E.D’Hoker String theory. Quantum fields and strings: a course for mathematicians (Princeton, 1996-97), 807-1011. \[E 1\] K.Ecker Logarithmic Sobolev inequalities on submanifolds of euclidean space. Jour. Reine Angew. Mat. 522 (2000), 105-118.
\[E 2\] K.Ecker A local monotonicity formula for mean curvature flow. Ann. Math. 154 (2001), 503-525. \[E-H\] K.Ecker, G.Huisken Interior estimates for hypersurfaces moving by mean curvature. Invent. Math. 105 (1991), 547-569. \[Gaw\] K.Gawedzki Lectures on conformal field theory. Quantum fields and strings: a course for mathematicians (Princeton, 1996-97), 727-805. \[G\] L.Gross Logarithmic Sobolev inequalities and contractivity properties of semigroups. Dirichlet forms (Varenna, 1992) Lecture Notes in Math. 1563 (1993), 54-88. \[H 1\] R.S.Hamilton Three manifolds with positive Ricci curvature. Jour. Diff. Geom. 17 (1982), 255-306. \[H 2\] R.S.Hamilton Four manifolds with positive curvature operator. Jour. Diff. Geom. 24 (1986), 153-179. \[H 3\] R.S.Hamilton The Harnack estimate for the Ricci flow. Jour. Diff. Geom. 37 (1993), 225-243. \[H 4\] R.S.Hamilton Formation of singularities in the Ricci flow. Surveys in Diff. Geom. 2 (1995), 7-136. \[H 5\] R.S.Hamilton Four-manifolds with positive isotropic curvature. Commun. Anal. Geom. 5 (1997), 1-92. \[H 6\] R.S.Hamilton Non-singular solutions of the Ricci flow on three-manifolds. Commun. Anal. Geom. 7 (1999), 695-729. \[H 7\] R.S.Hamilton A matrix Harnack estimate for the heat equation. Commun. Anal. Geom. 1 (1993), 113-126. \[H 8\] R.S.Hamilton Monotonicity formulas for parabolic flows on manifolds. Commun. Anal. Geom. 1 (1993), 127-137. \[H 9\] R.S.Hamilton A compactness property for solutions of the Ricci flow. Amer. Jour. Math. 117 (1995), 545-572. \[H 10\] R.S.Hamilton The Ricci flow on surfaces. Contemp. Math. 71 (1988), 237-261. \[Hu\] G.Huisken Asymptotic behavior for singularities of the mean curvature flow. Jour. Diff. Geom. 31 (1990), 285-299. \[I\] T.Ivey Ricci solitons on compact three-manifolds. Diff. Geo. Appl. 3 (1993), 301-307. \[L-Y\] P.Li, S.-T. Yau On the parabolic kernel of the Schrodinger operator. Acta Math. 156 (1986), 153-201. \[Lott\] J.Lott Some geometric properties of the Bakry-Emery-Ricci tensor. arXiv:math.DG/0211065.
[^1]: St.Petersburg branch of Steklov Mathematical Institute, Fontanka 27, St.Petersburg 191011, Russia. Email: perelman@pdmi.ras.ru or perelman@math.sunysb.edu ; I was partially supported by personal savings accumulated during my visits to the Courant Institute in the Fall of 1992, to the SUNY at Stony Brook in the Spring of 1993, and to the UC at Berkeley as a Miller Fellow in 1993-95. I’d like to thank everyone who worked to make those opportunities available to me.
--- abstract: 'A dynamic logic ${\mathbf B}$ can be assigned to every automaton ${\mathcal A}$, regardless of whether ${\mathcal A}$ is deterministic or nondeterministic. This logic enables us to formulate observations on ${\mathcal A}$ in the form of composed propositions and, due to a transition functor $T$, it captures the dynamic behaviour of ${\mathcal A}$. Conditions are formulated under which the automaton ${\mathcal A}$ can be recovered by means of ${\mathbf B}$ and $T$.' author: - 'Ivan Chajda[^1]' - Jan Paseka title: Dynamic logic assigned to automata --- Introduction ============ The aim of the paper is to assign a certain logic to a given automaton without regard to whether it is deterministic or nondeterministic. This logic has to be dynamic in the sense that it captures the dynamics of a working automaton. We consider an [*automaton*]{} as ${\mathcal A}=(X,S,R)$, where $X$ is a non-empty set of [*inputs*]{}, $S$ is a non-empty set of [*states*]{} and $R\subseteq X\times S\times S$ is the set of [*labelled transitions*]{}. In this case we say that $R$ is a [*state-transition relation*]{} and it is considered as a dynamics of ${\mathcal A}$. Hence, the automaton ${\mathcal A}$ can be visualized as a graph whose vertices are states and edges denote (possibly multiple) transitions $s\xrightarrow{x} t$ from one state $s$ to another state $t$ provided an input $x$ arrives; this is visualized by a label $x$ on the edge $(s,t)$. In particular, motivated by the above considerations and e.g. by the paper [@perinotti] where a denumerable set of vertices is used in studying quantum automata to recover the Weyl, Dirac and Maxwell dynamics in the relativistic limit, we have to assume that the sets $X$ and $S$ can have arbitrarily large cardinality. Any physical system can be in some sense considered as an automaton. Its states are then states of the automaton and the transition relation describes the transitions of the physical system from a given state to an admissible one.
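The definition of an automaton as a triple ${\mathcal A}=(X,S,R)$ is easy to make concrete. The following sketch (all names and data are illustrative, not taken from the paper) stores the labelled transitions and derives, for each input $x$ and state $s$, the set of admissible successor states; ${\mathcal A}$ is deterministic exactly when every such successor set has at most one element.

```python
# Minimal sketch of an automaton A = (X, S, R); names are illustrative.
from collections import defaultdict

class Automaton:
    def __init__(self, X, S, R):
        self.X, self.S, self.R = set(X), set(S), set(R)  # R subset of X x S x S
        self.succ = defaultdict(set)                     # (x, s) -> set of successors t
        for (x, s, t) in self.R:
            self.succ[(x, s)].add(t)

    def step(self, x, s):
        """States reachable from s under input x (may be empty or non-singleton)."""
        return self.succ[(x, s)]

    def is_deterministic(self):
        # deterministic iff every labelled transition s --x--> t is unique
        return all(len(ts) <= 1 for ts in self.succ.values())
```

For instance, an automaton with transitions `("x","s","t")` and `("x","s","u")` reports `step("x","s") == {"t","u"}` and is nondeterministic.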
It should be noted that a quantum physical system is nondeterministic since particles can occupy a so-called superposition, i.e., they may randomly select a state from the set of admissible states. On the other hand, we often formulate certain propositions on an automaton ${\mathcal A}$ and deduce conclusions about the behaviour of ${\mathcal A}$ in the present (i.e., a [*description*]{}) or in the (near) future (i.e., a [*forecast*]{}). It is apparent that for this aim we need a certain logic which is derived from a given automaton and which enables us to formulate propositions on ${\mathcal A}$ and to deduce conclusions and consequences. Due to the mentioned dynamics of ${\mathcal A}$, our logic ${\mathbf B}$ should contain a tool capturing this dynamics. This tool will be called a [*transition functor*]{}. This transition functor will assign to every proposition $p\in {\mathbf B}$ and input $x\in X$ another proposition $q$. In a certain case, this functor can be considered as a modal functor with one more input from $X$. The above-mentioned approach makes sense if our logic ${\mathbf B}$ with a transition functor $T$ enables us to reconstruct the dynamics of a given automaton ${\mathcal A}$. One can compare our approach with the approach from [@perinotti] where an automaton can be represented by an operator over a Hilbert space or with the approach from [@yongming] or [@mendivil] where the role of the transition functor is played by a map from $S$ to $({\mathbf M}^{S})^{X}$ where ${\mathbf M}$ is a bounded lattice of truth-values or by a map from $S$ to $({[0,1]}^{S})^{X}$. In what follows, we present a systematic approach to obtaining such a transition functor and a logic ${\mathbf B}$ from which the state-transition relation $R$ can be reconstructed. Since the conditions of our approach are formulated in a purely algebraic way, we need to develop an algebraic background (see also, e.g., [@Blyth]).
It is worth noticing that the transition functor will be constructed formally similarly to the tense operators introduced by J. Burgess [@burges] for the classical logic and developed by the authors for several non-classical logics, see [@dyn], [@dem] and [@doa], and also the monograph [@monochapa]. Because we are not interested in outputs of the automaton ${\mathcal A}$, we will consider ${\mathcal A}$ as the so-called [*acceptor*]{} only. It is worth noticing that certain (temporal) logics assigned to automata were already investigated by several authors, see e.g. the seminal papers on temporal logics for programs by Vardi [@vardibuchi], [@vardilinear], the papers [@dixon; @konur] and the monograph [@fisher] for additional results and references. However, our approach is different. Namely, our logic assigned to an automaton is equipped with the so-called transition operator which makes the logic dynamic. Besides this, the observer or a user of an automaton can formulate propositions revealing our knowledge about it depending on the input. The truth-values of these propositions depend on states and inputs; let us assume that these propositions can acquire only two values, namely either TRUE or FALSE. For example, if we fix an input $x\in X$, the proposition $p/x$ can be true if the automaton ${\mathcal A}$ is in the state $s$ but false if ${\mathcal A}$ is not in the state $s$. Hence, for each state $s\in S$ we can evaluate the truth-value of $p/x$; it is denoted by $p/x(s)$. As mentioned above, $p/x(s)\in \{0, 1\}$ where $0$ indicates the truth-value FALSE and $1$ indicates TRUE. Denote by $B$ the set of propositions about the automaton ${\mathcal A}$ formulated by the observer.
We can introduce the order $\leq$ on $B$ as follows: $$\text{for}\ p,q\in B, p\leq q\ \text{if and only if}\ p(s)\leq q(s)\ \text{for all}\ s\in S.$$ One can immediately check that the contradiction, i.e., the proposition with constant truth-value $0$, is the least element and the tautology, i.e., the proposition with the constant truth-value $1$, is the greatest element of the partially ordered set $(B;\leq)$; this fact will be expressed by the notation ${\mathbf B}=(B;\leq, 0, 1)$ for the bounded partially ordered set of propositions about the automaton ${\mathcal A}$. We summarize our description as follows: 1. every automaton ${\mathcal A}$ will be identified with the triple $(B,X, S)$, where $B$ is the set of propositions about ${\mathcal A}$, $X$ is the set of possible inputs and $S$ is the set of states on ${\mathcal A}$; 2. we are given a set of labelled transitions $R\subseteq X\times S\times S$ such that, for an input $x\in X$, ${\mathcal A}$ can go from $s$ to $t$ provided $(x, s,t)\in R$; 3. the set $B$ is partially ordered by values of propositions as shown above. If $s\xrightarrow{x} t_1$ and $s\xrightarrow{x} t_2$ imply $t_1=t_2$ for all $s, t_1, t_2\in S$ and $x\in X$, we say that ${\mathcal A}$ is a [*deterministic automaton*]{}. If ${\mathcal A}$ is not deterministic we say that it is [*nondeterministic*]{}. To shed light on the previous concepts, let us present the following example. \[firef\]\[expend1\] First, let us present a very simple automaton ${\mathcal A}$ describing a SkyLine Terminal Transfer Service at an airport between Terminals 1 and 2. The SkyLine train is housed, repaired and maintained in the engine shed and the only way to get there is through Terminal 2. The observer can distinguish three states as follows: 1. $s_1$ means that the SkyLine train is in Terminal 1, 2. $s_2$ means that the SkyLine train is in Terminal 2, 3. $s_3$ means that the SkyLine train is in the engine shed. There are two possible actions: 1.
$x_1$ means that the passengers entered the SkyLine train, 2. $x_2$ means that the SkyLine train has to be moved to the engine shed. If the SkyLine train is in Terminal 1 or in Terminal 2 then, after the passengers entered it, it moves to the other terminal. If the SkyLine train is in Terminal 2 then, after the request that the SkyLine train has to be moved to the engine shed is issued, it moves to the engine shed. If the SkyLine train is in the engine shed then, regardless of what action is requested, it stays there. The set $R$ of labelled transitions on the set $S=\{s_1, s_2, s_3\}$ of states under actions from the set $X=\{x_1, x_2\}$ is of the form $$R=\{(x_1,s_1, s_2), (x_1,s_2, s_1), (x_1,s_3,s_3), (x_2,s_2, s_3), (x_2,s_3, s_3)\}$$ and it can be visualized as a labelled graph with vertices $s_1$, $s_2$, $s_3$, edges $s_1\xrightarrow{x_1} s_2$, $s_2\xrightarrow{x_1} s_1$ and $s_2\xrightarrow{x_2} s_3$, and loops $s_3\xrightarrow{x_1} s_3$ and $s_3\xrightarrow{x_2} s_3$. The set $B=\{0, p, q, r, p', q', r', 1\}$ of possible propositions about the automaton ${\mathcal A}$ is as follows: 1. $0$ means that the SkyLine train is in no state of $S$, 2. $p$ means that the SkyLine train is in Terminal 1, 3. $q$ means that the SkyLine train is in Terminal 2, 4. $r$ means that the SkyLine train is in the engine shed, 5. $1$ means that the SkyLine train is in at least one state of $S$. Considering ${\mathbf B}$ as a classical logic (represented by a Boolean algebra $(B; \vee, \wedge, ', 0, 1)$), we can apply logical connectives conjunction $\wedge$, disjunction $\vee$, negation $'$ and implication ${\Longrightarrow}$ to create new propositions about ${\mathcal A}$. In our case, we can get e.g.
$p'=q\vee r$ which means that the SkyLine train is either in Terminal 2 or in the engine shed, etc. Altogether, we obtain eight propositions. We may identify $\mathbf B$ with the Boolean algebra $\{0, 1\}^S$ as follows: $$0=(0,0,0),\quad p=(1, 0, 0),\quad q=(0, 1, 0),\quad r=(0, 0, 1),$$ $$p'=(0,1, 1),\quad q'=(1, 0, 1),\quad r'=(1, 1, 0),\quad 1=(1,1,1).$$ The interpretation of propositions from $B$ is as follows: for any $\alpha\in B$, $\alpha$ is true in the state $s_i$ of the automaton ${\mathcal A}$ if and only if $\alpha(s_i)=1$. Algebraic tools =============== For the above-mentioned construction of a suitable logic with a transition functor and the recovery of the given relation, we recall the necessary algebraic tools and results in this section. Let $S$ be a non-empty set. Every subset $R\subseteq S\times S$ is called a [*relation on $S$*]{} and we say that the couple $(S, R)$ is a [*transition frame*]{}. The fact that $(s, t)\in R$ for $s, t\in S$ is expressed by the notation $s \mathrel{R} t$. Let $A$ be a non-empty set. A relation on $A$ is called a [*partial order*]{} if it is reflexive, antisymmetric and transitive. In what follows, partial order will be denoted by the symbol $\leq$ and the pair $\mathbf A=(A;\leq)$ will be referred to as a [*partially ordered set*]{} (shortly a [*poset*]{}). Let $(A;\leq)$ and $(B;\leq)$ be partially ordered sets, $f, g\colon A\to B$ mappings. We write $f\leq g$ if $f(a)\leq g(a)$, for all $a\in A$. A mapping $f$ is called [*order-preserving*]{} or [*monotone*]{} if $a, b \in A$ and $a \leq b$ together imply $f(a) \leq f(b)$ and [*order-reflecting*]{} if $a, b \in A$ and $f(a) \leq f(b)$ together imply $a \leq b$.
A bijective order-preserving and order-reflecting mapping $f\colon A\to B$ is called an [*isomorphism*]{} and then we say that the partially ordered sets $(A;\leq)$ and $(B;\leq)$ are [*isomorphic*]{}. Let $(A;\leq)$ and $(B;\leq)$ be partially ordered sets. A mapping $f\colon A\to B$ is called [*residuated*]{} if there exists a mapping $g\colon B\to A$ such that $f(a)\leq b\ \text{if and only if}\ a\leq g(b)$ for all $a\in A$ and $b\in B$. In this situation, we say that $f$ and $g$ form a [*residuated pair*]{} or that the pair $(f,g)$ is a (monotone) [*Galois connection*]{}. The role of Galois connections is essential for our constructions. If a partially ordered set $\mathbf A$ has both a bottom and a top element, it will be called [*bounded*]{}; the appropriate notation for a bounded partially ordered set is $(A;\leq,0,1)$. Let $(A;\leq,0,1)$ and $(B;\leq,0,1)$ be bounded partially ordered sets. A [*morphism*]{} $f\colon A\to B$ [*of bounded partially ordered sets*]{} is an order, top element and bottom element preserving map. We can take the following useful result from [@dyn Observation 1]. \[obsik\] Let $\mathbf A$ and $\mathbf M$ be bounded partially ordered sets, $S$ a non-empty set, and $h_{s}\colon A\to M, s\in S$, morphisms of bounded partially ordered sets. The following conditions are equivalent: 1. $((\forall s \in S)\, h_{s}(a)\leq h_{s}(b))\implies a\leq b$ for any elements $a,b\in A$; 2. The map $i_{{}{\mathbf A}}^{S}\colon A \to M^{S}$ defined by $i_{{}{\mathbf A}}^{S}(a)=(h_s(a))_{s\in S}$ for all $a\in A$ is order reflecting. We then say that $\{h_{s}\colon A\to M; s\in S\}$ is a [*full set of order-preserving maps with respect to*]{} $M$. Note that we may in this case identify $\mathbf A$ with a bounded subposet of $\mathbf{M}^S$ since $i_{{}{\mathbf A}}^{S}$ is an order reflecting morphism alias [*embedding*]{} of bounded partially ordered sets. 
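Residuated pairs arise naturally from a relation $R$ on a set $S$: on the powerset of $S$, ordered by inclusion, the existential image $f$ along $R$ and the map $g$ sending $B$ to the set of states all of whose $R$-successors lie in $B$ form a Galois connection. The following sketch (illustrative data, not from the paper) checks the defining condition $f(A)\subseteq B$ if and only if $A\subseteq g(B)$ exhaustively on a three-element set; this is precisely the Boolean instance of the transition functors constructed in the next section.

```python
# Illustrative check that direct image f and "universal preimage" g along a
# relation R on S form a residuated pair on the powerset of S ordered by
# inclusion: f(A) <= B iff A <= g(B).
from itertools import chain, combinations

S = {0, 1, 2}
R = {(0, 1), (1, 0), (1, 2), (2, 2)}

def f(A):
    # existential image: states reachable from A in one R-step
    return {t for (s, t) in R if s in A}

def g(B):
    # states all of whose R-successors lie in B (vacuously true without successors)
    return {s for s in S if all(t in B for (u, t) in R if u == s)}

def subsets(X):
    X = list(X)
    return chain.from_iterable(combinations(X, k) for k in range(len(X) + 1))

# Exhaustive verification of the adjunction f(A) <= B  iff  A <= g(B)
assert all((f(set(A)) <= set(B)) == (set(A) <= g(set(B)))
           for A in subsets(S) for B in subsets(S))
```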
For any $s\in S$ and any $p=(p_t)_{t\in S}\in {M}^S$ we denote by $p(s)$ the $s$-th projection $p_s$. Note that $i_{{}{\mathbf A}}^{S}(a)(s)=h_s(a)$ for all $a\in A$ and all $s\in S$. Transition frames and transition operators ========================================== The aim of this section is to recall a construction of two operators on partially ordered sets derived by means of a given relation and a construction of relations induced by these operators. For more details see the paper [@transop]. In what follows, let $\mathbf{M}=(M;\leq,0, 1)$ be a bounded partially ordered set and the bounded subposets ${\mathbf{A}}=(A;\leq,0, 1)$ and ${\mathbf{B}}=(B;\leq,0, 1)$ of $\mathbf{M}^S$ will play the role of possibly different logics of propositions pertaining to our automaton ${\mathcal A}$, a corresponding set of states $S$, and a state-transition relation $R$ on $S$. The operator $T_R\colon B\to {M}^S$ will prescribe to a proposition $b\in B$ about ${\mathcal A}$ a new proposition $T_R(b)\in {M}^S$ such that the truth value of $T_R(b)$ in state $s\in S$ is the greatest truth value that is smaller than or equal to the corresponding truth values of $b$ in all states that can be reached from $s$. If there is no such state, the truth value of $T_R(b)$ in state $s$ will be $1$. Similarly, the operator $P_R\colon A\to {M}^S$ will prescribe to a proposition $a\in A$ about ${\mathcal A}$ a new proposition $P_R(a)\in {M}^S$ such that the truth value of $P_R(a)$ in state $t\in S$ is the smallest truth value that is greater than or equal to the corresponding truth values of $a$ in all states such that $t$ can be reached from them. If there is no such state, the truth value of $P_R(a)$ in state $t$ will be $0$.
Specifically, if $M=\{ 0,1\}$ then $T_R(b)$ is true in state $s$ if and only if there is no state $t\in S$ that can be reached from $s$ and $b$ is false in $t$, and $P_R(a)$ is false in state $t$ if and only if there is no state $s\in S$ such that $t$ can be reached from $s$ and $a$ is true in $s$. Consider a complete lattice $\mathbf M=(M;\leq,0, 1)$ and let $\mathbf{A}=({A};\leq, 0,1)$ and $\mathbf{B}=({B};\leq,$ $0,1)$ be bounded partially ordered sets with a full set $S$ of morphisms of bounded partially ordered sets into a non-trivial complete lattice $\mathbf{M}$. We may assume that $\mathbf{A}$ and $\mathbf{B}$ are bounded subposets of $\mathbf{M}^{S}$. Further, let $(S,R)$ be a transition frame. Define mappings $P_R:A\to {M}^S$ and $T_R:B\to {M}^S$ as follows: For all $b\in B$ and all $s\in S$, $$\begin{array}{c}\mbox{$T_R(b)(s)=\bigwedge_{M}\{b(t)\mid s R t\} $}\phantom{.} \tag{$\star$} \end{array} \label{eqn:RTD}$$ and, for all $a\in A$ and all $t\in S$, $$\begin{array}{c} \mbox{${P}_R(a)(t)=\bigvee_{M}\{a(s)\mid s R t\} $}{.} \tag{$\star\star$} \end{array} \label{eqn:RPD}$$ Then we say that ${T}_R$ ($P_R$) is an [*upper transition functor*]{} ([*lower transition functor*]{}) [*constructed by means of the transition frame*]{} $(S,R)$, respectively. We have that ${T}_R$ is an order-preserving map such that $T_R(1)=1$ and similarly, ${P}_R$ is an order-preserving map such that $P_R(0)=0$. As an illustration of our approach we present the following example. \[expend2\] Consider the automaton ${\mathcal A}$ and the set of propositions $B$ of Example \[firef\]. Then $R=\{x_1\}\times R_{x_1}\cup \{x_2\}\times R_{x_2}$ where $R_{x_1}=\{(s_1, s_2), (s_2, s_1), (s_3,s_3)\}\ \text{and}\ R_{x_2}=\{(s_2, s_3), (s_3, s_3)\}.
$ Using our formulas $(\star)$ and $(\star\star)$, we can compute the upper transition functors $T_{R_{x_1}}$, $T_{R_{x_2}}\colon B\to 2^{S}$ and the lower transition functors $P_{R_{x_1}}$, $P_{R_{x_2}}\colon B\to 2^{S}$ as follows:
$T_{R_{x_1}}(0)=0$, $T_{R_{x_1}}(p)=q$, $T_{R_{x_1}}(q)=p$, $T_{R_{x_1}}(r)=r$, $T_{R_{x_1}}(p')=q'$, $T_{R_{x_1}}(q')=p'$, $T_{R_{x_1}}(r')=r'$, $T_{R_{x_1}}(1)=1$;
$T_{R_{x_2}}(0)=p$, $T_{R_{x_2}}(p)=p$, $T_{R_{x_2}}(q)=p$, $T_{R_{x_2}}(r)=1$, $T_{R_{x_2}}(p')=1$, $T_{R_{x_2}}(q')=1$, $T_{R_{x_2}}(r')=p$, $T_{R_{x_2}}(1)=1$;
$P_{R_{x_1}}(0)=0$, $P_{R_{x_1}}(p)=q$, $P_{R_{x_1}}(q)=p$, $P_{R_{x_1}}(r)=r$, $P_{R_{x_1}}(p')=q'$, $P_{R_{x_1}}(q')=p'$, $P_{R_{x_1}}(r')=r'$, $P_{R_{x_1}}(1)=1$;
$P_{R_{x_2}}(0)=0$, $P_{R_{x_2}}(p)=0$, $P_{R_{x_2}}(q)=r$, $P_{R_{x_2}}(r)=r$, $P_{R_{x_2}}(p')=r$, $P_{R_{x_2}}(q')=r$, $P_{R_{x_2}}(r')=r$, $P_{R_{x_2}}(1)=r$.
E.g., $T_{R_{x_1}}(q)=p$ means that if the SkyLine train is in Terminal 1 then, after any possible transition under the action that the passengers entered the SkyLine train, it will change to Terminal 2, and $T_{R_{x_1}}(q')=p'$ means that if the SkyLine train is in Terminal 2 or in the engine shed then, after any possible transition under the action that the passengers entered the SkyLine train, it will be in Terminal 1 or in the engine shed.
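For the two-element lattice $M=\{0,1\}$ a proposition can be identified with the set of states in which it is true, and the formulas $(\star)$ and $(\star\star)$ can then be evaluated mechanically: the meet becomes "for all successors", the join "for some predecessor". The following sketch (illustrative, not from the paper) reproduces several entries of the tables above for the SkyLine data.

```python
# Sketch of T_R and P_R from (*) and (**) for M = {0,1}: a proposition is the
# set of states where it holds; empty meet = 1, empty join = 0.
S = {"s1", "s2", "s3"}
R_x1 = {("s1", "s2"), ("s2", "s1"), ("s3", "s3")}
R_x2 = {("s2", "s3"), ("s3", "s3")}

def T(R, b):
    # T_R(b)(s) = 1 iff b holds in every state reachable from s
    return {s for s in S if all(t in b for (u, t) in R if u == s)}

def P(R, a):
    # P_R(a)(t) = 1 iff a holds in some state from which t is reachable
    return {t for t in S if any(s in a for (s, u) in R if u == t)}

p, q, r = {"s1"}, {"s2"}, {"s3"}
```

For instance, `T(R_x1, q)` yields `p` and `P(R_x2, S)` yields `r`, matching the tables above.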
Similarly, $T_{R_{x_2}}(1)=1$ means that if the SkyLine train is in at least one state of $S$ then, after any possible transition under the action that the SkyLine train has to be moved to the engine shed, it will be in at least one state of $S$, and $T_{R_{x_2}}(p)=p$ means that if the SkyLine train is in Terminal 1 then, after any possible transition under the action that the SkyLine train has to be moved to the engine shed (which can be done only at Terminal 2 or at the engine shed), it will stay in Terminal 1. Let $P:A\to B$ and $T:B\to A$ be morphisms of partially ordered sets, $(A;\leq)$ and $(B;\leq)$ subposets of $\mathbf{M}^{S}$. Let us define the relations $$R_T=\{(s, t)\in S\times S\mid (\forall b\in B) (T(b)(s)\leq b(t))\} \tag{$\dagger$} \label{eqn:RT}$$ and $$R^{P}=\{(s, t)\in S\times S\mid (\forall a\in A) (a(s)\leq P(a)(t))\}.\tag{$\dagger\dagger$} \label{eqn:RP}$$ The relations $R_T$ and $R^{P}$ on $S$ will be called the [*upper $T$-induced relation by ${\mathbf M}$*]{} (shortly [*$T$-induced relation by ${\mathbf M}$*]{}) and [*lower $P$-induced relation by ${\mathbf M}$*]{} (shortly [*$P$-induced relation by ${\mathbf M}$*]{}), respectively. \[expend3\] Consider the automaton ${\mathcal A}$ of Example \[expend1\]. Let $P$ be a restriction of the operator $P_{R_{x_2}}$ of Example \[expend2\] and let $T$ be a restriction of the operator $T_{R_{x_2}}$ of the same example. Let us compute $R_T$ and $R^{P}$. We have $R_T=R^{P}=\{(s_2, s_3), (s_3, s_3)\}$. Hence the transition relation $R_{x_2}$ of Example \[expend2\] coincides with our induced transition relations $R_T$ and $R^{P}$. We can see from above that the operator $T_{R_{x_2}}$ bears the maximal amount of information about the transition relation $R_{x_2}$ on the subposet of all fixpoints of $P_{R_{x_2}}\circ T_{R_{x_2}}$. The same conclusion holds for the operator $P_{R_{x_2}}$.
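The induced relations $(\dagger)$ and $(\dagger\dagger)$ can likewise be computed by brute force when $M=\{0,1\}$ and $B$ is the full powerset $\{0,1\}^S$ (which, for the three-element state set, is exactly the eight-element Boolean algebra of the example). The following sketch (illustrative, not from the paper) recovers $R_{x_2}$ from its transition functors, as in Example \[expend3\].

```python
# Sketch of the induced relations (†) and (††) for M = {0,1}, SkyLine data.
from itertools import combinations

S = ["s1", "s2", "s3"]
R_x2 = {("s2", "s3"), ("s3", "s3")}
B = [set(c) for k in range(len(S) + 1) for c in combinations(S, k)]  # {0,1}^S

def T(R, b):
    return {s for s in S if all(t in b for (u, t) in R if u == s)}

def P(R, a):
    return {t for t in S if any(s in a for (s, u) in R if u == t)}

# (†)  R_T  = {(s,t) | for every b: T(b)(s) <= b(t)}
R_T = {(s, t) for s in S for t in S
       if all((s in T(R_x2, b)) <= (t in b) for b in B)}
# (††) R^P = {(s,t) | for every a: a(s) <= P(a)(t)}
R_P = {(s, t) for s in S for t in S
       if all((s in a) <= (t in P(R_x2, a)) for a in B)}
```

Both computed relations equal $\{(s_2,s_3),(s_3,s_3)\}$, i.e., $R_{x_2}$ is recovered.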
Now, let $(S, R)$ be a transition frame and $T_R$, $P_R$ functors constructed by means of the transition frame $(S,R)$. We can ask under what conditions the relation $R$ coincides with the relation $R_{T_R}$ constructed as in ($\dagger$) or with the relation $R^{P_R}$ constructed as in ($\dagger\dagger$). If this is the case we say that $R$ [*is recoverable from*]{} $T_R$ or that $R$ [*is recoverable from*]{} $P_R$. We say that $R$ is [*recoverable*]{} if it is recoverable both from $T_R$ and $P_R$. \[expend4\] Consider the automaton ${\mathcal A}$ of Example \[expend1\]. Let us put $A=B=\{0, 1\}^S$. Let $P\colon \{0, 1\}^S\to \{0, 1\}^S$ and $T\colon \{0, 1\}^S\to \{0, 1\}^S$ be morphisms of partially ordered sets given as follows:

$T(0)=0$, $T(p)=q$, $T(q)=p$, $T(r)=r$, $T(p')=q'$, $T(q')=p'$, $T(r')=r'$, $T(1)=1$;

$P(0)=0$, $P(p)=q$, $P(q)=p$, $P(r)=r$, $P(p')=q'$, $P(q')=p'$, $P(r')=r'$, $P(1)=1$.

Note that $P$ coincides with the operator $P_{R_{x_1}}$ of Example \[expend2\], and $T$ coincides with the operator $T_{R_{x_1}}$ of the same example. We have $R_T=R^{P}=\{(s_1, s_2), (s_2, s_1),(s_3, s_3)\}$. The transition relation $R_{x_1}$ of Example \[expend1\] coincides with our induced transition relations $R_T$ and $R^{P}$. The connection between relations induced by means of transition functors $T$ and $P$ is shown in the following lemma and theorem. [@transop]\[xreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $S$ a non-empty set such that $\mathbf{A}$ and $\mathbf{B}$ are bounded subposets of $\mathbf{M}^{S}$. Let $P:A\to {M}^{S}$ and $T:B\to {M}^{S}$ be morphisms of partially ordered sets such that, for all $a\in A$ and all $b\in B$, $$P(a)\leq b\ {\Longleftrightarrow}\ a\leq T(b).$$ 1.
If $P(A)\subseteq B$ then $R_T\subseteq R^{P}$. 2. If $T(B)\subseteq A$ then $R^{P}\subseteq R_T$. 3. If $P(A)\subseteq B$ and $T(B)\subseteq A$ then $R_T= R^{P}$. Among other things, the following theorem shows that if a given transition relation $R$ can be recovered by the upper transition functor then, under natural conditions, it can be recovered by the lower transition functor and vice versa. [@transop]\[reldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $(S,R)$ a transition frame. Let $\mathbf{A}$ and $\mathbf{B}$ be bounded subposets of $\mathbf{M}^{S}$. Let $P_R:A\to {M}^{S}$ and $T_R:B\to {M}^{S}$ be functors [constructed by means of the transition frame]{} $(S,R)$. Then, for all $a\in A$ and all $b\in B$, $$P_R(a)\leq b\ {\Longleftrightarrow}\ a\leq T_R(b).$$ Moreover, the following holds. 1. Suppose that, for all $t\in S$, there exists an element $b^t\in B$ such that, for all $s\in S$ with $(s,t)\notin R$, we have $\bigwedge_{M}\{u(b^{t})\mid s R u\}\not\leq t(b^{t})\not =1$. Then $R=R_{T_R}$. 2. Suppose that, for all $s\in S$, there exists an element $a^s\in A$ such that, for all $t\in S$ with $(s,t)\notin R$, we have $\bigvee_{M}\{u(a^{s})\mid u R t\}\not\geq s(a^{s})\not =0$. Then $R=R^{P_R}$. 3. If $R=R_{T_R}$ and $T_R(B)\subseteq A$ then $R=R_{T_R}=R^{P_R}$. 4. If $R=R^{P_R}$ and $P_R(A)\subseteq B$ then $R=R_{T_R}=R^{P_R}$. The following corollary of Theorem \[reldreprest\] shows that if the set $B$ of propositions on the system $(B,S)$ is large enough, i.e., if it contains the full set $\{0,1\}^S$, then the transition relation $R$ can be recovered by each of the transition functors. [@transop]\[fcorreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $(S,R)$ a transition frame. Let $\mathbf{B}$ be a bounded subposet of $\mathbf{M}^{S}$ such that $\{0,1\}^{S}\subseteq B$. Let $P_R:B\to {M}^{S}$ and $T_R:B\to {M}^{S}$ be functors [constructed by means of the transition frame]{} $(S,R)$. Then $R=R^{P_R}=R_{T_R}$.
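In the two-valued case the corollary can be checked exhaustively for small state sets. The sketch below is an illustration, not part of the paper: it enumerates all $2^{9}$ relations on a three-element set, takes $B=\{0,1\}^{S}$ (propositions encoded as subsets), and confirms that every relation is recoverable from both of its transition functors.

```python
from itertools import combinations, product

# Brute-force check of the corollary for |S| = 3 with B = {0,1}^S
# (an illustration under an assumed encoding, not the authors' code).
S = ['a', 'b', 'c']
pairs = [(s, t) for s in S for t in S]
B = [frozenset(c) for k in range(len(S) + 1) for c in combinations(S, k)]

def T(R, b):   # formula (*): empty meet = 1
    return frozenset(s for s in S if all(t in b for (u, t) in R if u == s))

def P(R, a):   # formula (**): empty join = 0
    return frozenset(t for t in S if any(u in a for (u, v) in R if v == t))

def upper_induced(R):   # formula (dagger) applied to T_R
    return {(s, t) for (s, t) in pairs
            if all(s not in T(R, b) or t in b for b in B)}

def lower_induced(R):   # formula (dagger dagger) applied to P_R
    return {(s, t) for (s, t) in pairs
            if all(s not in a or t in P(R, a) for a in B)}

# every relation R on S is recovered by both functors
for bits in product([0, 1], repeat=len(pairs)):
    R = {pr for pr, bit in zip(pairs, bits) if bit}
    assert upper_induced(R) == R == lower_induced(R)
```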
The labelled transition functor characterizing the automaton ============================================================ The aim of this section is to derive the logic $\mathbf B$ with transition functors corresponding to a given automaton ${\mathcal A}=(X,S,R)$. This logic $\mathbf B$ will be represented via the partially ordered set of its propositions. In the rest of the paper, truth-values of our logic $\mathbf B$ will be considered to be from the complete lattice $\mathbf M$. Thus $\mathbf B$ will be a bounded subposet of ${\mathbf M}^S$ for the complete lattice ${\mathbf M}$ of truth-values. Let us consider an automaton ${\mathcal A}=(X,S,R)$. Clearly, $R$ can be written in the following form $$R=\bigcup_{x\in X}\{x\}\times R_{x}$$ where $R_x\subseteq S\times S$ for all $x\in X$. Hence, for all $x\in X$, using our formulas $(\star)$ and $(\star\star)$, we obtain the upper transition functor $T_{R_x}\colon B\to M^{S}$ and the lower transition functor $P_{R_x}\colon B\to M^{S}$. It follows that we have functors $T_R=(T_{R_{x}})_{x\in X}\colon B\to (M^{S})^{X}$ and $P_R=(P_{R_{x}})_{x\in X}\colon B\to (M^{S})^{X}$. We say that $T_R$ is the [*labelled upper transition functor constructed by means of ${\mathcal A}$*]{} and $P_R$ is the [*labelled lower transition functor constructed by means of ${\mathcal A}$*]{}. Note that any mapping $T\colon B\to (M^{S})^{X}$ corresponds uniquely to a mapping $\widetilde{T}\colon X\times B\to M^{S}$ such that, for all $x\in X$, $T=(\widetilde{T}(x,-))_{x\in X}$. Hence, $T_R$ and $P_R$ will play the role of our transition functor. Now, let $P=(P_x)_{x\in X}:B\to ({M}^{S})^{X}$ and $T=(T_x)_{x\in X}:B\to ({M}^{S})^{X}$ be morphisms of partially ordered sets. For all $x\in X$, let $R^{P_x}$ be the lower $P_x$-induced relation by $\mathbf{M}$ and $R_{T_x}$ be the upper $T_x$-induced relation by $\mathbf{M}$. 
Then $R^{P}=\bigcup_{x\in X}\{ x\}\times R^{P_x}$ is called the [*lower $P$-induced state-transition relation*]{} and $R_{T}=\bigcup_{x\in X}\{ x\}\times R_{T_x}$ is called the [*upper $T$-induced state-transition relation*]{}. The automaton ${\mathcal A}^{P}=(X,S,R^{P})$ is said to be the [*lower $P$-induced automaton*]{} and the automaton ${\mathcal A}_{T}=(X,S,R_{T})$ is said to be the [*upper $T$-induced automaton*]{}. We say that the automaton ${\mathcal A}$ [*is recoverable from*]{} $T_R$ ($P_R$) if, for all $x\in X$, $R_x$ [is recoverable from]{} $T_{R_x}$ ($P_{R_x}$), i.e., if ${\mathcal A}={\mathcal A}_{T_R}$ (${\mathcal A}={\mathcal A}^{P_R}$). The following results follow immediately from Lemma \[xreldreprest\], Theorem \[reldreprest\] and Corollary \[fcorreldreprest\]. \[labxreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $S, X$ non-empty sets such that $\mathbf{B}$ is a bounded subposet of $\mathbf{M}^{S}$. Let $P:B\to ({M}^{S})^{X}$ and $T:B\to ({M}^{S})^{X}$ be morphisms of partially ordered sets such that, for all $a, b\in B$ and all $x\in X$, $$P_{x}(a)\leq b\ {\Longleftrightarrow}\ a\leq T_{x}(b).$$ 1. If $P(B)\subseteq B^{X}$ then $R_T\subseteq R^{P}$. 2. If $T(B)\subseteq B^{X}$ then $R^{P}\subseteq R_T$. 3. If $P(B)\subseteq B^{X}$ and $T(B)\subseteq B^{X}$ then $R_T= R^{P}$ and ${\mathcal A}_{T}={\mathcal A}^{P}$. Hence, using Theorem \[labxreldreprest\], we can ask whether the functors computed by $(\star)$ and $(\star\star)$ can recover a given relation $R$ on the set of states. The answer is in the following theorem. \[relxxxdreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and $S, X$ non-empty sets equipped with a set of labelled transitions $R\subseteq X\times S\times S$. Let $\mathbf{B}$ be a bounded subposet of $\mathbf{M}^{S}$. Let $P_R\colon B\to (M^{S})^{X}$ and $T_R:B\to (M^{S})^{X}$ be labelled transition functors [constructed by means of]{} $R$. 
Then, for all $a, b\in B$ and all $x\in X$, $$P_{R_x}(a)\leq b\ {\Longleftrightarrow}\ a\leq T_{R_x}(b).$$ Moreover, the following holds. 1. If $R=R_{T_R}$ and $T_R(B)\subseteq B^{X}$ then $R=R_{T_R}=R^{P_R}$. 2. If $R=R^{P_R}$ and $P_R(B)\subseteq B^{X}$ then $R=R_{T_R}=R^{P_R}$. The following corollary illustrates the situation in the case when our partially ordered set $\mathbf{B}$ of propositions is large enough, i.e., the case when $\{0,1\}^{S}\subseteq B$. \[labfcorreldreprest\] Let $\mathbf{M}$ be a non-trivial complete lattice and ${\mathcal A}=(X,S,R)$ an automaton. Let $\mathbf{B}$ be a bounded subposet of $\mathbf{M}^{S}$ such that $\{0,1\}^{S}\subseteq B$. Then the automaton ${\mathcal A}$ is recoverable both from $P_R$ and ${T_R}$. We can illustrate previous results in the following example. \[example2\]\[ex2\] Consider the automaton ${\mathcal A}$, the set of propositions $B$ and the state-transition relation $R$ of Example \[firef\]. From Example \[expend2\] we know the labelled upper transition functor $T_R=(T_{R_{x_1}}, T_{R_{x_2}})$ and the labelled lower transition functor $P_R=(P_{R_{x_1}}, P_{R_{x_2}})$ from $B$ to $(2^{S})^{X}$. Since $B=2^{S}$ we have $T_{R_{x_1}}(B)\cup T_{R_{x_2}}(B) \subseteq B$ and $P_{R_{x_1}}(B)\cup P_{R_{x_2}}(B) \subseteq B$. Now, we use $T_R$ for computing the transition relations $R_{T_{R_{x_1}}}$ and $R_{T_{R_{x_2}}}$ (by the formula $(\dagger)$ and Example \[expend4\]) and $P_R$ for computing the transition relations $R^{P_{R_{x_1}}}$ and $R^{P_{R_{x_2}}}$ (by the formula $(\dagger\dagger)$ and Example \[expend4\]). We obtain by Corollary \[fcorreldreprest\] that $R_{T_{R_{x_1}}}=R^{P_{R_{x_1}}}=R_{x_1}$ and $R_{T_{R_{x_2}}}=R^{P_{R_{x_2}}}=R_{x_2}$. It follows that $R_{T_R}=R^{P_{R}}=\{x_1\}\times R_{T_{R_{x_1}}}\cup \{x_2\}\times R_{T_{R_{x_2}}}=R$, i.e., our given state-transition relation $R$ is simultaneously recoverable by the transition functors $T_R$ and $P_R$. 
Hence these functors are characteristics of the triple $(B,X,S)$. Constructions of automata ========================= In systems theory, a [*synthesis*]{} usually means the task of constructing an automaton ${\mathcal A}$ which realizes a dynamic process that is at least partially known to the user. Hence, we are given a description of this dynamic process and we know the set $X$ of inputs. Our task is to set up the set $S$ of states and a relation $R$ on $S$ labelled by elements from $X$ such that the constructed automaton $(X, S, R)$ induces the logic, i.e., the partially ordered set of propositions, which corresponds to the original description. The algebraic tools collected in the previous sections enable us to solve the mentioned task. In what follows we present a construction of $S$ and $R$, provided that our logic with the transition functor representing the dynamics of our system is given. As in the previous section, our logic ${\mathbf B}$ will be considered to be a bounded subposet $\mathbf B$ of a power ${\mathbf M}^S$ where ${\mathbf M}$ is a complete lattice of truth-values. Our logic ${\mathbf B}$ is equipped with a transition functor $T:B\to (M^{S})^{X}$ where $X$ is a set of possible inputs. We ask that either $T=T_{R}$ or $T=P_{R}$. Depending on the respective type of our considered logic and on the properties of $T$ we will present some partial solutions to this task. Automata via partially ordered sets {#autopres} ----------------------------------- Recall (see e.g. [@Markowsky]) that, for any bounded partially ordered set $\mathbf{B}=({B};\leq, 0,1)$, we have a full set $S_{\mathbf B}$ of morphisms of bounded partially ordered sets into the two-element Boolean algebra considered as a bounded partially ordered set ${\mathbf 2}=(\{0, 1\}; \leq, 0, 1)$. The elements $h_D: B\to \{0, 1\}$ of $S_{\mathbf B}$ (indexed by proper down-sets $D$ of $\mathbf{B}$) are morphisms of bounded partially ordered sets defined by the prescription ${h_{D}}(a)=0$ iff $a\in D$.
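For a small bounded poset, the set $S_{\mathbf B}$ and the embedding into ${\mathbf 2}^{S_{\mathbf B}}$ can be made concrete. The sketch below is an illustration under an assumed four-element poset (it is not taken from the paper): it enumerates the proper down-sets $D$, here taken as those with $0\in D$ and $1\notin D$ so that $h_D$ preserves the bounds, and checks that $a\leq b$ holds exactly when $h_D(a)\leq h_D(b)$ for every such $D$.

```python
from itertools import combinations

# Assumed example poset: 0 <= x, y <= 1 with x, y incomparable (not a chain).
B = ['0', 'x', 'y', '1']
leq = {('0', '0'), ('0', 'x'), ('0', 'y'), ('0', '1'),
       ('x', 'x'), ('x', '1'), ('y', 'y'), ('y', '1'), ('1', '1')}

def is_downset(D):
    # D is downward closed: b in D and a <= b imply a in D
    return all(a in D for b in D for a in B if (a, b) in leq)

# proper down-sets (0 in D, 1 not in D) index the morphisms h_D : B -> 2
S_B = [frozenset(D) for k in range(len(B) + 1) for D in combinations(B, k)
       if is_downset(frozenset(D)) and '0' in D and '1' not in D]

def h(D, a):          # h_D(a) = 0 iff a in D
    return 0 if a in D else 1

# the map a |-> (h_D(a))_{D in S_B} is an order-embedding of B into 2^{S_B}
for a in B:
    for b in B:
        embedded_leq = all(h(D, a) <= h(D, b) for D in S_B)
        assert embedded_leq == ((a, b) in leq)
```

Here $S_{\mathbf B}$ has four elements (the down-sets $\{0\}$, $\{0,x\}$, $\{0,y\}$, $\{0,x,y\}$), and the final loop verifies that the family $(h_D)_{D\in S_{\mathbf B}}$ separates exactly the order of $\mathbf{B}$.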
In other words, every bounded partially ordered set ${\mathbf B}$ can be embedded into a Boolean algebra ${\mathbf 2}^{S}$ for a certain set $S$ via the mapping $i_{{\mathbf B}}^{S}$. Hence, it seems promising to use the bounded partially ordered set ${\mathbf 2}=(\{0, 1\}; \leq, 0, 1)$ for the construction of our state-transition relation $R_T\subseteq X\times S_{\mathbf B} \times S_{\mathbf B}$. As mentioned at the beginning of this section, we are interested in a construction of an automaton ${\mathcal A}=(X,S,R)$ for a given set $X$ of inputs, determined by a certain partially ordered set of propositions. We cannot assume that this set of propositions is necessarily a Boolean algebra. In the previous part we supposed that this logic ${\mathbf B}$ is a bounded partially ordered set ${\mathbf B}=(B,\leq,0,1)$. Now, we are going to solve the situation when it is only a subset $C$ of $B$. \[boolgaldreprest\] Let $\mathbf{B}=({B};\leq, 0,1)$ be a bounded partially ordered set such that $\mathbf{B}$ is a bounded subposet of $2^{S_{\mathbf B}}$. Let $(C;\leq, 1)$ be a subposet of $\mathbf{B}$ containing $1$, and $X$ a non-empty set. Let $T=(T_x)_{x\in X}$ where $T_{x}\colon{}C\to 2^{S_{\mathbf B}}$ are morphisms of partially ordered sets such that $T_x(1)=1$ for all $x\in X$. Let $R_T$ be the upper $T$-induced state-transition relation and $T_{R_T}\colon{}B\to (2^{S_{\mathbf B}})^{X}$ be the labelled upper transition functor constructed by means of the upper $T$-induced automaton ${\mathcal A}_T=(X, S_{\mathbf B},R_T)$. Then, for all $b\in C$, $$T(b)=T_{R_T}(b).$$ Clearly, $T_{R_T}=((T_{R_T})_{x})_{x\in X}$ where $(T_{R_T})_{x}:B\to 2^{S_{\mathbf B}}$ are morphisms of partially ordered sets for all $x\in X$. We write $R_{T}=\bigcup_{x\in X}\{ x\}\times R_{T_x}$ where $R_{T_x}$, $x\in X$, are the upper $T_x$-induced relations by $\mathbf{2}$. Let us choose $b\in C$ and $x\in X$ arbitrarily, but fixed. We have to check that $T_x(b)=(T_{R_T})_{x}(b)$.
Assume that $s\in S_{\mathbf B}$. It is enough to verify that $T_x(b)(s)= \bigwedge\{b(t)\mid s R_{T_x} t\}$. Evidently, for all $t\in S_{\mathbf B}$ such that $s R_{T_x} t$, $T_x(b)(s) \leq b(t)$. Hence $T_x(b)(s)\leq \bigwedge\{b(t)\mid s R_{T_x} t\}$. To get the other inequality assume that $T_x(b)(s)< \bigwedge\{b(t)\mid s R_{T_x} t\}$. Then $T_x(b)(s)=0$ and $\bigwedge\{b(t)\mid s R_{T_x} t\}=1$. Put $V_{x}=\{z\in B\mid (\exists y\in C)(T_x(y)(s)=1\ \text{and}\ y\leq z)\}$. It follows that $b\notin V_x$ and $V_x$ is an upper set of ${\mathbf B}$ such that $1\in V_x$ (since $T_x(1)(s)=1(s)=1$). Let $W_x$ be a maximal proper upper set of ${\mathbf B}$ including $V_x$ such that $b\notin W_x$. Put $U_x=B\setminus W_x$. Then $U_x$ is a proper down-set, $0\in U_x$, ${h_{U_x}}(b)=0$ and ${h_{U_x}}(z)=1$ for all $z\in V_x$, i.e., ${h_{U_x}}\in S_{\mathbf B}$ such that $T_x(a)(s)\leq a({h_{U_x}})$ for all $a\in C$. But this yields that $s R_{T_x} h_{U_x}$, i.e., $1=\bigwedge\{b(t)\mid s R_{T_x} t\}\leq b({h_{U_x}})={h_{U_x}}(b)=0$, a contradiction. Using the relation $R^P$ instead of $R_T$, we can obtain a statement dual to Theorem \[boolgaldreprest\]. Automata via Boolean algebras {#autoboolpres} ----------------------------- As for bounded partially ordered sets we have that, for any Boolean algebra ${\mathbf B}=(B;\vee, \wedge, {}{'}, 0,$ $1)$, there is a full set $S_{\mathbf B}^{\text{bool}}$ of morphisms of Boolean algebras into the two-element Boolean algebra $\mathbf{2}=(\{0, 1\};\vee, \wedge, {}{'}, 0, 1)$. In what follows, we will modify our Theorem \[boolgaldreprest\] for the more special case when the considered subposet ${\mathbf C}$ is closed under finite infima. We are now ready to show under which conditions our transition functor can be recovered. \[fullbooldreprest\] Let $\mathbf{B}=({B};\vee, \wedge, {}{'}, 0,1)$ be a Boolean algebra such that $\mathbf{B}$ is a sub-Boolean algebra of ${\mathbf 2}^{S_{\mathbf B}^{\text{bool}}}$. 
Let ${\mathbf C}=(C;\leq, 1)$ be a subposet of $\mathbf{B}$ containing $1$ such that $x, y\in C$ implies $x\wedge y\in C$, and $X$ a non-empty set. Let $T=(T_x)_{x\in X}$ where $T_{x}:C\to 2^{S_{\mathbf B}^{\text{bool}}}$ are mappings preserving finite meets such that $T_x(1)=1$ for all $x\in X$. Let $R_T$ be the upper $T$-induced state-transition relation and $T_{R_T}\colon{}B\to (2^{S_{\mathbf B}^{\text{bool}}})^{X}$ be the labelled upper transition functor constructed by means of the upper $T$-induced automaton ${\mathcal A}_T=(X, S_{\mathbf B}^{\text{bool}},R_T)$. Then, for all $b\in C$, $$T(b)=T_{R_T}(b).$$ Let us choose $b\in C$ and $x\in X$ arbitrarily, but fixed. Assume that $s\in S_{\mathbf B}^{\text{bool}}$. As in Theorem \[boolgaldreprest\] it is enough to verify that $T_x(b)(s)= \bigwedge\{b(t)\mid s R_{T_x} t\}$. By the same considerations as in the proof of Theorem \[boolgaldreprest\] we have $T_x(b)(s)\leq \bigwedge\{b(t)\mid s R_{T_x} t\}$. To get the other inequality assume that $T_x(b)(s)< \bigwedge\{b(t)\mid s R_{T_x} t\}$. Then $T_x(b)(s)=0$ and $\bigwedge\{b(t)\mid s R_{T_x} t\}=1$. Put $V_{x}=\{z\in B\mid (\exists y\in C)(T_x(y)(s)=1\ \text{and}\ y\leq z)\}$. It follows that $b\notin V_x$ and $V_x$ is a filter of ${\mathbf B}$ such that $1\in V_x$ (since $y, z\in V_x\cap C$ implies $T_x(y\wedge z)(s)=(T_x(y)\wedge T_x(z))(s)=T_x(y)(s)\wedge T_x(z)(s)=1\wedge 1=1$ and $T_x(1)(s)=1(s)=1$). Let $W_x$ be a maximal proper filter of ${\mathbf B}$ including $V_x$ such that $b\notin W_x$. Then $W_x$ is an ultrafilter of ${\mathbf B}$. The ultrafilter $W_x$ determines a map $g_{W_x}\in S_{{\mathbf B}}^{\text{bool}}$ such that ${g_{W_x}}(b)=0$ and ${g_{W_x}}(z)=1$ for all $z\in V_x$, i.e., ${g_{W_x}}\in S_{{\mathbf B}}^{\text{bool}}$ is such that $T_x(a)(s)\leq {g_{W_x}}(a)=a({g_{W_x}})$ for all $a\in C$. This yields that $s R_{T_x} g_{W_x}$, i.e., $1=\bigwedge\{b(t)\mid s R_{T_x} t\}\leq b({g_{W_x}})={g_{W_x}}(b)=0$, a contradiction.
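The proof rests on the correspondence between ultrafilters of $\mathbf{B}$ and two-valued Boolean morphisms. For a finite Boolean algebra $\mathbf{B}=2^{S}$ this correspondence can be verified by brute force: every ultrafilter is principal, generated by an atom, so the morphisms in $S_{\mathbf B}^{\text{bool}}$ are exactly the evaluations at states, which is why $S=S_{\mathbf B}^{\text{bool}}$ in the running three-state example. A sketch, not from the paper:

```python
from itertools import combinations, product

# Enumerate all two-valued Boolean morphisms of B = 2^S for |S| = 3
# and check they are exactly the evaluations at states (an illustration).
S = ['s1', 's2', 's3']
B = [frozenset(c) for k in range(len(S) + 1) for c in combinations(S, k)]
top, bot = frozenset(S), frozenset()

def is_boolean_morphism(g):
    # g must preserve 0, 1, join, meet and complement
    if g[bot] != 0 or g[top] != 1:
        return False
    return all(g[a | b] == g[a] | g[b] and
               g[a & b] == g[a] & g[b] and
               g[top - a] == 1 - g[a]
               for a in B for b in B)

morphisms = []
for values in product([0, 1], repeat=len(B)):   # 2^8 candidate maps
    g = dict(zip(B, values))
    if is_boolean_morphism(g):
        morphisms.append(g)

# every two-valued Boolean morphism is evaluation at a single state
evaluations = [{b: int(s in b) for b in B} for s in S]
assert len(morphisms) == len(S)
assert all(g in evaluations for g in morphisms)
```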
The example below shows an application of Theorem \[fullbooldreprest\]. \[apthbool\] Consider again the set $S=\{s_1, s_2, s_3\}$ of states, the set $X=\{x_1, x_2\}$, and the set of propositions $B=2^{S}$ of Example \[firef\]. Recall that in this case $S=S_{\mathbf B}^{\text{bool}}$. Assume that $C=\{0, r, p', q', 1\}\subseteq B$ from the logic ${\mathbf B}$ of Example \[expend1\]. Assume further that our partially known transition operator $T$ from $C$ to $(2^{S})^{X}$ is given as follows:

$T_{{x_1}}$: $T_{{x_1}}(0)=0$, $T_{{x_1}}(1)=1$, $T_{{x_1}}(r)=r$, $T_{{x_1}}(p')=q'$, $T_{{x_1}}(q')=p'$;

$T_{{x_2}}$: $T_{{x_2}}(0)=p$, $T_{{x_2}}(1)=1$, $T_{{x_2}}(r)=1$, $T_{{x_2}}(p')=1$, $T_{{x_2}}(q')=1$.

Note that $T$ was chosen as a restriction of the operator $T_R$ from Example \[expend2\] on the set $C$. Then, by an easy computation, we obtain from ($\dagger$) that $R_{T}=\{x_1\}\times R_{T_{x_1}}\cup \{x_2\}\times R_{T_{x_2}}$ where $$R_{T_{x_1}}=\{(s_1, s_2), (s_2, s_1), (s_3,s_3)\}\ \text{and}\ R_{T_{x_2}}=\{(s_2, s_3), (s_3, s_3)\}.$$ From Theorem \[fullbooldreprest\] we have that $T$ is a restriction of the operator $T_{R_T}$ on the set $C$. Moreover, we can see that our state-transition relation $R$ from Example \[firef\] coincides with the induced state-transition relation $R_T$, i.e., our partially known transition operator $T$ gives us full information about the automaton ${\mathcal A}$ from Example \[firef\]. Conclusion ========== We have shown in our paper that to every automaton considered as an acceptor a certain dynamic logic can be assigned. The dynamic nature of an automaton is expressed via its transition relation labelled by inputs. The logic consists of propositions on the given automaton and its dynamic nature is expressed by means of the so-called transition functor.
However, this logic enables us to derive again a certain relation on the set of states which is labelled by inputs. The main question is whether the relation derived from the logic and the transition functor is faithful, i.e., whether it coincides with the original transition relation of the automaton. In fact, we have shown that if our set of propositions is large enough this recovery of the transition relation is possible. Several examples are included. Conversely, having a set $B$ of propositions that describes the behaviour of our intended automaton, a transition functor which expresses the dynamics of this process, and the set $X$ of inputs (coming from the environment), we presented a construction of a set of states $S$ and of a state-transition relation $R$ on $S$ such that the constructed automaton $(X,S,R)$ realizes the description given by the propositions. It is shown that for every large enough set of states the induced transition functor coincides with the original one. We believe that this theory enables us to consider automata from a different point of view which is closer to a logical treatment and which enables us to make estimations and forecasts of the behaviour of an automaton, particularly in a nondeterministic mode. The next task will be to investigate which type of automaton is determined by a suitable sort of logic. Acknowledgement {#acknowledgement .unnumbered} =============== This is a pre-print of an article published in International Journal of Theoretical Physics. The final authenticated version of the article is available online at: https://link.springer.com/article/10.1007/s10773-017-3311-0. [99]{} BISIO, A.— D’ARIANO G.M.— PERINOTTI P.—TOSINI A.: *Free Quantum Field Theory from Quantum Cellular Automata*, Foundations of Physics **45**, (2015), 1137–1152. *Lattices and ordered algebraic structures*, Springer-Verlag London Limited, 2005. BURGESS, J.: *Basic tense logic*, in: Handbook of Philosophical Logic, vol. II (D. M. Gabbay, F.
Günther, eds.), D. Reidel Publ. Comp., 1984, pp. 89–139. CHAJDA, I.—PASEKA, J.: *Dynamic Effect Algebras and their Representations*, Soft Computing **16**, (2012), 1733–1741. CHAJDA, I.—PASEKA, J.: *Tense Operators and Dynamic De Morgan Algebras*, In: Proc. 2013 IEEE 43rd Internat. Symp. Multiple-Valued Logic, Springer, (2013), 219–224. CHAJDA, I.—PASEKA, J.: [*Dynamic Order Algebras as an Axiomatization of Modal and Tense Logics*]{}, [International Journal of Theoretical Physics]{}, **54** (2015), 4327–4340. CHAJDA, I.—PASEKA, J.: [*Algebraic Approach to Tense Operators*]{}, Heldermann Verlag, Lemgo, 2015. CHAJDA, I.—PASEKA, J.: *Transition operators assigned to physical systems*, Reports on Mathematical Physics, **78** (2016), 259–280. DIXON, C.—BOLOTOV, A.—FISHER, M.: *Alternating automata and temporal logic normal forms*, Annals of Pure and Applied Logic, **135** (2005), 263–285. FISHER, M.: *An Introduction to Practical Formal Methods Using Temporal Logic*, John Wiley & Sons, 2011. GONZÁLEZ DE MENDÍVIL, J. R.—GARITAGOITIA, J. R.: *Determinization of fuzzy automata via factorization of fuzzy states*, Information Sciences **283** (2014), 165–179. KONUR, S.—FISHER, M.—SCHEWE, S.: *Combined model checking for temporal, probabilistic, and real-time logics*, Theoretical Computer Science **503** (2013), 61–88. MARKOWSKY, G.: *The representation of posets and lattices by sets*, Algebra Universalis [**11**]{} (1980), 173–192. , *The complementation problem for Büchi automata with applications to temporal logic*, Theoretical Computer Science, **49** (1987), 217–237. *An automata-theoretic approach to linear temporal logic*, in: Proceedings of the VIII Banff Higher Order Workshop, in: Lecture Notes in Computer Science, vol. 1043, Springer-Verlag, 1996, pp. 238–266. YONGMING LI: *Finite automata theory with membership values in lattices*, Information Sciences **181** (2011), 1003–1017.
[^1]: [Both authors acknowledge the support by a bilateral project New Perspectives on Residuated Posets financed by Austrian Science Fund (FWF): project I 1923-N25, and the Czech Science Foundation (GAČR): project 15-34697L]{}.
--- abstract: 'By using a hydrostatic pressure, we have successfully tuned the ground state and superconductivity in LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals. It is found that, with the increase of pressure, the original superconducting phase with $T_c$ $\sim$ 3.5 K can be tuned to a state with lower $T_c$, and then a new superconducting phase with $T_c$ $\sim$ 6.5 K emerges. Accompanied by this crossover, the ground state is switched from a semiconducting state to a metallic one. Accordingly, the normal state resistivity also shows a nonmonotonic change with the external pressure. Furthermore, by applying a magnetic field, the new superconducting state under pressure with $T_c$ $\sim$ 6.5 K is suppressed, and the normal state reveals a weak semiconducting feature again. These results illustrate a non-trivial relationship between the normal state property and superconductivity in this newly discovered superconducting system.' author: - 'Jianzhong Liu, Sheng Li, Yufeng Li, Xiyu Zhu$^{*}$, Hai-Hu Wen' title: 'Pressure Tuned Enhancement of Superconductivity and Change of Ground State Properties in LaO$_{0.5}$F$_{0.5}$BiSe$_2$ Single Crystals' --- According to the theory of Bardeen-Cooper-Schrieffer (BCS), superconductivity is achieved by the quantum condensation of Cooper pairs which are formed by the electrons with opposite momentum near the Fermi surface. The ground state when superconductivity is removed is thus naturally believed to be metallic. In some unconventional superconductors, such as cuprate, iron pnictide/phosphide, heavy fermion and organic superconductors, this may not be true. Recently, the BiS$_2$-based superconductors whose structures are similar to the cuprates[@Cuprates] and iron pnictides[@Iron-based], have been discovered and formed a new superconducting (SC) family. 
Many new SC compounds with the BiS$_2$ layer have been found, including Bi$_4$O$_4$S$_3$[@BiOS1; @Awana; @BiOS; @Li; @BiOS], REO$_{1-x}$F$_x$BiS$_2$ (RE=La, Nd, Ce, Pr and Yb)[@LaOBiS; @NdOBiS; @X.J; @CeOBiS; @PrOBiS; @Maple; @LnOBiS], Sr$_{1-x}$La$_x$FBiS$_2$[@SrLaFBiS] and La$_{1-x}$M$_x$OBiS$_2$ (M=Ti, Zr, Hf and Th)[@LaMOBiS], etc. Among these compounds, the high-pressure-synthesized LaO$_{0.5}$F$_{0.5}$BiS$_2$ was reported to have a maximum $T_c$ $\sim$ 10.6 K[@LaOBiS]. The basic band structure obtained by first-principles calculations indicates the presence of strong Fermi surface nesting at the wave vector ($\pi$,$\pi$)[@BiS; @theory; @WanXianGang; @theory; @Yildirim; @theory] when the doping is close to x=0.5 in, for example, LaO$_{1-x}$F$_x$BiS$_2$. Quite often the superconductivity is accompanied by a normal state with a clear semiconducting behavior for unknown reasons[@LaOBiS; @NdOBiS; @X.J; @CeOBiS; @PrOBiS; @Maple; @LnOBiS; @SrLaFBiS; @LaMOBiS]. In addition, possible triplet pairing and weak topological superconductivity were suggested based on renormalization-group numerical calculations[@triplet], but this mechanism is still much debated. Moreover, the experiments on the NdO$_{0.5}$F$_{0.5}$BiS$_2$ single crystals also reveal interesting discoveries concerning the SC mechanisms in this new system[@NdOBiS; @Single; @crystal; @Liu; @single; @crystal; @DL; @Feng; @ARPES; @Ding; @Hong; @ARPES; @HQ; @yuan]. By adjusting the lattice parameters and, intimately, the electronic band structure, high pressure has served as a very effective method, which can tune both the SC and normal states of superconductors. In this newly found BiS$_2$ family, high pressure has been recognized as an important tool to enhance both the superconducting volume and transition temperatures, except for Bi$_4$O$_4$S$_3$[@P; @BiOS; @LaOBiS; @Maple; @P; @La/CeOBiS; @Maple; @P; @Pr/NdOBiS; @Awana; @P; @SrLaFBiS; @Awana; @P; @CeOBiS; @up; @to; @18Gpa; @Awana; @P; @SrReFBiS2].
In particular, the SC transition temperature of the REO$_{1-x}$F$_x$BiS$_2$ (RE=La, Ce, Nd, Pr)[@P; @BiOS; @LaOBiS; @Maple; @P; @La/CeOBiS; @Maple; @P; @Pr/NdOBiS] and Sr$_{1-x}$RE$_x$FBiS$_2$ (RE = La, Ce, Nd, Pr, Sm)[@Awana; @P; @SrLaFBiS; @Awana; @P; @SrReFBiS2] systems was enhanced tremendously by applying hydrostatic pressure. Taking LaO$_{0.5}$F$_{0.5}$BiS$_2$ as an example, the $T_c$ of the sample can be increased from about 2 K under ambient pressure to $\sim$ 10 K under 2 GPa[@P; @BiOS; @LaOBiS]. In the Sr$_{1-x}$RE$_x$FBiS$_2$ (R= Ce, Nd, Pr, Sm) system, a non-SC sample at ambient pressure can also be tuned into a SC one with $T_c$ $\sim$ 10 K under a pressure of 2.5 GPa[@Awana; @P; @SrReFBiS2]. To understand the role of high pressure, X-ray diffraction measurements under pressure have been performed on the LaO$_{0.5}$F$_{0.5}$BiS$_2$ system and suggest a structural phase transition from a tetragonal phase ($P4/nmm$) to a monoclinic phase ($P21/m$) under pressure[@up; @to; @18Gpa]. Very recently, a new superconductor LaO$_{0.5}$F$_{0.5}$BiSe$_2$ with the same structure as LaO$_{0.5}$F$_{0.5}$BiS$_2$ was discovered with $T_c$ $\sim$ 3.5 K[@LaOBiSe; @LaOBiSe; @single; @crystal; @1; @LaOBiSe; @single; @crystal; @2]. It was reported that the electronic structure and Fermi surface in these two compounds are quite similar[@BiSe2; @theory]. Since the system is now selenium based, investigations of BiSe$_2$-based materials, preferably in the form of single crystals, are highly desirable. Furthermore, it is interesting to know how high pressure influences the superconductivity and the ground state properties of the BiSe$_2$-based superconductors. Here, we report the successful growth of LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals, and a systematic high-pressure study on two single crystals (hereafter named Sample-1 and Sample-2).
By increasing pressure, the ground state is switched from a semiconducting state to a metallic one; simultaneously, the original SC $T_c$ $\sim$ 3.5 K (at ambient pressure) initially drops down to about 2 K and finally increases with pressure. As the pressure reaches about 1.2$\pm$0.2 GPa, a new SC phase with higher $T_c$ appears. At about 2.17 GPa, the $T_c$ of the new SC phase reaches about 6.5 K. Accompanied by the change of SC transition temperatures, the normal state resistivity ($\rho_n$) decreases first and then increases with pressure. This non-monotonic pressure dependence of $T_c$ and the normal state resistivity in the present BiSe$_2$-based system is very different from that of the BiS$_2$-based family. Furthermore, the SC phase with higher T$_c$ can be suppressed by applying a magnetic field, and a weak semiconducting feature in the normal state emerges again when superconductivity is suppressed. All these results point to a competition between superconductivity and the underlying ground state associated with the semiconducting behavior. ![(Color online) (a) X-ray diffraction pattern for a LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal. The inset shows the back Laue X-ray diffraction pattern of a LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal. (b) Energy Dispersive X-ray microanalysis spectrum taken on a LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal. The inset shows the SEM photograph of the crystal with typical dimensions of about $1.4\times0.7\times0.04$ mm$^3$.[]{data-label="fig1"}](fig1.eps){width="8.5cm"} The LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals were grown using the flux method[@Liu; @single; @crystal]. Powders of La$_2$O$_3$, LaF$_3$, Bi$_2$Se$_3$, Se and La scraps (all 99.9% purity) were mixed stoichiometrically according to the formula LaO$_{0.5}$F$_{0.5}$BiSe$_2$. The mixed powder was ground together with CsCl/KCl powder (molar ratio CsCl : KCl : LaO$_{0.5}$F$_{0.5}$BiSe$_2$ = 12 : 8 : 1) and sealed in an evacuated quartz tube.
Then it was heated to 800$^\circ$C, held for 50 hours, and cooled at a rate of 3$^\circ$C/hour to 600$^\circ$C. Single crystals with lateral sizes of about 1 mm were obtained by washing with water. X-ray diffraction (XRD) measurements were performed on a Bruker D8 Advanced diffractometer with Cu-K$_\alpha$ radiation. DC magnetization measurements were carried out with a SQUID-VSM-7T (Quantum Design). ![(Color online) (a) Temperature dependence of resistivity for Sample-1 at various pressures in the temperature range 2 K to 300 K. The inset shows the magnetic susceptibility of a LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal in an applied field of 10 Oe ($\parallel$c-axis) under the ambient pressure. Both the magnetic susceptibility measured in zero-field-cooled (ZFC) and field-cooled (FC) modes are shown. (b) and (c) Enlarged views of the resistive transition in the temperature range 2 K to 10 K at various pressures for Sample-1 and Sample-2, respectively. The superconducting transitions are rather sharp at ambient and high pressures.[]{data-label="fig2"}](fig2.eps){width="8.2cm"} Measurements of resistivity under pressure were performed up to $\sim$ 2.3 GPa on a PPMS-16T (Quantum Design) using an HPC-33 piston-type pressure cell with the Quantum Design DC resistivity and AC transport options. The LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal, contacted in the standard four-probe configuration, was immersed in the pressure-transmitting medium (Daphne 7373) in a Teflon cap. Hydrostatic pressures were generated by a BeCu/NiCrAl clamped piston-cylinder cell. The pressure on the sample was determined by measuring the pressure-dependent $T_c$ of a high-purity Sn sample. In Fig. \[fig1\](a) we present the X-ray diffraction (XRD) pattern for the LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal. Only ($00l$) reflections are observed, yielding a $c$-axis lattice constant $c=14.05\pm0.03\AA$. The inset of Fig.
\[fig1\](a) shows the Laue diffraction pattern of the LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal. Bright and symmetric spots can be clearly observed, indicating good crystallinity. Energy dispersive X-ray spectroscopy (EDS) measurements were performed at an accelerating voltage of 20 kV and a working distance of 10 mm with a scanning electron microscope (Hitachi Co., Ltd.). A representative EDS result for a LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal is shown in Fig. \[fig1\](b), and the composition of the single crystal can be roughly expressed as LaO$_y$F$_{0.48}$Bi$_{0.95}$Se$_{1.89}$. The atomic ratio is close to the nominal composition except for oxygen, which cannot be determined accurately by EDS. The temperature dependence of resistivity for the LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystal (Sample-1) at various pressures from 2 K to 300 K is illustrated in Fig. \[fig2\](a). The inset of Fig. \[fig2\](a) shows the temperature dependent magnetic susceptibility at ambient pressure under a magnetic field of 10 Oe; a sharp SC transition is observed at about 3.5 K. An estimate of the Meissner screening volume from the magnetic susceptibility measured in the zero-field-cooled (ZFC) mode reveals a large superconducting volume fraction. For Sample-1, we were not able to measure at pressures higher than 2.04 GPa. As shown in Fig. \[fig2\](a), at ambient pressure the normal state resistivity shows a semiconducting behavior. This semiconducting behavior is suppressed under a small pressure, and the resistivity becomes metallic at about 0.54 GPa. With further increase of pressure, the metallic behavior persists up to the maximum pressure. This semiconducting-to-metallic transition with pressure has been noticed in Sr$_{1-x}$RE$_x$FBiS$_2$ (RE = La, Ce, Nd, Pr, Sm) systems[@Awana; @P; @SrLaFBiS; @Awana; @P; @SrReFBiS2]. 
In the case of the Sr$_{0.5}$La$_{0.5}$FBiS$_2$ polycrystalline sample, the semiconductor-metal transition was attributed to the change of the F-Sr/La-F bond angle along with inter-atomic distances[@Awana; @P; @SrLaFBiS]. Interestingly, a semiconductor-metal transition under pressure for the CeO$_{0.5}$F$_{0.5}$BiS$_2$ system has been predicted by first-principles calculations[@CeOBiS; @transition; @from; @theroy], but the transition was not observed in previous experiments[@Maple; @P; @La/CeOBiS; @Awana; @P; @CeOBiS]. In particular, for LaO$_{0.5}$F$_{0.5}$BiS$_2$ polycrystalline samples, the normal state resistivity decreases monotonically with increasing pressure, but it still exhibits semiconducting behavior under a very high pressure (18 GPa)[@up; @to; @18Gpa]. In Fig. \[fig2\](b) and (c), we present enlarged views of the SC transitions at low temperatures under various pressures for Sample-1 and Sample-2, respectively. Both samples exhibit very similar behavior. As one can see, the variation of both the SC transition temperature and the normal state resistivity with external pressure is non-monotonic. The original $T_c$ $\sim$ 3.5 K (at ambient pressure) gradually drops with increasing pressure and falls below 2 K at about 1.95 GPa. At the same time, a high-$T_c$ phase gradually emerges starting from about 1.2$\pm$0.2 GPa and is enhanced continuously with increasing pressure. It seems that the high-T$_c$ phase with T$_c$ = 6.5 K coexists with the low-$T_c$ phase in the range from 1.2$\pm$0.2 GPa to about 1.95 GPa. With further increase of pressure, zero resistance corresponding to the high-$T_c$ phase appears above 2 K and the SC transition becomes sharper at higher pressures. A similar behavior under pressure has been observed in some strongly correlated electronic systems, such as heavy fermion compounds[@CeCu2Si2; @P; @Yuan], organic systems[@organic; @P] and iron chalcogenides[@KFe2Se2; @P]. 
In previous high-pressure studies on BiS$_2$-based superconductors, T$_c$ monotonically increases with pressure without any sign of two coexisting SC phases. This indicates the distinction between the present BiSe$_2$-based superconductor and the earlier studied BiS$_2$-based systems. For Sample-1, we were not able to measure the resistivity beyond 2.04 GPa. The two samples are from the same batch, and their resistive transitions below 2.04 GPa are quite similar to each other. It is worth noting that the normal state resistivity presents a non-monotonic dependence on applied pressure. As shown in Fig. 2(b) and (c), the normal state resistivity just above the SC transition temperature gradually decreases with increasing pressure up to about 1.95 GPa. Surprisingly, above this threshold pressure, the normal state resistivity begins to increase remarkably with increasing pressure. It is clear that this qualitative behavior is closely related to the pressure-dependent $T_c$, as we address below. ![(Color online)(a) Phase diagram of $T_c^{onset}$ versus pressure for the two LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals investigated here. The dark and red symbols represent the $T_c^{onset}$ of Sample-1 and Sample-2, respectively. The filled symbols stand for the low $T_c$ phase, the open and crossed symbols stand for the high $T_c$ phase with and without zero resistance, respectively. (b) Resistivity at 8 K in the normal state at various pressures for Sample-1 (filled squares) and Sample-2 (filled circles).[]{data-label="fig3"}](fig3.eps){width="8.5cm"} Fig. \[fig3\](a) and \[fig3\](b) present the phase diagram of $T_c^{onset}$ versus pressure and the pressure-dependent resistivity (8 K), respectively. Here, the pressure at which the second transition disappears (about 1.95 GPa) is defined as the critical pressure ($P_c$). Fig. 
\[fig3\](a) and \[fig3\](b) clearly reveal two distinct SC phases: the low-$T_c$ SC phase below $P_c$ and the high-$T_c$ SC phase above $P_c$. In the low-$T_c$ SC phase region, both $T_c$ and the normal state resistivity are suppressed with increasing pressure. On the contrary, in the high-$T_c$ SC phase, $T_c$ is slightly enhanced and the normal state resistivity increases remarkably with increasing pressure. In LaO$_{0.5}$F$_{0.5}$BiS$_2$ polycrystalline samples, a structural phase transition from a tetragonal phase ($P4/nmm$) to a monoclinic phase ($P2_1/m$) has been suggested by high-pressure X-ray diffraction measurements, and a high $T_c$ value of 10.7 K appears in the high-pressure monoclinic structure[@up; @to; @18Gpa]. Therefore, considering the very weak pressure dependence of the transition temperature of the high-$T_c$ phase, and by comparison between LaO$_{0.5}$F$_{0.5}$BiS$_2$ and LaO$_{0.5}$F$_{0.5}$BiSe$_2$, we believe that two distinct SC phases coexist in our LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals in the intermediate pressure region (1.2$\pm$0.2 to $\sim$ 1.95 GPa). At high pressure, the whole sample becomes superconducting with T$_c$ $\sim$ 6.5 K. The transition from the low-$T_c$ phase to the high-$T_c$ one could be induced by the structural transition, which remains to be verified. ![(Color online) (a) Temperature dependence of resistivity for Sample-1 under a pressure of 2.04 GPa at various magnetic fields. The inset shows the enlarged view of a weak semiconducting feature in the normal state under high magnetic fields. (b) Upper critical field determined by $T_c^{onset}$ and 90% normal state resistivity $\rho_{n}$.[]{data-label="fig4"}](fig4.eps){width="8.5cm"} In Fig. \[fig4\](a), we present the temperature dependent resistivity under magnetic fields up to 14 T at 2.04 GPa ($T_c$ $\sim$ 6.3 K). The upper critical field $H_{c2}$ versus $T_c$ is displayed in Fig. \[fig4\](b). 
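Such $H_{c2}(T)$ data are conventionally extrapolated to zero temperature with the Werthamer-Helfand-Hohenberg (WHH) formula, $H_{c2}(0)=-0.69\,T_c\,[dH_{c2}/dT]_{T_c}$. A minimal numerical sketch (the slope of $-8$ T/K is an assumed value of plausible magnitude, not a number quoted in the text; with $T_c \approx 6.3$ K it reproduces an $H_{c2}(0)$ of roughly 35 T):

```python
def whh_hc2_zero(tc_kelvin, slope_t_per_k):
    """WHH single-band estimate: H_c2(0) = -0.69 * T_c * (dH_c2/dT)|_{T_c}."""
    return -0.69 * tc_kelvin * slope_t_per_k

# T_c ~ 6.3 K at 2.04 GPa; the slope is an assumed (hypothetical) -8 T/K.
hc2_0 = whh_hc2_zero(6.3, -8.0)
print(f"H_c2(0) ~ {hc2_0:.1f} T")  # ~34.8 T
```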
We use two different criteria, 90%$\rho_{n}$ and $T_c^{onset}$ (determined using the crossing point shown in Fig. \[fig2\](c)), to determine $H_{c2}$. The upper critical field at zero temperature can be estimated by using the Werthamer-Helfand-Hohenberg (WHH) formula[@WHH; @formula] ${H_{c2}=-0.69T_{c}[dH_{c2}/dT]_{T_c}}$; the estimated $H_{c2}(0)$ is about 35 T for the $T_c^{onset}$ criterion. The inset of Fig. \[fig4\](a) shows an enlarged view of the superconducting transitions in the main panel. As seen there, the superconductivity is very robust and persists above 2 K even in a field of 14 T. This may be due to the fact that the applied field was approximately parallel to the $ab$ plane of the single crystal in the pressure cell during the measurement, and a large anisotropy has been discovered in LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals[@LaOBiSe; @single; @crystal; @1]. An interesting phenomenon is that a weak semiconducting behavior re-emerges when the superconductivity is suppressed under a high magnetic field. A similar behavior was observed in NdO$_{0.5}$F$_{0.5}$BiS$_2$ single crystals[@Liu; @single; @crystal]. This phenomenon may be related to the semiconducting behavior of the sample at ambient pressure, although it seems that the low-T$_c$ phase does not show up here. The semiconducting ground states, for either the low-T$_c$ phase at ambient pressure or the high-T$_c$ superconductivity under high pressure once suppressed by a high magnetic field, may share a common origin: both point to a competition between superconductivity and an underlying tendency responsible for the semiconducting behavior. In summary, we have successfully tuned the ground state and superconductivity in LaO$_{0.5}$F$_{0.5}$BiSe$_2$ single crystals by hydrostatic pressure. The ground state is switched from a semiconducting state to a metallic one with increasing pressure. 
Moreover, the original SC phase with $T_c$ $\sim$ 3.5 K can be tuned to a new SC state with $T_c$ $\sim$ 6.5 K. In the low-$T_c$ SC phase region, both $T_c$ and the normal state resistivity are suppressed with increasing pressure. On the contrary, in the high-$T_c$ SC phase, superconductivity is enhanced and the normal state resistivity increases remarkably with increasing pressure. Furthermore, a weak semiconducting behavior re-emerges when the high-pressure superconductivity is suppressed by a magnetic field. These results illustrate a non-trivial relationship between the normal state properties and superconductivity. Further theoretical and detailed structural investigations are highly desired to clarify the new high-$T_c$ SC phase under high pressure. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== We appreciate the kind help and discussions with Xiaojia Chen. We thank Xiang Ma and Dong Sheng for the assistance in SEM/EDS measurements. This work was supported by NSF of China, the Ministry of Science and Technology of China (973 projects: 2011CBA00102, 2012CB821403, 2010CB923002) and PAPD. [00]{} W. E. Pickett, *Electronic structure of the high-temperature oxide superconductors*, Rev. Mod. Phys. **61**, 433 (1989). Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, *Iron-Based Layered Superconductor* La\[O$_{1-x}$F$_x$\]FeAs (x = 0.05-0.12) *with $T_c$ = 26 K*, J. Am. Chem. Soc. **130**, 3296 (2008). Y. Mizuguchi, H. Fujihisa, Y. Gotoh, K. Suzuki, H. Usui, K. Kuroki, S. Demura, Y. Takano, H. Izawa, and O. Miura, *BiS$_2$-based layered superconductor* Bi$_4$O$_4$S$_3$, Phys. Rev. B **86**, 220510(R) (2012). S. K. Singh, A. Kumar, B. Gahtori, G. Sharma, S. Patnaik, and V. P. S. Awana, *Bulk Superconductivity in Bismuth Oxysulfide* Bi$_4$O$_4$S$_3$, J. Am. Chem. Soc. **134**, 16504 (2012). S. Li, H. Yang, D. Fang, Z. Wang, J. Tao, X. Ding, and H. H. 
Wen, *Strong coupling superconductivity and prominent superconducting fluctuations in the new superconductor* Bi$_4$O$_4$S$_3$, Sci China-Phys Mech Astron **56**, 2019 (2013). Y. Mizuguchi, S. Demura, K. Deguchi, Y. Takano, H. Fujihisa, Y. Gotoh, H. Izawa, and O. Miura, *Superconductivity in Novel BiS$_2$-Based Layered Superconductor* LaO$_{1-x}$F$_{x}$BiS$_2$, J. Phys. Soc. Jpn. **81**, 114725 (2012). S. Demura, Y. Mizuguchi, K. Deguchi, H. Okazaki, H. Hara, T. Watanabe, S.J. Denholme, M. Fujioka, T. Ozaki, H. Fujihisa, Y. Gotoh, O. Miura, T. Yamaguchi, H. Takeya, and Y. Takano, *New Member of BiS$_2$-Based Superconductor* NdO$_{1-x}$F$_{x}$BiS$_2$, J. Phys. Soc. Jpn. **82**, 033708 (2013). J. Xing, S. Li, X. Ding, H. Yang, and H.H. Wen, *Superconductivity appears in the vicinity of semiconducting-like behavior in* CeO$_{1-x}$F$_{x}$BiS$_2$, Phys. Rev. B **86**, 214518 (2012). R. Jha, A. Kumar, S. K. Singh, and V.P.S. Awana, *Synthesis and Superconductivity of New BiS$_2$ Based Superconductor* PrO$_{0.5}$F$_{0.5}$BiS$_2$, J. Appl. Phys. **113**, 056102 (2013). D. Yazici, K. Huang, B. D. White, A. H. Chang, A. J. Friedman, and M. B. Maple, *Superconductivity of F-substituted* LnOBiS$_2$ (Ln=La, Ce, Pr, Nd, Yb) *Compounds*, Philos. Mag. **93**, 673 (2012). X. Lin, X. Ni, B. Chen, X. Xu, X. Yang, J. Dai, Y. Li, X. Yang, Y. Luo, Q. Tao, G. Cao, and Z. Xu, *Superconductivity induced by La doping in* Sr$_{1-x}$La$_{x}$FBiS$_2$, Phys. Rev. B **87**, 020504 (2013). D. Yazici, K. Huang, B. D. White, I. Jeon, V. W. Burnett, A. J. Friedman, I. K. Lum, M. Nallaiyan, S. Spagna, and M. B. Maple, *Superconductivity induced by electron doping in* La$_{1-x}$M$_{x}$FBiS$_2$ (M= Ti, Zr, Hf, Th), Phys. Rev. B **87**, 174512 (2013). H. Usui, K. Suzuki, and K. Kuroki, *Minimal electronic models for superconducting BiS$_2$ layers*, Phys. Rev. B **86**, 220501(R) (2012). X. Wan, H. Ding, S. Y. Savrasov, and C. 
Duan, *Electron-phonon superconductivity near charge-density-wave instability in* LaO$_{0.5}$F$_{0.5}$BiS$_2$ *Density-functional calculations*, Phys. Rev. B **87**, 115124 (2013). T. Yildirim, *Ferroelectric soft phonons, charge density wave instability, and strong electron-phonon coupling in BiS$_2$ layered superconductors: A first-principles study*, Phys. Rev. B **87**, 020506 (2013). Y. Yang, W. S. Wang, Y. Y. Xiang, Z. Z. Li, and Qiang-Hua Wang, *Triplet pairing and possible weak topological superconductivity in BiS$_2$-based superconductors*, Phys. Rev. B **88**, 094519 (2013). M. Nagao, S. Demura, K. Deguchi, A. Miura, S. Watauchi, T. Takei, Y. Takano, N. Kumada, and I. Tanaka, *Structural Analysis and Superconducting Properties of F-Substituted* NdOBiS$_2$ *Single Crystals*, J. Phys. Soc. Jpn. **82**, 13701 (2013). J. Liu, D. Fang, Z. Wang, J. Xing, Z. Du, X. Zhu, H. Yang, and H. H. Wen, *Giant superconducting fluctuation and anomalous semiconducting normal state in* NdO$_{1-x}$F$_{x}$Bi$_{1-y}$S$_2$ *Single Crystals*, EPL **106**, 67002 (2014). Z. R. Ye, H. F. Yang, D. W. Shen, J. Jiang, X. H. Niu, D. L. Feng, Y. P. Du, X. G. Wan, J. Z. Liu, X. Y. Zhu, H. H. Wen, and M. H. Jiang, *Electronic Structure of Single Crystalline* NdO$_{0.5}$F$_{0.5}$BiS$_2$ *Studied by Angle-resolved Photoemission Spectroscopy*, arXiv:1402.2860. L. K. Zeng, X. B.Wang, J. Ma, P. Richard, S. M. Nie, H. M. Weng, N. L. Wang, Z. Wang, T. Qian, and H. Ding, *Observation of anomalous temperature dependence of spectrum on small Fermi surfaces in a BiS$_2$-based superconductor*, arXiv:1402.1833. L. Jiao, Z. F. Weng, J. Z. Liu, J. L. Zhang, G. M. Pang, C. Y. Guo, F. Gao, X. Y. Zhu, H. H. Wen, and H. Q. Yuan, *BCS-like superconductivity in* NdO$_{1-x}$F$_{x}$BiS$_2$ (x = 0.3 and 0.5) *single crystals*, arXiv:1406.6791. H. Kotegawa, Y. Tomita, H. Tou, H. Izawa, Y. Mizuguchi, O. Miura, S. Demura, K. 
Deguchi, and Y. Takano, *Pressure Study of BiS$_2$-Based Superconductors* Bi$_4$O$_4$S$_3$ *and* La(O,F)BiS$_2$, J. Phys. Soc. Jpn. **81**, 103702 (2012). C. T. Wolowiec, D. Yazici, B. D. White, K. Huang, and M. B. Maple, *Pressure-induced enhancement of superconductivity and suppression of semiconducting behavior in* LnO$_{0.5}$F$_{0.5}$BiS$_2$ (Ln = La, Ce) *compounds*, Phys. Rev. B **88**, 064503 (2013). C. T. Wolowiec, B. D. White, I. Jeon, D. Yazici, K. Huang, and M. B. Maple, *Enhancement of superconductivity near the pressure-induced semiconductor–metal transition in the BiS$_2$-based superconductors* LnO$_{0.5}$F$_{0.5}$BiS$_2$ (Ln = La, Ce, Pr, Nd), J. Phys.: Condens. Matter **25**, 422201 (2013). R. Jha, B. Tiwari, and V. P. S. Awana, *Impact of Hydrostatic Pressure on Superconductivity of* Sr$_{0.5}$La$_{0.5}$FBiS$_2$, J. Phys. Soc. Jpn. **83**, 063707 (2014). R. Jha, H. Kishan, and V. P. S. Awana, *Significant enhancement of superconductivity under hydrostatic pressure in* CeO$_{0.5}$F$_{0.5}$BiS$_2$ *superconductor*, Solid State Communications **194**, 6-9 (2014). T. Tomita, M. Ebata, H. Soeda, H. Takahashi, H. Fujihisa, Y. Gotoh, Y. Mizuguchi, H. Izawa, O. Miura, S. Demura, K. Deguchi, and Y. Takano, *Pressure-induced Enhancement of Superconductivity in BiS$_2$-layered* LaO$_{1-x}$F$_{x}$BiS$_2$, J. Phys. Soc. Jpn. **83**, 063704 (2014). R. Jha, B. Tiwari, and V. P. S. Awana, *Appearance of bulk Superconductivity under Hydrostatic Pressure in* Sr$_{0.5}$RE$_{0.5}$FBiS$_2$ (RE = Ce, Nd, Pr and Sm) *new compounds*, arXiv:1407.3105. A. Krzton-Maziopa, Z. Guguchia, E. Pomjakushina, V. Pomjakushin, R. Khasanov, H. Luetkens, P. Biswas, A. Amato, H. Keller and K. Conder, *Superconductivity in a new layered bismuth oxyselenide:* LaO$_{0.5}$F$_{0.5}$BiSe$_2$, J. Phys. Condens. Matter **26**, 215702 (2014). M. Nagao, M. Tanaka, S. Watauchi, I. Tanaka, and Y. 
Takano, *Growth and superconducting anisotropies of F-substituted* LaOBiSe$_2$ *single crystals*, arXiv:1406.0921. M. Tanaka, M. Nagao, Y. Matsushita, M. Fujioka, S. Denholme, T. Yamaguchi, H. Takeya, and Y. Takano, *First single crystal growth and structural analysis of superconducting layered bismuth oxyselenide:* La(O,F)BiSe$_2$, arXiv:1406.0734. Y. Feng, H. Ding, Y. Du, X. Wan, B. Wang, S. Savrasov, and C. Duan, *Electron-Phonon Superconductivity in* LaO$_{0.5}$F$_{0.5}$BiSe$_2$, J. Appl. Phys. **115**, 233901 (2014). C. Morice, E. Artacho, S. Duton, H. Kim, and S. Saxena, *Electronic and magnetic properties of superconducting* LnO$_{1-x}$F$_{x}$BiS$_2$ (Ln = La, Ce, Pr and Nd) *from first principles*, arXiv:1312.2615. H. Yuan, F. Grosche, M. Deppe, C. Geibel, G. Sparn, and F. Steglich, *Observation of Two Distinct Superconducting Phases in* CeCu$_2$Si$_2$, Science **302**, 2104 (2003). T. Okuhata, T. Nagai, H. Taniguchi, K. Satoh, M. Hedo, and Y. Uwatoko, *High-pressure studies of doped-type organic superconductors*, J. Phys. Soc. Jpn. **76**, 188-189 (2007). L. Sun, X. Chen, J. Guo, P. Gao, Q. Huang, H. Wang, M. Fang, X. Chen, G. Chen, Q. Wu, C. Zhang, D. Gu, X. Dong, L. Wang, K. Yang, A. Li, X. Dai, H. Mao, and Z. Zhao, *Re-emerging superconductivity at 48 kelvin in iron chalcogenides*, Nature **483**, 67–69 (2012). N. R. Werthamer, E. Helfand, and P. C. Hohenberg, *Temperature and Purity Dependence of the Superconducting Critical Field, H$_{c2}$. III. Electron Spin and Spin-Orbit Effects*, Phys. Rev. **147**, 295 (1966).
LU TP 99–31\ hep-ph/9910288\ October 1999 [**QCD Interconnection Effects[^1]**]{}\ [Torbjörn Sjöstrand[^2]]{}\ [*Department of Theoretical Physics,*]{}\ [*Lund University, Lund, Sweden*]{} [**Abstract**]{}\ Heavy objects like the $W$, $Z$ and $t$ are short-lived compared with typical hadronization times. When pairs of such particles are produced, the subsequent hadronic decay systems may therefore become interconnected. We study such potential effects at Linear Collider energies. This talk mainly reports on work done in collaboration with Valery Khoze [@work]. The widths of the $W$, $Z$ and $t$ are all of the order of 2 GeV. A Standard Model Higgs with a mass above 200 GeV, as well as many supersymmetric and other Beyond the Standard Model particles, would also have widths in this range. Not far from threshold, the typical decay times $\tau = 1/\Gamma \approx 0.1 \, {\mathrm{fm}} \ll \tau_{\mathrm{had}} \approx 1 \, \mathrm{fm}$. Thus the hadronic decay systems of pairs of resonances ($W^+W^-$, $Z^0Z^0$, $t\bar{t}$, $Z^0H^0$, …) overlap, so that the final state may not be just the sum of two independent decays. Pragmatically, one may distinguish three main eras for such interconnection: (i) perturbative, which is suppressed for gluon energies $\omega > \Gamma$ by propagator/timescale effects, so that only soft gluons may contribute appreciably; (ii) nonperturbative in the hadroformation process, normally modelled by a colour rearrangement between the partons produced in the two resonance decays and in the subsequent parton showers; and (iii) nonperturbative in the purely hadronic phase, best exemplified by Bose–Einstein effects. The above topics are deeply related to the unsolved problems of strong interactions: confinement dynamics, $1/N^2_{\mathrm{C}}$ effects, quantum mechanical interferences, etc. 
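The 0.1 fm decay scale quoted above follows directly from $c\tau = \hbar c/\Gamma$ with $\hbar c \approx 197.3$ MeV fm. A small sketch (the individual widths are approximate standard values of order 2 GeV, not numbers given in the text):

```python
HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * fm

def decay_length_fm(width_gev):
    """c * tau = hbar c / Gamma: the distance scale of the decay, in fm."""
    return HBAR_C_MEV_FM / (width_gev * 1000.0)

# Approximate widths (GeV); all of order 2 GeV, as stated in the text.
for name, gamma in [("W", 2.09), ("Z", 2.50), ("t", 1.4)]:
    # each c*tau comes out of order 0.1 fm, well below tau_had ~ 1 fm
    print(f"{name}: c*tau ~ {decay_length_fm(gamma):.3f} fm")
```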
These topics thus offer an opportunity to study the dynamics of unstable particles, and new ways to probe confinement dynamics in space and time [@GPZ; @ourrec], [*but*]{} they also risk limiting or even spoiling precision measurements [@ourrec]. So far, studies have mainly been performed in the context of $W$ mass measurements at LEP2. Perturbative effects are not likely to give any significant contribution to the systematic error, $\langle \delta m_W \rangle {\raisebox{-0.8mm}{\hspace{1mm}$\stackrel{<}{\sim}$\hspace{1mm}}}5$ MeV [@ourrec]. Colour rearrangement is not understood from first principles, but many models have been proposed to describe its effects [@ourrec; @otherrec; @HR], and a conservative estimate gives $\langle \delta m_W \rangle {\raisebox{-0.8mm}{\hspace{1mm}$\stackrel{<}{\sim}$\hspace{1mm}}}40$ MeV. For Bose–Einstein effects there is again a wide spread in models, and an even wider one in results, with about the same potential systematic error as above [@ourBE; @otherBE; @HR]. The total QCD interconnection error is thus below $m_{\pi}$ in absolute terms and 0.1% in relative ones, a small number that becomes of interest only because we aim for high accuracy. ------------------------------------------------------------------------ More could be said if some experimental evidence existed, but a problem is that also other manifestations of the interconnection phenomena are likely to be small in magnitude. For instance, near threshold it is expected that colour rearrangement will deplete the rate of low-momentum particle production [@lowmom], Fig. \[figlowmom\]. Even with full LEP2 statistics, we are only speaking of a few-sigma effects, however. Bose–Einstein effects appear more promising to diagnose, but so far experimental results are contradictory [@BEstatus]. One area where a linear collider could contribute would be by allowing a much increased statistics in the LEP2 energy region. 
A 100 fb$^{-1}$ $W^+W^-$ threshold scan would give a $\sim 6$ MeV accuracy on the $W$ mass [@Wilson], with negligible interconnection uncertainty. This would shift the emphasis from $m_W$ to the understanding of the physics of hadronic cross-talk. A high-statistics run, e.g. 50 fb$^{-1}$ at 175 GeV, would give a comfortable signal for the low-momentum depletion mentioned above, and also allow a set of other tests [@othertest; @lowmom]. Above the $Z^0Z^0$ threshold, the single-$Z^0$ data will provide a unique $Z^0Z^0$ no-reconnection reference. Thus, high-luminosity, LEP2-energy LC (Linear Collider) runs would be excellent to [*establish*]{} a signal. To explore the [*character*]{} of effects, however, a knowledge of the energy dependence could give further leverage. ------------------------------------------------------------------------ In QED, the interconnection rate dampens with increasing energy roughly like $(1 - \beta)^2$, with $\beta$ the velocity of each $W$ in the CM frame [@QED]. By contrast, the nonperturbative QCD models we studied show an interconnection rate dropping more like $(1 - \beta)$ over the LC energy region (with the possibility of a steeper behaviour in the truly asymptotic region), Fig. \[figprob\]. If only the central region of $W$ masses is studied, also the mass shift dampens significantly with energy, Fig. \[figprob\]. However, if also the wings of the mass distribution are included (a difficult experimental proposition, but possible in our toy studies), the average and width of the mass shift distribution do not die out. Thus, with increasing energy, the hadronic cross-talk occurs in fewer events, but the effect in these few is more dramatic. ------------------------------------------------------------------------ The depletion of particle production at low momenta, close to threshold, turns into an enhancement at higher energies [@lowmom]. 
However, in the inclusive $W^+W^-$ event sample, this and other signals appear too small for reliable detection. One may instead turn to exclusive signals, such as events with many particles at low momenta, or at central rapidities, or at large angles with respect to the event axis, Fig. \[figexclusive\]. Unfortunately, even after such a cut, fluctuations in no-reconnection events as well as ordinary QCD four-jet events (mainly $q\bar{q}gg$ split in $qg + \bar{q}g$ hemispheres, thus with a colour flow between the two) give event rates that overwhelm the expected signal. It could still be possible to observe an excess, but not to identify reconnections on an event-by-event basis. The possibility of some clever combination of several signals still remains open, however. ------------------------------------------------------------------------ Since the $Z^0$ mass and properties are well-known, $Z^0Z^0$ events provide an excellent hunting ground for interconnection. Relative to $W^+W^-$ events, the set of production Feynman graphs and the relative mixture of vector and axial couplings are different, however, and this leads to non-negligible differences in angular distributions, Fig. \[figzzww\]. Furthermore, the higher $Z^0$ mass means that a $Z^0$ is slower than a $W^{\pm}$ at fixed energy, and the larger $Z^0$ width also brings the decay vertices closer. Taken together, at 500 GeV, the reconnection rate in $Z^0Z^0$ hadronic events is likely to be about twice as large as in $W^+W^-$ events, while the cross section is lower by a factor of six. Thus $Z^0Z^0$ events are interesting in their own right, but comparisons with $W^+W^-$ events will be nontrivial. 
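The kinematic statements above are easy to tabulate. A minimal sketch of the $(1-\beta)$ scalings and of the $Z^0$ versus $W^{\pm}$ comparison (the masses and widths used, $m_W \approx 80.4$ GeV, $m_Z \approx 91.2$ GeV, $\Gamma_W \approx 2.09$ GeV, $\Gamma_Z \approx 2.50$ GeV, are standard values not quoted in the text):

```python
import math

HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * fm

def beta(mass_gev, e_cm_gev):
    """CM-frame velocity of each boson in pair production at energy e_cm."""
    return math.sqrt(max(0.0, 1.0 - (2.0 * mass_gev / e_cm_gev) ** 2))

# Rough interconnection scalings: QED ~ (1-beta)^2, the nonperturbative
# QCD models ~ (1-beta) over the LC energy range.
for e_cm in (170.0, 200.0, 350.0, 500.0):
    b = beta(80.4, e_cm)
    print(f"E_cm = {e_cm:5.0f} GeV: 1-beta = {1 - b:.3f}, (1-beta)^2 = {(1 - b) ** 2:.4f}")

# Z vs W at 500 GeV: the heavier Z is slower, and its larger width
# brings the two decay vertices closer together.
beta_w, beta_z = beta(80.4, 500.0), beta(91.2, 500.0)
ctau_w, ctau_z = HBAR_C_MEV_FM / 2090.0, HBAR_C_MEV_FM / 2500.0  # fm
print(f"beta_W = {beta_w:.3f}, beta_Z = {beta_z:.3f}")
print(f"c*tau_W = {ctau_w:.4f} fm, c*tau_Z = {ctau_z:.4f} fm")
```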
[Fig. \[figboei\]: theoretical Bose–Einstein mass shift $\langle\delta m_{W}^{4j}\rangle$ for the $\mathrm{BE}_m$ and $\mathrm{BE}_m'$ models, together with the decay-vertex separation $\Delta R$ (fm), as functions of $E_{\mathrm{cm}}$ (GeV).] ------------------------------------------------------------------------ As noted above, the Bose–Einstein interplay between the hadronic decay systems of a pair of heavy objects is at least as poorly understood as colour reconnection, and less well studied for higher energies. In some models [@ourBE], the theoretical mass shift increases with energy, when the separation of the $W$ decay vertices is not included, Fig. \[figboei\]. With this separation taken into account, the theoretical shift levels out at around 200 MeV. How this maps onto experimental observables remains to be studied, but experience from LEP2 energies indicates that the mass shift is significantly reduced, and may even switch sign. ------------------------------------------------------------------------ The $t\bar{t}$ system is different from the $W^+W^-$ and $Z^0Z^0$ ones in that the $t$ and $\bar{t}$ are always colour connected. Thus, even when both tops decay semileptonically, $t \to b W^+ \to b \ell^+ \nu_{\ell}$, the system contains nontrivial interconnection effects. 
For instance, the total hadronic multiplicity, and especially the multiplicity at low momenta, depends on the opening angle between the $b$ and $\bar{b}$ jets: the smaller the angle, the lower the multiplicity [@topmult], Fig. \[figtop\]. On the perturbative level, this can be understood as arising from a dominance of emission from the $b\bar{b}$ colour dipole at small gluon energies [@dipole]; on the nonperturbative one, as a consequence of the string effect [@string]. Uncertainties in the modelling of these phenomena imply a systematic error on the top mass of the order of 30 MeV already in the semileptonic top decays. When hadronic $W$ decays are included, the possibilities of interconnection multiply. Such configurations have not yet been studied, but realistically we may expect uncertainties of around 100 MeV. In summary, LEP2 may clarify the Bose–Einstein situation and provide some hadronic cross-talk hints. A high-luminosity LEP2-energy LC run would be the best way to establish colour rearrangement, however. Both colour rearrangement and BE effects (may) remain significant over the full LC energy range: while the fraction of (appreciably) affected events goes down with energy, the effect per such event goes up. If the objective is to do electroweak precision tests, it appears feasible to reduce the $WW/ZZ$ “interconnection noise” to harmless levels at high energies, by simple proper cuts. It should also be possible, but not easy, to dig out a colour rearrangement signal at high energies, with some suitably optimized cuts that yet remain to be defined. The $Z^0Z^0$ events should display about twice as large interconnection effects as $W^+W^-$ ones, but cross sections are reduced even more. The availability of a single-$Z^0$ calibration still makes $Z^0Z^0$ events of unique interest. While detailed studies remain to be carried out, it appears that the direct reconstruction of the top mass could be uncertain by maybe 100 MeV. 
Finally, in all of the studies so far, it has turned out to be very difficult to find a clean handle that would help to distinguish between the different models proposed, both in the reconnection and Bose–Einstein areas. Much work thus remains for the future. [99]{} V.A. Khoze and T. Sjöstrand, LU TP 99-23, hep-ph/9908408, to appear in the Proceedings of the International Workshop on Linear Colliders, Sitges (Barcelona), Spain, April 28 – May 5, 1999. G. Gustafson, U. Pettersson and P. Zerwas, Phys. Lett. [**B209**]{} (1988) 90. T. Sjöstrand and V.A. Khoze, Z. Physik [**C62**]{} (1994) 281, Phys. Rev. Lett. [**72**]{} (1994) 28. G. Gustafson and J. Häkkinen, Z. Physik [**C64**]{} (1994) 659;\ L. Lönnblad, Z. Physik [**C70**]{} (1996) 107;\ Š. Todorova–Nová, DELPHI Internal Note 96-158 PHYS 651;\ J. Ellis and K. Geiger, Phys. Rev. [**D54**]{} (1996) 1967, Phys. Lett. [**B404**]{} (1997) 230;\ B.R. Webber, J. Phys. [**G24**]{} (1998) 287. J. Häkkinen and M. Ringnér, Eur. Phys. J. [**C5**]{} (1998) 275. L. Lönnblad and T. Sjöstrand, Phys. Lett. [**B351**]{} (1995) 293, Eur. Phys. J. [**C2**]{} (1998) 165. S. Jadach and K. Zalewski, Acta Phys. Polon. [**B28**]{} (1997) 1363;\ V. Kartvelishvili, R. Kvatadze and R. M[ø]{}ller, Phys. Lett. [**B408**]{} (1997) 331;\ K. Fia[ł]{}kowski and R. Wit, Acta Phys. Polon. [**B28**]{} (1997) 2039, Eur. Phys. J. [**C2**]{} (1998) 691;\ Š. Todorova–Nová and J. Rameš, hep-ph/9710280. V.A. Khoze and T. Sjöstrand, Eur. Phys. J. [**C6**]{} (1999) 271. F. Martin, presented at XXXIV Rencontres de Moriond, France, March 20–27, 1999, preprint LAPP–EXP 99.04. G. Wilson, presented at the International Workshop on Linear Colliders, Sitges (Barcelona), Spain, April 28 – May 5, 1999. E. Norrbin and T. Sjöstrand, Phys. Rev. [**D55**]{} (1997) R5. A.P. Chapovsky and V.A. Khoze, Eur. Phys. J. [**C9**]{} (1999) 449. V.A. Khoze and T. Sjöstrand, Phys. Lett. [**B328**]{} (1994) 466. Ya.I. Azimov, Yu.L. Dokshitzer, V.A. Khoze and S.I. Troyan, Phys. Lett. 
[**B165**]{} (1985) 147. B. Andersson, G. Gustafson, G. Ingelman and T. Sjöstrand, Phys. Rep. [**97**]{} (1983) 31. [^1]: To appear in the Proceedings of the Workshop on the development of future linear electron-positron colliders for particle physics studies and for research using free electron lasers, Lund, Sweden, 23–26 September 1999 [^2]: torbjorn@thep.lu.se
--- abstract: 'Frequency comb assisted diode laser spectroscopy, employing both the accuracy of an optical frequency comb and the broad wavelength tuning range of a tunable diode laser, has been widely used in many applications. In this letter we present a novel method using cascaded frequency agile diode lasers, which allows extending the measurement bandwidth to 37.4 THz (1355 – 1630 nm) at MHz resolution with scanning speeds above 1 THz/s. It is demonstrated as a useful tool to characterize a broadband spectrum for molecular spectroscopy and in particular it enables the characterization of the dispersion of integrated microresonators up to the fourth order.' author: - Junqiu Liu - Victor Brasch - 'Martin H. P. Pfeiffer' - Arne Kordts - 'Ayman N. Kamel' - Hairun Guo - Michael Geiselmann - 'Tobias J. Kippenberg' title: | Frequency Comb Assisted Broadband Precision Spectroscopy\ with Cascaded Diode Lasers --- Frequency combs [@Udem:02; @Cundiff:03], providing an equidistant grid of lines with precisely known frequencies over a broad spectral range, have substantially advanced precision spectroscopy over the past decades. To date, diverse spectroscopic methods employing frequency combs have been invented, such as direct frequency comb spectroscopy [@Foltynowicz:11], Fourier transform spectroscopy [@Mandon:09] and dual-comb spectroscopy [@Bernhardt:10]. Among these methods, frequency comb assisted diode laser spectroscopy [@DelHaye:09], enabling broadband spectral characterization with fast measurement speed (> 1 THz/s) and simple implementation, has been successfully applied to distance measurement [@Baumann:13; @Baumann:14], dynamic waveform detection [@Giorgetta:10], plasma diagnostics [@Urabe:12] and molecular spectroscopy [@Nishiyama:13; @Nishiyama:14].
One application benefiting from these advantages is the dispersion characterization of high-Q microresonators [@Riemensberger:12; @Herr:14; @Kordts:16; @DelHaye:15; @Pfeiffer:16], for which alternative methods based on a direct frequency comb [@Thorpe:05; @Schliesser:06], a white light source [@Savchenkov:08] or sideband spectroscopy [@Li:12] have several limitations, including system complexity, low measurement speed, narrow bandwidth and the inability to measure microresonators with free spectral ranges (FSR) exceeding 100 GHz. Dispersion characterization is important for the dispersion engineering of integrated high-Q microresonators for Kerr frequency comb generation [@DelHaye:07; @Kippenberg:11] and bright dissipative Kerr soliton formation [@HerrNP:14; @Yi:15; @Joshi:16; @Brasch:15]. In addition, properly engineered higher order dispersion can lead to the emission of a dispersive wave via the process of soliton Cherenkov radiation [@Brasch:15; @Milian:14; @Jang:14; @Karpov:16]. Several techniques based on geometry variation [@Yang:16] and additional material layers [@Riemensberger:12; @Jiang:14] have been demonstrated to tailor the dispersion. However, for measuring the higher order dispersion of microresonators, frequency comb assisted diode laser spectroscopy is currently limited by its measurement bandwidth, which is mainly determined by the wavelength tuning range of the laser used. Therefore, using more than one laser to cover different spectral ranges is desirable to overcome the bandwidth limitation and thus enable measurement of the higher order dispersion. In this letter we demonstrate a method to extend the measurement bandwidth by cascading two widely tunable lasers covering the wavelength range from 1355 nm to 1630 nm. The validity of our method is examined by molecular absorption spectroscopy. We subsequently use this method to characterize the dispersion of a photonic chip-based silicon nitride (Si$_3$N$_4$) microresonator [@Moss:13] whose FSR is approximately 1 THz.
This is the first time that higher order dispersion is directly measured for such microresonators. The experimental setup is shown in Fig. \[Fig:figure1\] and is based on the setup described in Ref. [@DelHaye:09]. Two widely tunable mode-hop-free external cavity diode lasers (ECDL 1, ECDL 2, Santec TSL-510) with wavelength tuning ranges of 1355 – 1505 nm and 1500 – 1630 nm beat with a fully stabilized, spectrally broadened erbium-doped-fiber-laser-based frequency comb (MenloSystems FC1500, repetition rate $f_\text{rep}\approx$250 MHz, range 1050 – 2100 nm). A 1310 nm/1550 nm wavelength-division multiplexer (WDM) splits the frequency comb into two branches. Two optical switches are synchronized such that ECDL 1 beats with the 1310 nm branch and, successively, ECDL 2 beats with the 1550 nm branch. The purpose of using the WDM and synchronizing the optical switches is to suppress the comb lines which do not contribute to the beat signals, thereby preventing photodiode saturation and improving the signal-to-noise ratio of the beat signals. The two ECDLs scan one after the other with a scan speed of 10 nm/s. By using the band-pass filters labeled BP 1 and BP 2 (center frequencies $f_\text{BP 1}$, $f_\text{BP 2}$), the scanning ECDL generates four “calibration markers” per $f_\text{rep}$ interval when the frequency distance to its nearest comb line is $\pm f_\text{BP 1}$ or $\pm f_\text{BP 2}$ [@DelHaye:09]. The key problem is combining the two individual traces generated by the two ECDL scans into a single continuous trace which covers the full measurement range. This is solved by using an auxiliary reference laser whose wavelength $\lambda_\text{ref}$ is set in the 1500 – 1505 nm range where both ECDLs overlap spectrally. With a third band-pass filter (BP 3), a “reference marker” is recorded in each trace when the ECDL scans over the reference laser.
The reference laser is set initially $f_\text{rep}/2$ from its nearest comb line, and its long-term drift is measured to be <20 MHz/h, as shown in Fig. \[Fig:figure2\](b). As long as the reference laser drifts less than $f_\text{rep}/2$ from its initial position within the measurement time ($\approx$60 s), which can be monitored by an electrical spectrum analyzer (ESA), we can unambiguously assume that the calibration markers adjacent to the reference marker are generated by the same comb line in both traces. Therefore, using the reference marker, the indices of the calibration markers in both traces can be matched. As shown in Fig. \[Fig:figure2\](a), by combining the data before the reference marker in trace 1 with the data after the reference marker in trace 2, a complete continuous trace from 1355 nm to 1630 nm is formed. Generally, to combine individual traces into a single trace, it is required that: (1) each ECDL scans mode-hop-free; (2) any two adjacent ECDLs have a shared wavelength range; (3) a low-drift stable reference laser exists whose wavelength falls in each shared wavelength range; (4) calibration markers are well-resolved in each trace. Once these conditions are satisfied, such a setup can be extended to more than two ECDLs, enabling an even broader measurement bandwidth. The reference laser sets the frequency reference point, and the frequency axis is calibrated with respect to the frequency comb based on $f_\text{rep}$. Assuming that the ECDLs scan uniformly within each $f_\text{rep}$ interval, the instantaneous frequency can be interpolated with respect to $f_\text{BP 1}$ and $f_\text{BP 2}$. The precision of the frequency determination is limited by a 2.2 MHz/point resolution limit, due to the oscilloscope’s maximum of 10 million data points per trace, and by the error due to the 2 MHz bandwidth of the BPs. Therefore the total error is estimated as 4.2 MHz/point.
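The marker-based interpolation of the frequency axis can be sketched numerically. The snippet below is a minimal illustration, not the authors' analysis code: the marker positions are synthetic and, for simplicity, assumed uniformly spaced, whereas the real setup places four markers per $f_\text{rep}$ interval at $\pm f_\text{BP 1}$ and $\pm f_\text{BP 2}$ around each comb line.

```python
import numpy as np

f_rep = 250e6  # comb repetition rate (Hz)

def calibrate_frequency_axis(marker_idx, marker_freq, n_samples):
    """Assign a relative optical frequency to every oscilloscope sample by
    linear interpolation between calibration markers, assuming the laser
    scans uniformly between adjacent markers."""
    return np.interp(np.arange(n_samples), marker_idx, marker_freq)

# synthetic scan: 40 markers, one every f_rep/4, spread over 10^4 samples
marker_freq = np.arange(40) * f_rep / 4       # relative frequency of each marker
marker_idx = np.linspace(0, 9999, 40)          # sample index where each marker fires
freq_axis = calibrate_frequency_axis(marker_idx, marker_freq, 10000)
```

With real data, `marker_idx` would come from peak detection on the filtered beat signal, and the marker-to-comb-line assignment would be anchored by the reference marker as described above.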
However, such an error will not be prominent for spectral features with >100 MHz linewidth, as the center frequency is usually obtained by line profile fitting, whose precision is then limited by the signal quality, the fitting function used, resonance splitting, etc. The reference laser’s wavelength $\lambda_\text{ref}$ is read from a wavelength meter with an imprecision of a few picometers (a corresponding frequency imprecision of a few hundred MHz). Such imprecision will introduce a global offset to the absolute frequency calibration, but will not affect the continuity of the trace combination or the relative frequency calibration with respect to $\lambda_\text{ref}$. The global offset (i.e. an absolute frequency measurement) is much less important for certain measurements, such as the dispersion measurement of microresonators, where the entire frequency scan needs to exhibit a precise relative frequency calibration but not necessarily an absolute frequency calibration. In order to examine the validity of our method, e.g. the continuity of the combined trace and the relative frequency calibration, we performed molecular absorption spectroscopy on a gas cell containing water (H$_2$O), carbon monoxide (CO) and acetylene (C$_2$H$_2$), and compared the results with the absorption line data from HITRAN*online* [@Rothman:13] (www.hitran.org). The normalized transmission spectrum is shown in Fig. \[Fig:figure3\](a) and the absorption lines for each kind of molecule are marked. As the spectroscopy is not Doppler-free, a Gaussian function is used to fit the Doppler broadening profile [@Preston:96], and the linewidth distribution is shown in Fig. \[Fig:figure3\](b). The fitted line-center frequencies $f_\text{fit}$ are then compared with the known frequencies $f_\text{HIT}$ from HITRAN*online*, and a global offset $f_\text{off}\approx100$ MHz is observed, for the reason mentioned above.
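Line-center extraction by Gaussian profile fitting can be illustrated with a short script. This is a sketch on synthetic data: the line parameters and noise level are invented, and `scipy.optimize.curve_fit` stands in for whatever fitting routine was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_dip(f, f0, sigma, depth, base):
    """Doppler-broadened absorption dip on a flat baseline."""
    return base - depth * np.exp(-0.5 * ((f - f0) / sigma) ** 2)

# synthetic CO-like line on a detuning axis (Hz), with additive noise
f = np.linspace(-3e9, 3e9, 2001)
rng = np.random.default_rng(1)
truth = (1.2e8, 4.0e8, 0.6, 1.0)   # f0, sigma, depth, baseline (invented)
data = gauss_dip(f, *truth) + 0.01 * rng.standard_normal(f.size)

# fit and extract the line-center frequency
popt, pcov = curve_fit(gauss_dip, f, data, p0=(0.0, 5e8, 0.5, 1.0))
f0_fit = popt[0]
```

The statistical uncertainty of `f0_fit` scales with the linewidth and the noise level, which is why the GHz-wide Doppler profiles dominate the 13.6 MHz scatter quoted below rather than the 4.2 MHz/point calibration error.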
Subtracting the global offset, the frequency deviations defined by $\delta=f_\text{fit}-f_\text{HIT}-f_\text{off}$ are plotted in Fig. \[Fig:figure3\](c). All the frequency deviations are distributed in a $\pm$100 MHz range around 0 MHz, demonstrating the continuity of the combined trace and a good global accuracy of the relative frequency calibration. As the CO lines suffer the least Doppler broadening, owing to the largest molecular mass of CO, their frequencies are more precisely fitted, with smaller deviations. Therefore, to examine the precision of the relative frequency calibration, we fit the deviation distribution of the CO lines with a Gaussian distribution and find a standard deviation of 13.6 MHz. It should be emphasized that the 13.6 MHz standard deviation is not due to the method itself, but mainly due to the GHz Doppler broadening and the resulting higher statistical fitting error in determining the line-center frequencies. To eliminate the Doppler broadening and to demonstrate better precision, a Doppler-free spectroscopy could be built [@Nishiyama:13; @Nishiyama:14].

  Parameter    TM$_{00}$ mode   TM$_{10}$ mode
  ------------ ---------------- ----------------
  $D_1/2\pi$   980.2 GHz        958.9 GHz
  $D_2/2\pi$   –26.08 MHz       –559.5 MHz
  $D_3/2\pi$   5.62 MHz         46.3 MHz
  $D_4/2\pi$   –0.200 MHz       –2.03 MHz

  : **Fitted dispersion parameters** \[tab:DisParameter\]

An important application of our method is the characterization of the dispersion of microresonators. In this context, dispersion describes the variation of the FSR over frequency. It can be expressed as a Taylor series, in analogy to the fiber case, as $$\begin{aligned} \omega_{\mu}&=\omega_0+D_1\mu+\frac{1}{2}D_2\mu^2+\frac{1}{6}D_3\mu^3+\frac{1}{24}D_4\mu^4+...\\ &=\omega_0+D_1\mu+D_\text{int}(\mu) \end{aligned} \label{Eq:1}$$ Here $\omega_{\mu}$ is the angular frequency of the $\mu$-th resonance relative to the reference resonance $\omega_0$. $D_1/2\pi$ corresponds to the FSR.
A non-zero $D_2$ leads to a parabolic deviation from an equidistant, $D_1$-spaced resonance grid. $D_\text{int}(\mu)$ is the integrated dispersion including $D_2$ and all higher order dispersion parameters, showing the total deviation. We measured the dispersion of a Si$_3$N$_4$ microresonator fabricated using the photonic Damascene process [@Pfeiffer:16]. The microresonator has a radius of 23 $\mu$m (corresponding to $D_1/2\pi\approx 1$ THz) and is coupled by a single-mode bus waveguide, as shown in Fig. \[Fig:figure4\](a). Both the fundamental transverse magnetic mode (TM$_{00}$) and a higher order mode (TM$_{10}$) of the microresonator are excited by the TM waveguide mode. By comparison with simulations, they can be distinguished through their different FSRs and linewidths, as shown in Fig. \[Fig:figure4\](b) and \[Fig:figure4\](c). Each resonance is fitted with the model derived in Ref. [@Gorodetsky:00], and the center frequency and the linewidth are extracted. Fig. \[Fig:figure4\](d) plots $D_\text{int}/2\pi$ of the TM$_{00}$ mode, directly showing a visible $D_3$ contribution. Both a 3$^\text{rd}$ order and a 4$^\text{th}$ order weighted polynomial are used to fit $D_\text{int}/2\pi$, with the reference resonance $\omega_0/2\pi$ chosen at 193.12 THz (1553.5 nm). The fit is performed by weighting each resonance according to the inverse of its linewidth, as the center frequencies of broader resonances are less precisely fitted. As shown in Fig. \[Fig:figure4\](d), the 4$^\text{th}$ order weighted polynomial fits the data better than the 3$^\text{rd}$ order one, indicating the necessity of including the fourth order dispersion parameter $D_4$ in the fit. The dispersion parameters extracted from the 4$^\text{th}$ order polynomial fit for both the TM$_{00}$ and TM$_{10}$ modes are listed in Table \[tab:DisParameter\]. Fig. \[Fig:figure4\](e) plots the fitted linewidths of the TM$_{00}$ mode.
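The weighted polynomial fit of Eq. \[Eq:1\] can be sketched as follows. The resonance frequencies here are synthesized from the TM$_{00}$ parameters of Table \[tab:DisParameter\], and the uniform linewidth is a placeholder, so the example demonstrates only the fitting procedure, not the measurement itself.

```python
import numpy as np

# TM00 parameters from the table (angular frequencies, rad/s)
twopi = 2 * np.pi
D1, D2, D3, D4 = twopi * 980.2e9, -twopi * 26.08e6, twopi * 5.62e6, -twopi * 0.200e6

mu = np.arange(-15, 16)                            # mode index relative to omega_0
omega_rel = D1*mu + D2*mu**2/2 + D3*mu**3/6 + D4*mu**4/24   # omega_mu - omega_0
kappa = np.full(mu.shape, twopi * 200e6)           # placeholder linewidths (weights)

# 4th-order polynomial fit, each resonance weighted by 1/linewidth
c = np.polyfit(mu, omega_rel, 4, w=1.0 / kappa)
D1_fit, D2_fit, D3_fit, D4_fit = c[3], 2 * c[2], 6 * c[1], 24 * c[0]

# integrated dispersion D_int(mu) = omega_mu - omega_0 - D1*mu
D_int = omega_rel - D1_fit * mu
```

With measured data, comparing the residuals of 3rd- and 4th-order fits (both weighted the same way) is what reveals whether a $D_4$ term is statistically warranted.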
The global trend of decreasing linewidth with increasing frequency is due to the fact that the waveguide-resonator external coupling strength is wavelength-dependent [@Cai:00]. In addition, an avoided modal crossing [@Herr:14; @Kordts:16] is identified around 213 THz, where both modes have approximately the same local resonance frequency. At such a modal crossing point, the resonances deviate from the fitted dispersion curve and a local linewidth broadening is observed. In conclusion, we have presented a novel way to extend frequency comb assisted diode laser spectroscopy with cascaded lasers, enabling a full measurement bandwidth of 37.4 THz (1355 – 1630 nm) at MHz resolution with scanning speeds above 1 THz/s. We show potential applications for molecular spectroscopy and for the measurement of higher order dispersion in microresonators, which we demonstrate here for the first time in a Si$_3$N$_4$ photonic chip-based microresonator. Furthermore, the described cascaded laser spectroscopy can be extended with more lasers, enabling a further increase of the measurement bandwidth. Funding Information {#funding-information .unnumbered} =================== We gratefully acknowledge funding via the Defense Sciences Office (DSO), DARPA (W911NF-11-1-0202); European Space Agency (ESA) (ESTEC CN 4000105962/12/NL/PA); Swiss National Science Foundation (SNSF) (Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (SNF)). M.G. acknowledges support from the EPFL fellowship programme co-funded by Marie Curie, FP7 Grant agreement no. 291771. Acknowledgments {#acknowledgments .unnumbered} =============== The Si$_3$N$_4$ microresonator samples were fabricated in the EPFL center of MicroNanoTechnology (CMi).
Bibliography (entries recoverable only as DOI/URL links):

- http://dx.doi.org/10.1038/416233a
- http://dx.doi.org/10.1103/RevModPhys.75.325
- http://dx.doi.org/10.1039/C1FD00005E
- http://dx.doi.org/10.1038/nphoton.2008.293
- http://dx.doi.org/10.1038/nphoton.2009.217
- http://dx.doi.org/10.1038/nphoton.2009.138
- http://dx.doi.org/10.1364/OL.38.002026
- http://dx.doi.org/10.1364/OE.22.024914
- http://dx.doi.org/10.1038/nphoton.2010.228
- http://dx.doi.org/10.1063/1.4742136
- http://dx.doi.org/10.1364/JOSAB.30.002107
- http://dx.doi.org/10.1364/OL.39.004923
- http://dx.doi.org/10.1364/OE.20.027661
- http://dx.doi.org/10.1103/PhysRevLett.113.123901
- http://dx.doi.org/10.1364/OL.41.000452
- http://dx.doi.org/10.1038/ncomms6668
- http://dx.doi.org/10.1364/OPTICA.3.000020
- http://dx.doi.org/10.1364/OPEX.13.000882
- http://dx.doi.org/10.1364/OE.14.005975
- http://dx.doi.org/10.1364/OE.16.004130
- http://dx.doi.org/10.1364/OE.20.026337
- http://dx.doi.org/10.1038/nature06401
- http://dx.doi.org/10.1126/science.1193968
- http://dx.doi.org/10.1038/nphoton.2013.343
- http://dx.doi.org/10.1364/OPTICA.2.001078
- (no link recoverable)
- http://dx.doi.org/10.1126/science.aad4811
- http://dx.doi.org/10.1364/OE.22.003732
- http://dx.doi.org/10.1364/OL.39.005503
- http://dx.doi.org/10.1103/PhysRevLett.116.103902
- http://dx.doi.org/10.1038/nphoton.2016.36
- http://dx.doi.org/10.1063/1.4890986
- http://dx.doi.org/10.1038/nphoton.2013.183
- http://dx.doi.org/10.1016/j.jqsrt.2013.07.002
- http://dx.doi.org/10.1119/1.18457
- http://dx.doi.org/10.1364/JOSAB.17.001051
- http://dx.doi.org/10.1103/PhysRevLett.85.74
--- author: - | \ Department of Electrical & Computer Engineering, University of Washington bibliography: - 'reference.bib' title: Constrained Upper Confidence Reinforcement Learning --- Introduction {#sec:intro} ============ Related Work {#sec:related_work} ============ Constrained Upper Confidence Reinforcement Learning Algorithm {#sec:algo} ============================================================= Analysis: Regret Bounds and High-Probability Safety Guarantees {#sec:analysis} ==============================================================
--- abstract: 'Feedback can be utilized to convert information into useful work, making it an effective tool for increasing the performance of thermodynamic engines. Using feedback reversibility as a guiding principle, we devise a method for designing optimal feedback protocols for thermodynamic engines that extract all the information gained during feedback as work. Our method is based on the observation that in a feedback-reversible process the measurement and the time-reversal of the ensuing protocol both prepare the system in the same probabilistic state. We illustrate the utility of our method with two examples of the multi-particle Szilard engine.' address: 'Departamento de Física Atómica, Molecular y Nuclear and GISC, Universidad Complutense de Madrid, 28040 Madrid, Spain, EU' author: - 'Jordan M. Horowitz and Juan M. R. Parrondo' bibliography: - 'Feedback.bib' - 'PhysicsTexts.bib' title: 'Designing optimal discrete-feedback thermodynamic engines' --- Introduction ============ An important application of feedback is to increase the performance of thermodynamic engines by converting the information gathered during feedback into mechanical work [@Leff; @Allahverdyan2008; @Suzuki2009; @Toyabe2010; @Kim2011; @Abreu2011; @Vaikuntanathan2011]. However, for feedback implemented discretely – through a series of feedback loops initiated at predetermined times – the second law of thermodynamics for discrete feedback limits the maximum amount of work that can be extracted [@Sagawa2008; @Sagawa2010; @Horowitz2010; @Suzuki2010; @Ponmurugan2010; @Sagawa2011b; @Sagawa2011]. 
Namely, the average work extracted $\langle W\rangle$ during a thermodynamic process with discrete feedback, in which a system is driven from one equilibrium state at temperature $T$ to another equilibrium state at the same temperature, is bounded by the difference between the information gained during feedback $\langle I\rangle$ and the average free energy difference $\langle\Delta F\rangle$: $$\label{eq:GenSecLaw} \langle W\rangle \le kT\langle I\rangle -\langle \Delta F\rangle,$$ where $k$ is Boltzmann’s constant. Here, $\langle I \rangle$ is the mutual information between the microscopic state of the system and the measurement outcomes, and $\langle \Delta F\rangle$ is the average free energy difference between the initial equilibrium state and the final equilibrium state, which may differ for each measurement outcome. Notice that the bound is expressed in terms of the extracted work, since we have in mind applications to thermodynamic engines. This differs from the more common convention of using the work done on the system, which is minus the work extracted [@Sagawa2008; @Sagawa2010; @Horowitz2010; @Suzuki2010; @Ponmurugan2010; @Sagawa2011b; @Sagawa2011]. *Optimal* thermodynamic engines extract the maximum amount of work, saturating the bound in \[$\langle W\rangle =kT\langle I\rangle -\langle\Delta F\rangle$\]. Their design often proceeds in two steps. One first selects a physical observable $M$ to be measured. Then, associated to each measurement outcome $m$, one chooses a unique protocol for varying a set of external parameters $\lambda$ during a time interval from $t=0$ to $\tau$, $\Lambda^m=\{\lambda_t^m\}_{t=0}^\tau$. For the process to be optimal, the collection of protocols $\{\Lambda^m\}$ must be designed to extract as work all the information gained from the measurement.
While at first it may not be obvious how to design a collection of optimal protocols [@Kim2011; @Abreu2011], there is a generic procedure for constructing such a collection given a physical observable $M$ [@Abreu2011; @Jacobs2009; @Erez2010; @Hasegawa2010; @Takara2010; @Esposito2011]; specifically, the optimal protocol is to switch the Hamiltonian instantaneously immediately after the measurement – through an instantaneous change of the external parameters – so that the probabilistic state of the system conditioned on the measurement outcome is an equilibrium Boltzmann distribution with respect to the new Hamiltonian. The external parameters are then reversibly adjusted to their final value, completing the protocol. While such a protocol can always be constructed theoretically, it may be difficult to realize experimentally: one may need access to an infinite number of external parameters in order to effect the instantaneous switching of the Hamiltonian [@Esposito2011]. Furthermore, there are optimal protocols that cannot be constructed by implementing this generic procedure. Hence, it is worthwhile to develop alternative procedures for engineering collections of optimal protocols. In a recent article, we characterized optimal feedback processes, demonstrating that they are *feedback reversible* – indistinguishable from their time-reversals [@Horowitz2011]. There we pointed to the possibility of exploiting feedback reversibility in the design of optimal thermodynamic engines. In this article, we take the next step by explicitly formulating a recipe for engineering a collection of optimal feedback protocols for a given observable $M$, using feedback reversibility as a guiding principle. We present our method in , generalizing the generic procedure outlined in the previous paragraph.
We then illustrate our method in with two pedagogical models inspired by the multi-particle Szilard engine recently introduced in [@Kim2011], and subsequently analyzed in [@Kim2011b]: a classical two-particle Szilard engine with hard-core interactions, and a classical $N$-particle Szilard engine with short-ranged, repulsive interactions. In each model, we design a different collection of feedback protocols, demonstrating the utility and versatility of our method. Concluding remarks are offered in with a view towards potential applications of our method to quantum feedback. Measurement and preparation {#sec:prep} =========================== In this section, we describe a general method for designing optimal feedback protocols. Our analysis is based on a theoretical framework characterizing the thermodynamics of feedback formulated in [@Sagawa2010; @Horowitz2010; @Suzuki2010; @Ponmurugan2010; @Sagawa2011b; @Sagawa2011; @Horowitz2011]. Consider a classical system whose position in phase space at time $t$ is $z_t$. The system, initially in equilibrium at temperature $T$, is driven by varying a set of external control parameters $\lambda$ initially at $\lambda_0$ from time $t=0$ to $\tau$ using feedback. At time $t=t_m$, an observable $M$ is measured whose outcomes $m$ occur randomly with probability $P(m|z_{t_m})$ depending only on the state of the system at the time of measurement $z_{t_m}$. The protocol, denoted as $\Lambda^m=\{\lambda_t^m\}_{t=0}^\tau$, depends on the measurement outcome after time $t_m$. Thermal fluctuations cause the system to trace out a random trajectory through phase space $\gamma=\{z_t\}_{t=0}^\tau$. The work extracted along this trajectory is $W[\gamma; \Lambda^m]$, and the reduction in our uncertainty due to the measurement is [@Sagawa2010; @Horowitz2010; @Horowitz2011] $$\label{eq:I2} I[\gamma;\Lambda^m]=\ln\frac{P(m|z_{t_m})}{P(m)},$$ where $P(m)$ is the probability of obtaining measurement outcome $m$. 
For error-free measurements, which we consider in our illustrative examples below, the measurement outcome is uniquely determined by the state of the system at the time of measurement. Consequently, $P(m|z_{t_m})$ is always either zero or one. When $P(m|z_{t_m})=1$, reduces to $$\label{eq:I} I[\gamma;\Lambda^m]=-\ln P(m).$$ When $P(m|z_{t_m})=0$, is divergent; however, this divergence occurs with zero probability, and therefore does not contribute to the average in . Finally, the change in free energy from the initial equilibrium state, $F(\lambda_0)$, to the final equilibrium state, $F(\lambda^m_\tau)$, denoted as $\Delta F[\Lambda^m]=F(\lambda^m_\tau)-F(\lambda_0)$, is realization dependent, since the final external parameter value at time $\tau$ depends on the measurement outcome $m$. Associated to the feedback process is a distinct thermodynamic process called the reverse process [@Horowitz2010; @Sagawa2011b; @Horowitz2011]. The reverse process begins by first randomly selecting a protocol $\Lambda^m$ according to $P(m)$. The system is then prepared in an equilibrium state at temperature $T$ with external parameters set to $\lambda^m_\tau$. From time $t=0$ to $\tau$, the system is driven by varying the external parameters according to the time-reversed conjugate protocol $\tilde{\Lambda}^m=\{{\tilde \lambda}_t\}_{t=0}^\tau$, where $\tilde\lambda^m_t=\lambda^m_{\tau-t}$. For every trajectory $\gamma=\{z_t\}_{t=0}^\tau$ of the forward process there is a time-reversed conjugate trajectory $\tilde\gamma=\{\tilde{z}_t\}_{t=0}^\tau$, where $\tilde{z}_t=z_{\tau-t}^*$ and $*$ denotes momentum reversal. A feedback process that is indistinguishable from its reverse process is called *feedback reversible* [@Horowitz2011]. A useful microscopic expression for the present considerations is in terms of the phase space densities along the feedback process and the corresponding reverse process. 
Namely, the phase space density of the feedback process at time $t$ conditioned on executing protocol $ \Lambda^m$, $\rho(z_t|\Lambda^m)$, is identical to the phase space density in the reverse process at time $ \tau-t$ conditioned on executing protocol $ \tilde\Lambda^m$, $\tilde\rho(\tilde{z}_{\tau-t}| \tilde\Lambda^m)$: $$\label{eq:reversible} \rho(z_t|\Lambda^m)=\tilde\rho(\tilde{z}_{\tau-t}| \tilde\Lambda^m).$$ Additionally, $$\label{eq:WIequal} W[\gamma,\Lambda^m]=kTI[\gamma,\Lambda^m]-\Delta F[\Lambda^m]$$ for every realization [@Horowitz2011]. For cyclic ($\Delta F=0$) feedback-reversible processes, such as our illustrative examples, is simply $W[\gamma,\Lambda^m]=kTI[\gamma,\Lambda^m]$. We now utilize and to develop a method for designing optimal feedback processes (or equivalently feedback-reversible processes). Our method is based on the observation that has a noteworthy interpretation at the measurement time $t=t_m$: $$\label{eq:revMeas} \rho(z_{t_m}|\Lambda^m)=\tilde\rho(\tilde{z}_{\tau-t_m}|\tilde\Lambda^m).$$ Specifically, $\rho(z_{t_m}|\Lambda^m)$ is the phase space density of the system at the time of the measurement conditioned on implementing protocol $\Lambda^m$; it represents our knowledge about the microscopic state of the system immediately after the measurement. We therefore refer to it as the *post-measurement* state. The right hand side of , $\tilde\rho(\tilde{z}_{\tau-t_m}| \tilde\Lambda^m)$, is the phase space density at time $t=\tau-t_m$ produced by the reverse process when protocol $ \tilde\Lambda^m$ is executed; it is the probabilistic state of the system prepared (or produced) by using protocol $\tilde\Lambda^m$ in the reverse process. Thus, we refer to $\tilde\rho(\tilde{z}_{\tau-t_m}|\tilde\Lambda^m)$ as the *prepared* state. With this terminology, states that for a process to be feedback reversible the state prepared by the reverse process must be identical to the post-measurement state. 
This insight is our main tool for designing optimal feedback protocols. Instead of focusing on the feedback process, we search for a protocol that prepares the post-measurement state. We call this procedure *preparation*. Once we have chosen our protocols, we can verify their effectiveness by checking the equality in ; the deviation from equality in is a measure of the reversibility of each of the protocols in $\{\Lambda^m\}$. Applications to the multi-particle Szilard engine {#sec:ex} ================================================= In this section, we apply the preparation method presented in to two classical extensions of the Szilard engine inspired by the quantum multi-particle Szilard engine considered by Kim *et al.* in [@Kim2011]. In , we design a collection of optimal protocols for a classical Szilard engine composed of two square particles with hard-core interactions. An $N$-particle Szilard engine consisting of ideal point particles with short-ranged, repulsive interactions is analyzed in . In both examples, we verify that our protocols are optimal through analytic calculations of the work and information. Two-particle Szilard engine {#subsec:two} --------------------------- To illustrate the utility of our method, we now analyze a two-particle Szilard engine. We have in mind two indistinguishable square hard-core particles with linear dimension $d$ confined to a two-dimensional box of width $L_x$ and height $L_y$, pictured in . The particles have a hard-core interaction with the walls, entailing that the center of each particle must be at least a distance $d/2$ from the walls. The box is in weak thermal contact with a thermal reservoir at temperature $kT=1$. Work is extracted using a cyclic, isothermal feedback protocol performed infinitely slowly, as illustrated in . Since the process is cyclic, $\langle\Delta F\rangle=0$, and we only need to investigate the extracted work.
In addition, since the process is infinitely slow and isothermal, the work can be expressed in terms of partition functions, as in [@Parrondo2001]. There are two configurational partition functions that will prove useful: the first, denoted $Z_2(x,y)$, is the partition function for the state where both particles are in the same box of width $x$ and height $y$; the second, $\bar{Z}_2(x,y)$, is the partition function for the state where the particles are in separate boxes, each of width $x$ and height $y$. The calculation of these partition functions is a straightforward though lengthy exercise in integral calculus, which we outline in \[sec:appendix\]. We initiate the feedback protocol with the engine in thermal equilibrium at temperature $kT=1$. We then infinitely slowly insert a thin partition from below, dividing the box into two equal halves along the horizontal direction, as depicted in . Because the particles are hard-bodied and of finite size, the insertion of the partition extracts work. As we slowly insert the partition, the system remains in equilibrium and able to explore its entire phase space until the leading tip of the partition is one particle length $d$ from the box’s top wall. At that point, the particles are too large to pass between the left and right halves of the box. At that moment, each particle becomes trapped in one half of the box: either both become trapped in the same half, or each is trapped in a separate half. The partition function at that moment, being a sum over all distinct microscopic configurations, is then the sum of the partition function when both become trapped in the left (or right) half, $Z_2(L_x/2,L_y)$, plus the partition function when they become trapped in separate halves, $\bar{Z}_2(L_x/2,L_y)$: $2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)$.
The work extracted up to that instant is determined from the ratio of the partition function at that moment to the initial partition function $Z_2(L_x,L_y)$ as $$\label{eq:Win} W_{\rm part}(L_x,L_y)=\ln\left[\frac{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}{Z_2(L_x,L_y)}\right].$$ Once the distance between the leading tip of the partition and the far wall of the box is less than $d$, neither particle is able to fit in the space between the tip and the wall. The partition’s tip is no longer able to push on the particles, and as a result no additional work beyond that in is extracted. Next, we measure in which half of the box the two particles are located. There are three outcomes, which we label $A$, $B$, and $C$, see . Outcomes $A$ and $C$ occur when both particles are found in the same half of the box, whereas outcome $B$ occurs when each particle is found in a separate half of the box. Since the partition functions $Z_2$ and $\bar{Z}_2$ count the number of distinct microscopic configurations, we can express the change in uncertainties associated to each outcome by inserting these partition functions into : $$\begin{aligned} \label{eq:IA} I_A=I_C=-\ln\left[\frac{Z_2(L_x/2,L_y)}{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}\right], \\ \label{eq:IB} I_B=-\ln\left[\frac{\bar{Z}_2(L_x/2,L_y)}{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}\right].\end{aligned}$$ If both particles are found in the same half of the box (outcome $A$ or $C$), the optimal protocol is to quasi-statically shift the partition to the opposite end of the box, as in the single-particle Szilard engine [@Szilard1964], extracting work $$\label{eq:WorkExp} W_{\rm shift}=\ln\left[\frac{Z_2(L_x,L_y)}{Z_2(L_x/2,L_y)}\right].$$ Summing and , we find that the work extracted during the feedback protocol associated to measurement outcome $A$ (or $C$) is $$\begin{aligned} W_A&=W_{\rm part}(L_x,L_y)+W_{\rm shift} \\ &=\ln\left[\frac{2Z_2(L_x/2,L_y)+\bar{Z}_2(L_x/2,L_y)}{Z_2(L_x/2,L_y)}\right],\end{aligned}$$ which equals $I_A$ in . 
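The outcome statistics above can be spot-checked numerically. The sketch below (Python, $kT=1$; the $8d \times 4d$ box is an arbitrary illustrative choice) evaluates $Z_2$ by inclusion–exclusion over the two hard-core constraints and computes the outcome probabilities together with $I_A$ and $I_B$:

```python
import math

def zbar2(lx, ly, d):
    # Particles in separate lx-by-ly boxes: (lx - d)^2 (ly - d)^2.
    return (lx - d) ** 2 * (ly - d) ** 2

def z2(lx, ly, d):
    # Both particles in one lx-by-ly box, evaluated by inclusion-exclusion
    # over the constraints |x1 - x2| >= d or |y1 - y2| >= d; the factor
    # 1/2 accounts for indistinguishability.
    ax = max(lx - 2 * d, 0.0) ** 2   # phase-space measure of |x1 - x2| >= d
    ay = max(ly - 2 * d, 0.0) ** 2   # phase-space measure of |y1 - y2| >= d
    return 0.5 * (ax * (ly - d) ** 2 + (lx - d) ** 2 * ay - ax * ay)

def outcome_stats(Lx, Ly, d=1.0):
    """Probabilities and information for outcomes A (= C) and B."""
    zh = z2(Lx / 2, Ly, d)       # both particles trapped in one half
    zbh = zbar2(Lx / 2, Ly, d)   # one particle trapped in each half
    tot = 2 * zh + zbh
    pA, pB = zh / tot, zbh / tot
    return pA, pB, -math.log(pA), -math.log(pB)

pA, pB, iA, iB = outcome_stats(8.0, 4.0)
```

The probabilities satisfy $2P_A + P_B = 1$, and the likelier outcome $B$ carries less information, as expected.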
Thus, according to this protocol is optimal as expected, since this protocol when run in reverse clearly prepares the post-measurement state conditioned on $A$. When each particle is found in a separate half of the box (outcome $B$), the optimal protocol is less clear. Moving the partition in either direction requires work rather than extracting it. Kim *et al.*, for instance, opt to withdraw the partition without obtaining any useful work [@Kim2011]: the information in the measurement is wasted. However, our discussion in suggests a way to design an optimal cyclic protocol: the protocol must drive the system from the post-measurement state of outcome $B$ back to the initial state, and when run in reverse must prepare the state associated to outcome $B$ by segregating each particle into a different half of the box. When the particles do not interact, there is no obvious optimal protocol. However, in our model we can exploit the particle interactions. Specifically, due to the hard-core interactions, there is a greater likelihood of trapping the particles in separate halves of the box upon inserting the partition when the box is smaller. This observation suggests the following protocol executed in response to measurement outcome $B$. After the partition is inserted, we infinitely slowly compress the box until its width is $l_x>2d$ and its height is $l_y>d$. The extracted work during compression is $$\label{eq:Wcomp} W_{\rm comp}=\ln\left[\frac{\bar{Z}_2(l_x/2,l_y)}{\bar{Z}_2(L_x/2,L_y)}\right].$$ Next, the partition is removed infinitely slowly, extracting $-W_{\rm part}(l_x,l_y)$ \[see \] work. Finally, the box is expanded back to its original size extracting $$\label{eq:Wexp2} W_{\rm exp}=\ln\left[\frac{Z_2(L_x,L_y)}{Z_2(l_x,l_y)}\right].$$ Combining the sum of , , , and $-W_{\rm part}(l_x,l_y)$, with , we find, after a simple algebraic manipulation, that the deviation from reversibility \[cf. 
\] can be expressed as $$\label{eq:WminusI} W_B-I_B=-\ln\left[1+2\frac{Z_2(l_x/2,l_y)}{\bar{Z}_2(l_x/2,l_y)}\right].$$ Note that $W_B-I_B$ only depends on the size of the compressed box with dimensions $l_x\times l_y$. To investigate the reversibility of our protocol, we study the dependence of $W_B-I_B$ on the compressed box size. To simplify our analysis, we only consider boxes such that $l_x=2l_y$. In , we plot $W_B-I_B$ as a function of the box size parameter $\xi=l_x/d=2l_y/d$. The smaller $\xi$, the smaller the box. Notice that $W_B-I_B\le0$. We also observe that the process becomes reversible ($W_B-I_B=0$) when $\xi<4$ ($l_x<4d$ and $l_y<2d$); the box is so small when $\xi<4$ that both particles cannot fit into the same half of the box. Consequently, when the partition is inserted during the reverse process each particle is confined to a separate half of the box, preparing the post-measurement state with probability one. To confirm that our protocol can be optimal, we plot in the total average work extracted $\langle W\rangle = P_AW_A+P_BW_B+P_CW_C$ – where $P_j$ is the probability to implement protocol $j=A,B,C$ – as a function of the box size parameter $\xi$. Again, we see that when $\xi<4$ our protocol becomes optimal: $\langle W\rangle = \langle I\rangle$. For comparison, we have included in the work extracted when implementing the protocol proposed in Ref. [@Kim2011], $\langle W_{\rm K}\rangle$, where the partition is slowly removed in response to outcome $B$. 
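The behavior of $W_B-I_B$ as a function of $\xi$ is easy to reproduce. In this sketch (Python, $kT=1$), $Z_2$ is evaluated by inclusion–exclusion and vanishes whenever both sides of the half-box are shorter than $2d$, which immediately gives $W_B-I_B=0$ for $\xi<4$:

```python
import math

def zbar2(lx, ly, d):
    # Particles in separate lx-by-ly boxes.
    return (lx - d) ** 2 * (ly - d) ** 2

def z2(lx, ly, d):
    # Inclusion-exclusion over |x1 - x2| >= d or |y1 - y2| >= d;
    # zero whenever both box sides are shorter than 2d.
    ax = max(lx - 2 * d, 0.0) ** 2
    ay = max(ly - 2 * d, 0.0) ** 2
    return 0.5 * (ax * (ly - d) ** 2 + (lx - d) ** 2 * ay - ax * ay)

def wB_minus_iB(xi, d=1.0):
    # Compressed box with l_x = 2 l_y = xi * d.
    lx, ly = xi * d, 0.5 * xi * d
    return -math.log(1.0 + 2.0 * z2(lx / 2, ly, d) / zbar2(lx / 2, ly, d))
```

For $\xi=3$ the protocol is exactly reversible, while for $\xi=6$ part of the information is wasted and $W_B-I_B<0$, becoming more negative as the box grows.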
Further insight can be gained by noting that the ratio $Z_2/\bar{Z}_2$ in , which controls the degree of reversibility, has a simple physical interpretation in terms of the change in free energy during an irreversible mixing of two indistinguishable particles, each in separate boxes of sizes $l_x/2\times l_y$, into one box of the same size, $l_x/2\times l_y$: $$\Delta F_{\rm mix}=-\ln\left[\frac{Z_2(l_x/2,l_y)}{\bar{Z}_2(l_x/2,l_y)}\right].$$ Thus, this protocol is reversible when there is an infinite free energy difference between the states in which both particles are in the same box and where each particle is in a separate box. For an ideal gas $\Delta F_{\rm mix}=\ln2$: two indistinguishable ideal gas particles confined to the same box have half as many distinct microscopic configurations as when they are in separate boxes. For ideal gases our protocol is not optimal ($\Delta F_{\rm mix}\neq\infty$ and $W_B-I_B\neq0$), as it exploits particle interactions. Nevertheless, there may exist other protocols that are optimal for ideal gases. In particular, such a collection could be devised using the generic procedure outlined in the Introduction, where the Hamiltonian is instantaneously switched immediately after the measurement so that the post-measurement state is described by an equilibrium Boltzmann distribution with respect to the new Hamiltonian [@Abreu2011; @Jacobs2009; @Erez2010; @Hasegawa2010; @Takara2010; @Esposito2011]; however, this new Hamiltonian would contain an interaction potential that forces the particles to segregate themselves into opposite halves of the box. $N$-particle Szilard engine {#subsec:many} --------------------------- As a final illustration, we present an optimal feedback protocol for a classical $N$-particle Szilard engine. Consider $N$ indistinguishable, classical, point particles with short-ranged, repulsive interactions confined to a box of volume $V$ in weak thermal contact with a thermal reservoir at temperature $kT=1$. 
The protocol begins by quickly and isothermally inserting an infinitely thin partition into the box dividing it into two equal halves of volume $V/2$. Since this is performed rapidly and the particles are infinitely small, the particles never have an opportunity to interact with the partition, implying that this insertion requires no work. We then measure the number of particles in the left half of the box. Based on the outcome, we implement a cyclic, isothermal feedback protocol. The change in uncertainty when $n$ particles are found in the left half of the box ($N-n$ particles in the right half) is, from , $$\label{eq:In} I_n=-\ln\left[\frac{1}{2^N}\frac{N!}{n!(N-n)!}\right].$$ This information can be extracted completely as work by implementing the following protocol. First, we slowly lower $n$ ($N-n$) localized potential minima or trapping potentials to a depth $E$ in the left (right) half of the box. The trapping potentials are assumed to be deep compared to the thermal energy ($E\gg kT$), but shallow compared to the interaction energy, so that only one particle is confined in each trapping potential, as depicted in . The partition is then quickly removed, and the trapping potentials are slowly turned off. Work is only extracted when the trapping potentials are turned on or off. Since these processes are very slow, the work extracted can be computed in terms of partition functions. Assuming that the volume $V$ of the box is large compared with the interaction length, we can approximate the configurational partition function for the equilibrium state prior to inserting the partition as $$Z(V)=\frac{V^N}{N!}.$$ After making the measurement and finding $n$ particles in the left half of the box, the configurational partition function is $$Z_n(V)=\frac{1}{n!(N-n)!}\left(\frac{V}{2}\right)^N.$$ After lowering the trapping potentials to a depth $E$ each particle is confined to a unique trapping potential of volume $\emph{v}$. 
At this point, the configurational partition function is $$\bar{Z}_n(\emph{v})=\emph{v}^Ne^{-NE}.$$ In terms of these partition functions, the work extracted while trapping the particles is $$\label{eq:trap} W_{\rm trap}=\ln\left[\frac{\bar{Z}_n(\emph{v})}{Z_n(V)}\right]=\ln\left[2^N\left(\frac{\emph{v}}{V}\right)^Nn!(N-n)!e^{-NE}\right],$$ and the work extracted when the trapping potentials are turned off is $$\label{eq:off} W_{\rm off}=\ln\left[\frac{Z(V)}{\bar{Z}_n(\emph{v})}\right]=\ln\left[\frac{1}{N!}\left(\frac{V}{\emph{v}}\right)^Ne^{NE}\right].$$ Summing and , we find the total work to be $$W_n=W_{\rm trap}+W_{\rm off}=\ln\left[2^N\frac{n!(N-n)!}{N!}\right],$$ which is independent of $E$ and is equal to the change in uncertainty $I_n$ in . This protocol is optimal and feedback reversible; run in reverse the protocol confines exactly $n$ particles in the left half with certainty. At first it may be surprising that work can be extracted from this protocol, since we are merely adding and then removing potential minima. However, net work can be extracted, since the work extracted while slowly turning on or off a trapping potential depends on the total volume accessible to the particles. To see this, consider the simplest scenario of turning off one trapping potential with one particle confined to a box of volume $V$. As the depth of the potential minimum becomes shallower, work is done on the particle until it escapes from the range of the trapping potential. Once the particle leaves, turning off the potential requires no additional work until the particle returns. The time for the particle to return depends on the size of the box. For a box of larger volume, the time to return is longer, and the process requires less work. 
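The cancellation of the trap depth $E$ (and of $V$ and $v$) in $W_n$ can be verified directly. The sketch below (Python, $kT=1$; the values of $V$ and $v$ are arbitrary illustrative choices) works with logarithms of the partition functions, using $\ln k! = $ `lgamma(k+1)` to avoid overflow:

```python
import math

def info(N, n):
    # I_n = -ln[ C(N, n) / 2^N ]
    return -math.log(math.comb(N, n) / 2 ** N)

def work(N, n, E, V=100.0, v=1e-3):
    # W_trap = ln[ Zbar_n(v) / Z_n(V) ] and W_off = ln[ Z(V) / Zbar_n(v) ],
    # written as sums of logs; lgamma(k + 1) = ln k!.
    w_trap = (N * math.log(2) + N * math.log(v / V) - N * E
              + math.lgamma(n + 1) + math.lgamma(N - n + 1))
    w_off = N * math.log(V / v) + N * E - math.lgamma(N + 1)
    return w_trap + w_off
```

For every $n$ and any trap depth, `work` returns $I_n$ to machine precision: the trap parameters drop out of the sum, exactly as in the derivation above.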
Going back to the $N$-particle protocol, the work extracted while turning on the trapping potentials after the partition has been inserted – when the available volume for each particle is $V/2$ – is more than the work done during the final step as the trapping potentials are removed, because the volume $V$ available for the particles to explore is larger. When the number of trapping potentials is not equal to the number of particles $N$, this protocol is no longer optimal. The reason is that work can only be extracted when a particle can fall into a potential being lowered; the more trapping potentials a particle has access to, the more work can be extracted. If there were fewer trapping potentials than particles, less work would be extracted overall, as there would be fewer sites where energy is removed. If more than $N$ trapping potentials are lowered, we are able to extract additional work. However, after the partition is removed, each particle can explore an even greater number of trapping potentials; the work to turn off the potentials would exceed that extracted by turning them on. Conclusion {#sec:conclusion} ========== Feedback-reversible processes are optimal, converting all the information acquired through feedback into work. In this article, we formulated a strategy, called preparation, for designing a collection of optimal protocols given a measured physical observable. In the preparation method, optimal protocols are selected by searching for an external parameter protocol whose time-reversal prepares the post-measurement state. To highlight the utility of the preparation method, we applied it to two pedagogical examples – a two- and $N$-particle Szilard engine – exhibiting a distinct collection of optimal protocols for each. In both examples, we addressed the simplest scenario of error-free measurements. 
When there are measurement errors – for example, if in the $N$-particle Szilard engine (), there were a chance to miscount the number of particles in the left half of the box – the preparation method still provides a useful procedure for selecting an optimal protocol. Furthermore, each of our optimal protocols contained at least one infinitely slow step. This is unavoidable as the process must be reversible before and after any measurements. Consequently, our method does not strictly apply to finite-time processes. However, the preparation method may still provide insight into the design of optimal finite-time processes, since an optimal finite-time protocol, roughly speaking, is as close to reversible as possible [@Abreu2011; @Schmiedl2007]. Generally, we expect the preparation method to be of use whenever the external parameter protocol forces a symmetry breaking in the system prior to the measurement, such as the insertion of the partition in the Szilard engine. Consider a thermodynamic process ${\cal P}$ during which a system is driven from an initial equilibrium state $A$ through a critical point, where the system chooses among several phases or macroscopic states $B_i$ with probability $p_i$. In addition, suppose there exists a collection of processes ${\cal P}'_i$ during which the symmetry is broken forcibly (not spontaneously), driving the system from $A$ to $B_i$ with probability one. Then, according to our recipe this spontaneous symmetry breaking transition can be exploited using the following optimal feedback protocol: start in state $A$, execute process ${\cal P}$, measure which state $B_i$ resulted from the symmetry breaking, and then run the corresponding process ${\cal P}_i^\prime$ in reverse to drive the system back to its initial state $A$. 
By construction, this process prepares the post-measurement state with unit probability, and therefore extracts as work $\langle W\rangle= - kT\sum_i p_i\log p_i$, which is $kT$ times the information gained in the measurement, $\langle I\rangle= - \sum_i p_i\log p_i$. One interesting instance of this setup is the Ising model, where a measurement of the system’s total magnetization after the symmetry breaking phase transition between the paramagnetic and ferromagnetic states can be exploited to extract work. This information can be utilized by modifying an external magnetic field, as demonstrated in [@Parrondo2001]. In the introduction, we outlined a general procedure for preparing a collection of optimal protocols, originally presented in [@Abreu2011; @Jacobs2009; @Erez2010; @Hasegawa2010; @Takara2010; @Esposito2011], in which the Hamiltonian is instantaneously changed immediately following the measurement in order to make the post-measurement state an equilibrium Boltzmann distribution, followed by a reversible switching of the external parameters to their final values. These protocols prepare the post-measurement states; as such, this generic procedure is a special case of the preparation method developed here. However, the implementation of the preparation method can lead to a wider variety of protocols. Take, for example, the two-particle Szilard engine discussed in . Imagine we make a measurement and find outcome $B$, where each particle is confined to a separate half of the box. Let $\rho_B(z)$ denote the phase space density conditioned on this measurement outcome. In the generic procedure, immediately after the measurement we would change the Hamiltonian to $H_B(z)=-\ln\rho_B(z)$, which is a strange Hamiltonian that assigns infinite energy to configurations where both particles are in the same half of the box. In contrast, the preparation method led to a physically realizable protocol, in which we vary the size of the box. 
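The work extracted by the symmetry-breaking recipe above is simply the Shannon information of the measured outcome. A minimal numerical check (Python, $kT=1$); the symmetric two-outcome case corresponds to the Ising example, where each sign of the magnetization occurs with probability $1/2$:

```python
import math

def extractable_work(probs):
    # <W> = -sum_i p_i ln p_i (kT = 1): the information gained by
    # measuring which macroscopic state B_i the symmetry breaking chose.
    return -sum(p * math.log(p) for p in probs if p > 0.0)
```

An unbiased two-phase transition yields $\ln 2$ of work per cycle; biased outcomes carry less information and yield less work.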
Finally, we formulated the preparation method only for classical systems. However, the second law of thermodynamics for discrete feedback was originally derived for quantum evolutions [@Sagawa2008]. Its mathematical structure resembles the classical version, which suggests that feedback-reversible processes are also optimal quantum feedback protocols and that the preparation method would also apply to quantum feedback engines. Applications of the preparation method to quantum systems hold interesting possibilities. For example, in both the classical multi-particle Szilard engines analyzed here, the optimal protocols required repulsive particle interactions. In a quantum multi-particle Szilard engine composed of fermions, the Pauli exclusion principle induces a repulsive interaction of purely quantum origin, which could be exploited to develop a collection of optimal feedback protocols. We acknowledge Hal Tasaki for suggesting the $N$-particle Szilard engine protocol. Financial support for this project came from Grant MOSAICO (Spanish Government) and MODELICO (Comunidad de Madrid). Partition functions for two square hard-core particles in a two-dimensional box {#sec:appendix} =============================================================================== In this appendix, we report the configurational partition functions employed in Sect. \[subsec:two\] for a gas composed of two square particles of width $d$ with hard-core interactions confined to a two-dimensional box of width $L_x$ and height $L_y$. The partition function for hard-core particles is the number of distinct microscopic configurations subject to the constraint that the centers of the particles be separated by a distance of at least $d$. In addition, the particles have a hard-core interaction with the walls enclosing the box, with the result that the center of each particle must be at least a distance $d/2$ from the edges of the box. Two partition functions are utilized in our analysis in . 
The first is the partition function for the equilibrium state when each particle is confined to a separate box of dimensions $L_x \times L_y$: $$\begin{aligned} \bar{Z}_2(L_x,L_y)&=\int_{d/2}^{L_x-d/2}dx_1\, \int_{d/2}^{L_y-d/2}dy_1\, \int_{d/2}^{L_x-d/2}dx_2\, \int_{d/2}^{L_y-d/2}dy_2 \\ &=(L_x-d)^2(L_y-d)^2.\end{aligned}$$ The second is for the equilibrium state when both particles are confined to the same box of dimensions $L_x \times L_y$. This partition function can be expressed as the integral $$\begin{aligned} \nonumber \fl Z_2(L_x,L_y)=&\frac{1}{2}\int_{d/2}^{L_x-d/2}dx_1\, \int_{d/2}^{L_y-d/2}dy_1\, \int_{d/2}^{L_x-d/2}dx_2\, \int_{d/2}^{L_y-d/2}dy_2\, \\ \nonumber &\times[\Theta(|x_1-x_2|-d)+\Theta(|y_1-y_2|-d)-\Theta(|x_1-x_2|-d)\Theta(|y_1-y_2|-d)],\end{aligned}$$ where $\Theta(x)$ is the Heaviside step function and the preceding factor of $1/2$ is included because the particles are indistinguishable. The calculation of the above integral can be performed using standard methods of integral calculus, with the result, assuming $L_x>2d$, $$\fl Z_2(L_x,L_y)= \left\{ \begin{array}{ll} \frac{1}{2}(L_x-2d)^2(L_y-2d)^2+d(L_x-2d)(L_y-2d)(L_x+L_y-4d) \\ \, \, \, +\frac{d^2}{2}\left[(L_x-2d)^2+(L_y-2d)^2\right], & L_y \ge 2d \\ \frac{1}{2}(L_y-d)^2(L_x-2d)^2, & d\le L_y< 2d \end{array} \right. .$$
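The closed form can be cross-checked by brute force. The sketch below (Python) evaluates $Z_2$ both via inclusion–exclusion over the two hard-core constraints and by Monte Carlo sampling of the particle centers; the box dimensions used in the check are arbitrary test values:

```python
import random

def z2_closed(lx, ly, d):
    # Inclusion-exclusion: configurations with |x1 - x2| >= d or
    # |y1 - y2| >= d, halved because the particles are indistinguishable.
    ax = max(lx - 2 * d, 0.0) ** 2
    ay = max(ly - 2 * d, 0.0) ** 2
    return 0.5 * (ax * (ly - d) ** 2 + (lx - d) ** 2 * ay - ax * ay)

def z2_mc(lx, ly, d, n=200_000, seed=1):
    # Monte Carlo estimate: sample both centers uniformly over their
    # accessible ranges and count non-overlapping configurations.
    rng = random.Random(seed)
    a, b = lx - d, ly - d            # accessible range of each center
    hits = 0
    for _ in range(n):
        x1, x2 = rng.uniform(0, a), rng.uniform(0, a)
        y1, y2 = rng.uniform(0, b), rng.uniform(0, b)
        if abs(x1 - x2) >= d or abs(y1 - y2) >= d:
            hits += 1
    return 0.5 * (a * b) ** 2 * hits / n
```

For a $4d \times 3d$ box the two estimates agree to the expected Monte Carlo accuracy, and both vanish when neither side of the box exceeds $2d$.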
--- abstract: 'A careful analysis of the $HEAO1~A2~2-10~keV$ full-sky map of the X-ray background (XRB) reveals clustering on the scale of several degrees. After removing the contribution due to beam smearing, the intrinsic clustering of the background is found to be consistent with an auto-correlation function of the form $(3.6 \pm 0.9) \times 10^{-4}\,\theta^{-1}$ where $\theta$ is measured in degrees. If current AGN models of the hard XRB are reasonable and the cosmological constant-cold dark matter ($\Lambda CDM$) cosmology is correct, this clustering implies an X-ray bias factor of $b_X \sim 2$. Combined with the absence of a correlation between the XRB and the cosmic microwave background (CMB), this clustering can be used to limit the presence of an integrated Sachs-Wolfe (ISW) effect and thereby to constrain the value of the cosmological constant, $\Omega_\Lambda \le 0.60$ (95% C.L.). This constraint is inconsistent with much of the $\Omega_\Lambda$ parameter space currently favored by other observations. Finally, we marginally detect the dipole moment of the diffuse XRB and find it to be consistent with the dipole due to our motion with respect to the mean rest frame of the XRB. The limit on the amplitude of any intrinsic dipole is $\delta I_x / I \le 5 \times 10^{-3}$ at the 95% C.L. When compared to the local bulk velocity, this limit implies a constraint on the matter density of the universe of ${\Omega_m}^{0.6}/b_X(0) \gs 0.24$.' author: - 'S.P. Boughn' - 'R.G. Crittenden' - 'G.P. Koehrsen' title: 'The Large-Scale Structure of the X-ray Background and its Cosmological Implications' --- Introduction ============ The X-ray background (XRB) was discovered before the cosmic microwave background (CMB), but only now is its origin being fully understood. 
The hard ($2-10 ~ keV$) XRB has been nearly completely resolved into individual sources; most of these are active galactic nuclei (AGN), but there is a minor contribution from the hot, intergalactic medium in rich clusters of galaxies (Rosati et al. 2002; Cowie et al. 2002; Mushotzky et al. 2000). In addition, the spectra of these faint X-ray sources are consistent with that of the “diffuse” XRB. If current models of the luminosity functions and evolution of these sources are reasonably correct, then the XRB arises from sources in the redshift range $0 < z < 4$, making them an important probe of density fluctuations intermediate between relatively nearby galaxy surveys ($z \ls 0.5$) and the CMB ($z \sim 1000$). While there have been several attempts to measure large scale, correlated fluctuations in the hard XRB, these have only yielded upper limits or, at best, marginal detections (e.g. Barcons et al. 2000, Treyer et al. 1998 and references cited therein). On small scales, a recent correlation analysis of 159 sources in the Chandra Deep Field South survey detected significant correlations for separations out to $100~arcsec$ (Giacconi et al. 2001). (At the survey flux level, these sources comprise roughly two thirds of the hard XRB.) On much larger scales, a recent analysis by Scharf et al. (2000) claims a significant detection of large-scale harmonic structure in the XRB with spherical harmonic order $1 \le \ell \le 10$ corresponding to structures on angular scales of $\theta \gs 10^\circ$. The auto-correlation results we describe here complement this analysis, indicating clustering on angular scales of $3^{\circ}$ to $10^{\circ}$, corresponding to harmonic order of $ \ell \ls 30$. However, all three detections have relatively low signal to noise and require independent confirmation. 
The dipole moment of the XRB has received particular attention, primarily because of its relation to the dipole in the CMB, which is likely due to the Earth’s motion with respect to the rest frame of the CMB. If this is the case, one expects a similar dipole in the XRB with an amplitude that is 3.4 times larger because of the difference in spectral indices of the two backgrounds (Boldt 1987). In the X-ray literature, this dipole is widely known as the Compton-Getting effect (Compton & Getting 1935). In addition, it is quite likely that the XRB has an intrinsic dipole due to the asymmetric distribution in the local matter density that is responsible for the Earth’s peculiar motion in the first place. Searches for both these dipoles have concentrated on the hard XRB, since at lower energies the X-ray sky is dominated by Galactic structure. There have been several tentative detections of the X-ray dipole (e.g. Scharf et al. (2000)), but these have large uncertainties. A firm detection of an intrinsic dipole or even an upper limit on its presence would provide an important constraint on the inhomogeneity of the local distribution of matter via a less often used tracer of mass and a concomitant constraint on cosmological models (e.g., Lahav, Piran & Treyer 1997). This paper is organized as follows. In section §2 we describe the hard X-ray map used in the analysis, the determination of its effective beam size, and cuts made to remove the foreground contaminants. In section §3, we describe the remaining large scale structures in the map and the determination of their amplitudes. The dipole is of particular interest, and is the topic of section §4. The correlation function of the residual map and its implications for intrinsic correlations are discussed in section §5. In section §6, we compare our results to previous observations and discuss the cosmological implications of these results in §7. 
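The expected kinematic (Compton-Getting) amplitude discussed above is simple to estimate. In the sketch below (Python), the CMB dipole speed ($\sim 370~km~s^{-1}$) and the XRB energy spectral index ($\alpha \approx 0.4$, which supplies the factor $3+\alpha \approx 3.4$ quoted above) are assumed representative values, not quantities fitted in this analysis:

```python
# Back-of-the-envelope Compton-Getting dipole estimate (assumed inputs).
C_KM_S = 299_792.458                  # speed of light, km/s
beta = 370.0 / C_KM_S                 # v/c inferred from the CMB dipole
alpha = 0.4                           # XRB energy index, I(nu) ~ nu^-alpha
cg_dipole = (3.0 + alpha) * beta      # kinematic delta I / I
print(f"expected Compton-Getting dipole: {cg_dipole:.2e}")
```

The resulting $\delta I/I \approx 4.2 \times 10^{-3}$ is of the same order as the intrinsic-dipole limit quoted in the abstract, which is why disentangling the kinematic and intrinsic contributions is delicate.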
HEAO1 A2 ${\it 2 - 10~keV}$ X-ray Map ===================================== There has been much recent progress in understanding the X-ray background through instruments such as ROSAT, Chandra and XMM. However, these either have too low an energy threshold or have too small a field of view to study the large scale structure of the hard X-rays. The best observations relevant to large scale structure are still those from the HEAO1 A2 experiment that measured the surface brightness of the X-ray background in the $0.1 - 60~keV$ band (Boldt 1987). The HEAO1 data set we consider was constructed from the output of two medium energy detectors (MED) with different fields of view ($3^\circ \times 3^\circ$ and $3^\circ \times 1.5^\circ$) and two high energy detectors (HED3) with these same fields of view. These data were collected during the six month period beginning on day 322 of 1977. Counts from the four detectors were combined and binned in 24,576 $1.3^\circ \times 1.3^\circ$ pixels. The pixelization we use is an equatorial quadrilateralized spherical cube projection on the sky, the same as used for the COBE satellite CMB maps (White and Stemwedel 1992). The combined map has a spectral bandpass (quantum efficiency $\gs 50\%$) of approximately $3-17~keV$ (Jahoda & Mushotzky 1989) and is shown in Galactic coordinates in Figure \[fig:heao\]. For consistency with other work, all signals are converted to equivalent flux in the $2-10~keV$ band. Because of the ecliptic longitude scan pattern of the HEAO satellite, sky coverage and therefore photon shot noise are not uniform. However, the variance of the cleaned, corrected map, $2.1 \times 10^{-2}~(TOT~counts~s^{-1})^2$, is much larger than the variance of photon shot noise, $0.8 \times 10^{-2}~(TOT~counts~s^{-1})^2$, where $1~TOT~counts~s^{-1} \approx 2.1 \times 10^{-11} erg~s^{-1} cm^{-2}$ (Allen, Jahoda & Whitlock 1994). This implies that most of the variance in the X-ray map is due to “real” structure. 
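The statement that most of the variance is “real” structure can be made concrete with the numbers quoted above; a back-of-the-envelope sketch (Python):

```python
var_map = 2.1e-2      # variance of cleaned, corrected map, (TOT counts/s)^2
var_shot = 0.8e-2     # photon shot-noise variance, (TOT counts/s)^2
tot_to_cgs = 2.1e-11  # erg s^-1 cm^-2 per (TOT counts/s)

# Subtract shot noise in quadrature and convert to flux units.
rms_struct = (var_map - var_shot) ** 0.5       # ~0.11 TOT counts/s
rms_flux = rms_struct * tot_to_cgs
print(f"rms of real structure per pixel: {rms_flux:.1e} erg/s/cm^2")
```

The real-structure rms per pixel, roughly $2.4 \times 10^{-12}~erg~s^{-1}~cm^{-2}$, is indeed larger than the shot-noise contribution.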
For this reason and to reduce contamination from any systematics that might be correlated with the scan pattern, we chose to weight the pixels equally in this analysis. The point spread function ------------------------- To determine the level of intrinsic correlations, we must account for the effects of beam smearing and so it is essential to characterize the point spread function (PSF) of the above map. The PSF varies somewhat with position on the sky because of the pixelization and the asymmetric beam combined with the HEAO1 scan pattern. We obtained a mean PSF by averaging the individual PSFs of 60 strong HEAO1 point sources (Piccinotti 1982) that were located more than $20^{\circ}$ from the Galactic plane. The latter condition was imposed to avoid crowding and to approximate the windowing of the subsequent analysis (see §2.2). The composite PSF, shown in Figure \[fig:psf\], is well fit by a Gaussian with a full width, half maximum (FWHM) of $3.04^\circ$. As a check of this PSF, we generated Monte Carlo maps of sources observed with $3^\circ \times 3^\circ$ and $3^\circ \times 1.5^\circ$ (FWHM) triangular beams appropriate for the A2 detectors (Shafer 1983) and then combined the maps with quadcubed pixelization as above. The resulting average PSF from these trials is also well fit by a Gaussian with a FWHM of $2.91^\circ$, i.e., about $4.5\%$ less than that in the Figure \[fig:psf\]. Considering that the widths of the triangular beams given above are nominal, that the triangular beam pattern is only approximate (especially at higher energies), and that we did not take into account the slight smearing in the satellite scan direction (Shafer 1983), the agreement is remarkably good. In the following analysis, we use the $3.04^\circ$ fit derived from the observed map; however, changing the PSF FWHM by a few percent does not significantly affect the results of this paper. 
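For the correlation analysis that follows, what matters is how the beam alone correlates nearby pixels. The sketch below (Python) converts the fitted FWHM to a Gaussian width and uses the autocorrelation of the beam itself as a simple model of beam-induced correlations from uncorrelated sources; this Gaussian form is our illustration, not the paper's full correction procedure:

```python
import math

FWHM = 3.04                                             # deg, Gaussian PSF fit
sigma = FWHM / (2.0 * math.sqrt(2.0 * math.log(2.0)))   # ~1.29 deg

def beam_acf(theta_deg):
    # Correlation induced purely by smearing uncorrelated (shot-noise-like)
    # structure with a Gaussian beam: the beam correlated with itself,
    # i.e. a Gaussian of variance 2 sigma^2.
    return math.exp(-theta_deg ** 2 / (4.0 * sigma ** 2))
```

The beam term has fallen to a few percent by $\theta \approx 5^\circ$, so clustering detected on $3^\circ$ to $10^\circ$ scales cannot be attributed to beam smearing alone.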
Cleaning the map ---------------- To remove the effects of the Galaxy and strong extra-galactic point sources, some regions of the map were excluded from the analysis. The dominant feature in the HEAO map is the Galaxy (see Figure \[fig:heao\]) so all data within $20^\circ$ of the Galactic plane or within $30^\circ$ of the Galactic center were cut from the map. In addition, large regions ($6.5^\circ \times 6.5^\circ$) centered on $92$ discrete X-ray sources with $2 - 10~keV$ fluxes larger than $3 \times 10^{-11} erg~s^{-1} cm^{-2}$ (Piccinotti 1982) were removed from the maps. Around the sixteen brightest of these sources (with fluxes larger than $1 \times 10^{-10} erg~s^{-1} cm^{-2}$) the cut regions were enlarged to $9^\circ \times 9^\circ$. Further enlarging the area of the excised regions had a negligible effect on the following analysis so we conclude that the sources have been effectively removed. The resulting “cleaned” map (designated Map A) has a sky coverage of $55.5\%$ and is our baseline map for further cuts. To test the possibility of further point source contamination, we also used the ROSAT All-Sky Survey (RASS) Bright Source Catalog (Voges et al. 1996) to identify relatively bright sources. While the RASS survey has somewhat less than full sky coverage ($92\%$), it has a relatively low flux limit that corresponds to a $2-10~keV$ flux of $\sim 2 \times 10^{-13}~erg~s^{-1}~cm^{-2}$ for a photon spectral index of $\alpha = -2$. Every source in the RASS catalog was assigned a $2-10~keV$ flux from its B-band flux by assuming a spectral index of $-3< \alpha < -1$ as deduced from its HR2 hardness ratio. For fainter sources, the computed value of $\alpha$ is quite uncertain; if it fell outside the typical range of most X-ray sources, $-3< \alpha < -1$, then $\alpha$ was simply forced to be $-1$ or $-3$. 
It is clear that extrapolating RASS flux to the $2-10~keV$ band is not accurate, so one must consider the level to which sources are masked with due caution. However, we are only using these fluxes to mask bright sources and so this procedure is unlikely to bias the results. We considered maps where the ROSAT sources were removed at three different inferred $2-10~keV$ flux thresholds. First, we identified sources with fluxes exceeding the Piccinotti level, $3 \times 10^{-11} erg~s^{-1} cm^{-2}$. Thirty-four additional, high Galactic latitude RASS sources were removed, resulting in a map with sky coverage of $52\%$ (designated Map B). In order to compare more directly with the results of Scharf et al. (2000) (see §6) we removed sources at their flux level, $2 \times 10^{-11} erg~s^{-1} cm^{-2}$. The map masked in this way has $47\%$ sky coverage (compared to the $48\%$ coverage of the Scharf et al. analysis) and is designated Map C. Finally, to check how sensitive our dipole results are to the particular masking of the map, we lowered the flux cut level to $1 \times 10^{-11} erg~s^{-1} cm^{-2}$, which reduced the sky coverage to $34\%$. The map resulting from this cut is designated Map D in Table 3. As an alternative to using the RASS sources, the map itself was searched for “sources” that exceeded the nearby background by a specified amount. Since the quad-cubed format lays out the pixels on an approximately square array, we averaged each pixel with its eight neighbors and then compared this value with the median value of the next nearest sixteen pixels (ignoring pixels within the masked regions). If the average flux associated with a given pixel exceeded the median flux of the background by a prescribed threshold, then all 25 pixels ($6.5^\circ \times 6.5^\circ$) were removed from further consideration. 
For a threshold corresponding to 2.2 times the mean shot noise in the map approximately 120 more “sources” were identified and masked resulting in a sky coverage of $42\%$. This map is labeled Map E in Table 3. Finally, we used an even more aggressive cut corresponding to 1.75 times the mean shot noise which resulted in a masked map with $33\%$ sky coverage. This map is labeled Map F.

Modeling the Local Large-Scale Structure
========================================

Sources of large scale structure
--------------------------------

There are several local sources of large-scale structure in the HEAO map which cannot be eliminated by masking isolated regions. These include diffuse emission from the Galaxy, emission (diffuse and/or faint point sources) from the Local Supercluster, the Compton-Getting dipole, and a linear time drift in detector sensitivity. Since none of these are known *a priori*, we fit an eight parameter model to the data. Of course, the Compton-Getting dipole is known in principle if one assumes the kinetic origin of the dipole in the cosmic microwave background; however, there may also be an intrinsic X-ray dipole that is not accounted for. (See §4 below.) Only one correction was made *a priori* to the map and that was for the dipole due to the Earth’s motion around the sun; however, this correction has a negligible effect on the results. A more detailed account of the model is given in Boughn (1999). The X-ray background has a diffuse (or unresolved) Galactic component which varies strongly with Galactic latitude (Iwan et al. 1982). This emission is still significant at high Galactic latitude ($b_{II}>20^\circ$) and extrapolates to $\sim 1\%$ at the Galactic poles. We modeled this emission in two ways. The first model consisted of a linear combination of a secant law Galaxy with the Haslam $408~MHz$ full sky map (Haslam et al. 1982).
The latter was included to take into account X-rays generated by inverse Compton scattering of CMB photons from high energy electrons in the Galactic halo, the source of much of the synchrotron emission in the Haslam map. As an alternative Galaxy model we also considered the two disk, exponentially truncated model of Iwan et al. (1982). Our results are independent of which model is used. In addition to the Galactic component, evidence has been found for faint X-ray emission from the plane of the Local Supercluster (Jahoda 1993, Boughn 1999). Because of its faintness, very detailed models of this emission are not particularly useful. The model we use here is a simple “pillbox”, i.e., uniform X-ray emissivity within a circular disk of thickness equal to $1/4$ of the radius and with its center located $4/5$ of a radius from us in the direction of the Virgo cluster (see Boughn 1999 for details). The amplitude of this emission, while significant, is largely independent of the details of the model and, in any case, has only a small effect on the results. Time drifts in the detector sensitivity can also lead to apparent structure in the reconstructed X-ray map. At least one of the A-2 detectors changed sensitivity by $\sim 1\%$ in the six month interval of the current data set (Jahoda 1993). Because of the ecliptic scan pattern of the HEAO satellite, this results in a large-scale pattern in the sky which varies with ecliptic longitude with a period of $180^\circ$. If the drift is assumed to be linear, the form of the resulting large-scale structure in the map is completely determined. A linear drift of unknown amplitude is taken into account by constructing a sky map with the appropriate structure and then fitting for the amplitude simultaneously with the other parameters. We investigated the possibility of non-linear drift by considering quadratic and cubic terms as well; however, this did not significantly reduce the $\chi^2$ of the fit nor change the subsequent results.
Modeling the maps
-----------------

The eight parameters that characterize the amplitude of these structures are used to model the large-scale structure in the HEAO map. Let the X-ray intensity map be denoted by the vector $\bf{I}$, where the element $I_i$ is the intensity in the $i^{th}$ pixel. The observed intensity is modelled as the sum of eight templates with amplitudes described by the eight dimensional vector $\bf{a}$, $${\bf{I}} = \tilde{X}{\bf{a}} + {\bf{n}}$$ where $\tilde{X}$ is an $n_{pix} \times 8$ matrix whose elements are the values of each template function at each pixel of the map. As discussed above, these template functions include: a uniform map to represent the monopole of the X-ray background; the three components of a dipole (in equatorial coordinates); the large-scale pattern resulting from a linear instrumental gain drift; a Galactic secant law; the Haslam $408~MHz$ map; and the amplitude of the “pillbox” model of the local supercluster. The noise vector ${\bf{n}}$ is assumed to be Gaussian distributed with correlations described by $\tilde{C}\equiv \langle {\bf{n \, n}}^T\rangle$. As discussed above (§2), we chose to weight each pixel equally since the shot noise is considerably less than the “real” fluctuations in the sky. For the purposes of fitting the map to the above model we consider both photon shot noise and fluctuations in the XRB (see Figure \[fig:acf\]) to be “noise”. This noise is correlated and a minimum $\chi^2$ fit must take such correlations into account. However, for simplicity, we ignore these correlations when finding the best fit model amplitudes and perform a standard least squares fit by minimizing $|{\bf{I}} - \tilde{X}{\bf{a}}|^2$ on the cleaned HEAO map. From the standard equations of linear regression the values of the parameters that minimize this sum are $${\bf{a}} = \tilde{B}^{-1} \tilde{X}^T {\bf{I}}$$ where $\tilde{B} = \tilde{X}^T \tilde{X}$ is a symmetric eight by eight matrix.
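The least-squares solution ${\bf{a}} = \tilde{B}^{-1} \tilde{X}^T {\bf{I}}$ is a one-liner in practice; a toy sketch with a hypothetical noiseless monopole-plus-dipole sky recovers the template amplitudes exactly:

```python
import numpy as np

def fit_templates(I, X):
    """Unweighted least-squares template amplitudes a = (X^T X)^{-1} X^T I.
    X holds one template per column (n_pix x n_template)."""
    return np.linalg.solve(X.T @ X, X.T @ I)

# toy sky: monopole plus a z-dipole template (amplitudes are illustrative)
rng = np.random.default_rng(0)
z = rng.uniform(-1.0, 1.0, 500)            # cos(polar angle) of each pixel
X = np.column_stack([np.ones_like(z), z])
I = 328.6 + 0.5 * z                        # noiseless synthetic map
a = fit_templates(I, X)
```

With noise included the estimator is unchanged; only the error analysis (Eq. 3-3) has to account for the noise correlations.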
This would be the maximum likelihood estimator if the correlation matrix were uniform and diagonal. Though this fit ignores correlations in the errors, it is unbiased and is likely to be very close to the minimum $\chi^2$ (maximum likelihood) fit, since the noise correlations are on a much smaller scale than the features we are attempting to fit. The correlated nature of the noise cannot be ignored when computing the uncertainties in the fit since there are far fewer noise independent data points than there are pixels in the map. It is straightforward to show that errors in the estimated parameters $\delta {\bf{a}}$ are given by $$\label{eqn:sigc} \langle \delta {\bf{a}} \, \delta {\bf{a}}^T \rangle = \tilde{B}^{-1} \tilde{X}^T \tilde{C} \tilde{X} \tilde{B}^{-1}$$ This error is likely to be only slightly larger than would be the case for the maximum likelihood estimator. $\tilde{C}$ is a combination of the uncorrelated shot noise and the correlated fluctuations indicated in Figure \[fig:acf\]. We assume it to be homogeneous and isotropic, i.e., that $\tilde{C}_{ij}$ depends only on the angular separation of the $i$ and $j$ pixels.

  ------- ------------------ ------------------ ------------------
  $a_1$   background         328.6 $\pm$ 1.9    same
  $a_2$   $\hat{x}$ dipole   -1.17 $\pm$ 0.62   -0.24 $\pm$ 0.62
  $a_3$   $\hat{y}$ dipole   -0.38 $\pm$ 0.98   -0.68 $\pm$ 0.98
  $a_4$   $\hat{z}$ dipole   -0.52 $\pm$ 0.69   -0.34 $\pm$ 0.69
  $a_5$   time drift         7.15 $\pm$ 1.23    same
  $a_6$   secant law         3.28 $\pm$ 0.84    same
  $a_7$   Haslam map         0.03 $\pm$ 0.08    same
  $a_8$   Supercluster       4.11 $\pm$ 1.35    same
  ------- ------------------ ------------------ ------------------

[Eight fit parameters for Map C (sources brighter than $2\times10^{-11}~erg~s^{-1}cm^{-2}$ removed). The units are $0.01~TOT~count~s^{-1} (4.5~deg^2)^{-1} \simeq 1.54\times 10^{-10}erg~s^{-1}cm^{-2}$. Fits are shown both for the original map and for the map corrected for the Compton-Getting (C-G) dipole.
]{} \[tab:comp\]

Table \[tab:comp\] lists the values and errors of the parameters fit to Map C (see §2). Instrument time drift, the Galaxy and structure associated with the local supercluster all appear to be significant detections. The dipole is detected at about the $2~\sigma$ level and is consistent with that expected for the Compton-Getting dipole (see Table \[tab:dipole\]). Table \[tab:corrm\] lists the elements of the normalized correlation matrix of the fit parameters and it is apparent that the parameters are largely uncorrelated. This was supported by fits that excluded some of the parameters (see §4).

  ------- ------ ------ ------ ------ ------ ------ ------ ------
  $a_1$   1.0    0.0    -0.5   0.1    -0.4   -0.6   -0.1   -0.3
  $a_2$   0.0    1.0    -0.1   0.2    0.3    -0.1   0.0    0.1
  $a_3$   -0.5   -0.1   1.0    0.1    0.0    0.7    0.0    -0.3
  $a_4$   0.1    0.2    0.1    1.0    -0.1   0.0    0.0    -0.1
  $a_5$   -0.4   0.3    0.0    -0.1   1.0    0.0    0.0    0.2
  $a_6$   -0.6   -0.1   0.7    0.0    0.0    1.0    0.0    -0.4
  $a_7$   -0.1   0.0    0.0    0.0    0.0    0.0    1.0    -0.3
  $a_8$   -0.3   0.1    -0.3   -0.1   0.2    -0.4   -0.3   1.0
  ------- ------ ------ ------ ------ ------ ------ ------ ------

[Normalized correlation coefficients for the fit parameters in Table \[tab:comp\].]{}

To compute the true $\chi^2 \equiv ({\bf{I}} - \tilde{X}{\bf{a}})^T \tilde{C}^{-1} ({\bf{I}} - \tilde{X}{\bf{a}})$ of the fit requires inverting $\tilde{C}$, which is an $11,531 \times 11,531$ matrix. Instead we compute an effective reduced $\chi^2$ using $$\chi_{eff}^2 \equiv {1 \over N} ({\bf{I}} - \tilde{X}{\bf{a}})^T \tilde{D}^{-1} ({\bf{I}} - \tilde{X}{\bf{a}})$$ where $\tilde{D}$ is the diagonal part of the correlation matrix, $\tilde{D}_{ii} = \sigma_{s,i}^2 +\sigma_b^2$, $\sigma_{s,i}$ is the shot noise in the $i^{th}$ pixel, ${\sigma_b}^2$ is the variance of the fluctuations in the XRB, and $N$ is the number of pixels minus eight, the number of degrees of freedom in the fit.
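Both the error propagation of Eq. (3-3) and the effective reduced $\chi^2$ can be sketched in a few lines; the white-noise test below simply checks that the sandwich formula collapses to the familiar $\sigma^2(\tilde{X}^T\tilde{X})^{-1}$:

```python
import numpy as np

def fit_errors(X, C):
    """Parameter covariance of the unweighted LS fit for noise covariance C
    (Eq. 3-3): B^{-1} X^T C X B^{-1} with B = X^T X."""
    Binv = np.linalg.inv(X.T @ X)
    return Binv @ X.T @ C @ X @ Binv

def chi2_eff(I, X, a, diagC):
    """Effective reduced chi^2 using only the diagonal of C."""
    r = I - X @ a
    return (r**2 / diagC).sum() / (len(I) - X.shape[1])
```

When $C = \sigma^2 \mathbb{1}$ the sandwich reduces to $\sigma^2 B^{-1}$, and a perfect fit gives $\chi^2_{eff} = 0$ by construction.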
The shot noise in a given pixel is inversely proportional to the number of photons received, which we assume to be proportional to the coverage of that pixel. This is approximately true since all the non-flagged pixels are exposed to approximately the same flux. We find ${\chi_{eff}}^2 = 1.00$ for this fit, which we take as an indication that we have properly characterized the amplitude of the noise so that the errors quoted in the table have neither been underestimated nor overestimated. However, it should be emphasized that ${\chi_{eff}}^2$ is not to be interpreted statistically as being derived from a ${\chi}^2$ distribution. The residual maps show very little evidence for structure on angular scales $\theta > 10^{\circ}$ above the level of the noise, $\langle \delta I^2 \rangle / \bar{I}^2 \sim 10^{-5}$ where $\delta I$ are the residual fluctuations in X-ray intensity and $\bar{I}$ is the mean intensity (see Figure \[fig:resid\]). Since all the components of the model have significant structure on large angular scales, it appears that these particular systematics have been effectively eliminated.

The Dipole of the X-ray Background
==================================

The dipole fit to the map is consistent with the Compton-Getting dipole and there is no evidence for any additional intrinsic dipole in the XRB. To make this more quantitative, we corrected the maps for the predicted Compton-Getting dipole and fit the corrected map for any residual, intrinsic dipole. These dipole fit parameters are also included in Table \[tab:comp\]. Leaving out any individual model component, such as the time drift, the Galaxy, Haslam or supercluster template, made little difference in the amplitude of the fit dipole. This is, perhaps, not too surprising since the Galaxy and time drift models are primarily quadrupolar in nature and the pancake model, while possessing a significant dipole moment, has a relatively small amplitude.
All such fits were consistent with the Compton-Getting dipole alone. Even when all four of these parameters were excluded from the fit, the dipole amplitude increased by only $0.004~TOT~counts~s^{-1}$ with a direction that was $33^{\circ}$ from that of the CMB dipole. The effective $\chi^2$ for the four parameter fit was, however, significantly worse, i.e., ${\chi_{eff}}^2 = 1.05$. Table \[tab:dipole\] lists the amplitude and direction of the dipole fit to Map C along with the fits to Maps D, E, and F. All of these fits are consistent with the amplitude and direction of the Compton-Getting dipole (as inferred from the CMB dipole) which is also indicated in the Table. The effective $\chi^2$s of these fits range from 0.99 to 1.01, again indicating that the amplitude of the noise is reasonably well characterized. No errors are given for these quantities for reasons that will be discussed below. In order to check for unknown systematics, we performed dipole fits to a variety of other masked maps with larger Galaxy cuts as well as cuts of the brighter galaxies in the Tully Nearby Bright Galaxy Atlas (Tully 1988). The details of these cuts are discussed in Boughn (1999); however, none had a significantly different dipole fit. Since all of these dipoles are consistent with the Compton-Getting dipole we also fit these maps with a six parameter fit in which the dipole direction was constrained to be the direction of the CMB dipole. The dipole amplitude of these fits and errors computed according to Eq. (3-3) are also given in Table \[tab:dipole\].
  ------- -------- --------------- -------------- ---------------------
  Map C   0.0133   $309^{\circ}$   $39^{\circ}$   0.0117 $\pm$ 0.0064
  Map D   0.0218   $300^{\circ}$   $33^{\circ}$   0.0184 $\pm$ 0.0062
  Map E   0.0150   $296^{\circ}$   $50^{\circ}$   0.0148 $\pm$ 0.0059
  Map F   0.0190   $283^{\circ}$   $44^{\circ}$   0.0184 $\pm$ 0.0064
  C-G     0.0145   $264^{\circ}$   $48^{\circ}$   0.0145
  ------- -------- --------------- -------------- ---------------------

[The dipole amplitude and directions are from the 8-parameter fits in $TOT$ units and Galactic coordinates. Map C is the map of Table \[tab:comp\]; Map D is masked at a source level of $\sim1 \times 10^{-11} erg~s^{-1} cm^{-2}$; Map E is masked with internal source identification; and Map F is masked with a lower level of internal source identification (see §2 for full details). Also listed are the amplitude and direction of the Compton-Getting dipole (C-G) as inferred from the CMB dipole. The constrained amplitudes are for dipole models fixed to the direction of the CMB dipole.]{}

Even though we find no evidence for an intrinsic dipole in the XRB, it would be useful to place an upper limit on its amplitude. We define the dimensionless dipole by writing the first two moments of the X-ray intensity as $$I(\hat{\bf n}) = \bar{I} (1 + \vec{\Delta} \cdot \hat{\bf n}),$$ where $\vec{\Delta}$ is a vector in the direction of the dipole. There are various approaches one could take to find an upper limit, and the problem is complicated somewhat because the error bars are anisotropic (see Table 1). The dipole in the $\hat{y}$ direction is less constrained than in the other directions because of the anisotropic masking of the map. Here we take the limits on the individual components of the intrinsic dipole and marginalize over the dipole direction to obtain a distribution for its amplitude. For this, we use a Bayesian formalism and assume a uniform prior on the amplitude, $|\vec{\Delta}|$. We find $\Delta < 0.0052$ at the 95% C.L.
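The marginalization over dipole direction can be sketched by Monte Carlo; the numbers below are the Table 1 values divided by the monopole, and this simplified version is not expected to reproduce the quoted limit exactly:

```python
import numpy as np

def amplitude_posterior(d_obs, sigma, amps, n_dirs=20000, seed=1):
    """Posterior for the dimensionless dipole amplitude, with a uniform
    prior on the amplitude, marginalized over direction by Monte Carlo."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_dirs, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # isotropic unit vectors
    post = np.empty(len(amps))
    for k, A in enumerate(amps):
        chi2 = (((A * v - d_obs) / sigma) ** 2).sum(axis=1)
        post[k] = np.exp(-0.5 * chi2).mean()        # average over directions
    return post / post.sum()

# illustrative inputs: the C-G-corrected dipole of Table 1, divided by the
# monopole (328.6) to make it dimensionless
d_obs = np.array([-0.24, -0.68, -0.34]) / 328.6
sigma = np.array([0.62, 0.98, 0.69]) / 328.6
amps = np.linspace(0.0, 0.02, 400)
post = amplitude_posterior(d_obs, sigma, amps)
limit95 = amps[np.searchsorted(np.cumsum(post), 0.95)]
```

The anisotropy of the errors is handled automatically: directions along $\hat{y}$ contribute broader likelihoods than those along $\hat{x}$ or $\hat{z}$.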
If the direction of the dipole is fixed to be that of the CMB dipole then the 95% C.L. upper limits on the dipole amplitudes fall in the range $0.0030$ to $0.0043$ for the fits listed in Table \[tab:dipole\]. The same sort of problem arises when trying to attach an error bar to the amplitude of the dipole fits to the maps which include the Compton-Getting dipole. It seems clear from Table \[tab:comp\] that we find evidence for a dipole at the 2 $\sigma$ level. This is supported by the six parameter fits of Table \[tab:dipole\], where the various maps indicate positive detections at a 2 to 3 $\sigma$ level. However, in the eight parameter fits, the dipole amplitude is a non-linear combination of the fit components. There are two approaches we can take in converting the three dimensional limits to a limit on the dipole amplitude. We can either fix the direction to be that of the CMB dipole, which results in the constrained limits shown in Table \[tab:dipole\], or we can marginalize over the possible directions of the dipole, which will necessarily result in weaker limits than when the direction is fixed. This is particularly true here, where the direction of greatest uncertainty in the dipole measurement is roughly orthogonal to the expected dipole direction. In the case of the Compton-Getting dipole, there is a strong prior that it should be in the CMB dipole direction, so our limit is stronger than it would be if we did not have information about the CMB. In addition to the upper limit on the intrinsic dipole amplitude, we can also constrain the underlying dipole variance, which can, in turn, be used to test theoretically predicted power spectra. While the observed amplitude is related to the dipole variance, $\langle \Delta^2 \rangle = 3 \sigma_\Delta^2$, there is large uncertainty due to cosmic variance. The dipole represents only three independent samplings of $\sigma_\Delta$.
To constrain $\sigma_\Delta$, we again take a Bayesian approach and calculate the likelihood of observing the data given the noise and $\sigma_\Delta$, $${\cal {P}} (\vec{\Delta}|\sigma_\Delta) \propto \prod e^{- \Delta_i^2/2(\sigma_i^2 + \sigma_\Delta^2)} (\sigma_i^2 + \sigma_\Delta^2)^{-1/2},$$ where the product is over the three spatial directions and we have ignored the small off-diagonal noise correlations (see Table \[tab:corrm\]). With a uniform prior on $\sigma_\Delta$, its posterior distribution implies a 95% C.L. upper limit of $\sigma_\Delta < 0.0064$. This is twice as high as would be inferred from the limit on the dipole because of the significant tail in the distribution due to cosmic variance. The limit implied by the dipole (of Map C), $\sigma_\Delta = \Delta/\sqrt{3} < 0.0030$, is at the 80% C.L. The difference between the limits arises because occasionally a small dipole can occur even when the variance is large. The bottom line is that we have detected the dipole in the XRB at about the 2 $\sigma$ level and that it is consistent with the Compton-Getting dipole. There is no evidence for any other intrinsic dipole at this same level. We will discuss the apparent detection of an intrinsic dipole by Scharf et al. (2000) in §6.

Correlations in the X-ray Background
====================================

A standard way to detect the clustering of sources (or of the emission of these sources) is to compute the auto-correlation function (ACF), defined by $$\omega(\theta) = {1 \over \bar{I}^2} \sum_{i,j} (I_i -\bar{I}) (I_j-\bar{I}) / N_{\theta}$$ where the sum is over all pairs of pixels, $i,j$, separated by an angle $\theta$, $I_i$ is the intensity of the $i^{th}$ pixel, $\bar{I}$ is the mean intensity, and $N_{\theta}$ is the number of pairs of pixels separated by $\theta$. Figure \[fig:acf\] shows the ACF of the residual map after being corrected with the 8-parameter fit and for photon shot noise in the $\theta = 0^\circ$ bin.
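The ACF estimator just defined can be written directly; this is a brute-force $O(n^2)$ sketch, adequate for a few hundred pixels but not for the full 11,531-pixel map:

```python
import numpy as np

def acf(I, lon, lat, bins_deg):
    """ACF of intensities I at positions (lon, lat), angles in degrees.
    The first bin, if it starts at 0, contains the i = j (shot-noise) pairs."""
    I = np.asarray(I, float)
    d = I - I.mean()
    lam, phi = np.radians(lon), np.radians(lat)
    x = np.cos(phi) * np.cos(lam)
    y = np.cos(phi) * np.sin(lam)
    z = np.sin(phi)
    cosang = np.clip(np.outer(x, x) + np.outer(y, y) + np.outer(z, z), -1, 1)
    theta = np.degrees(np.arccos(cosang))       # pairwise separations
    prod = np.outer(d, d)
    w = np.empty(len(bins_deg) - 1)
    for k in range(len(w)):
        m = (theta >= bins_deg[k]) & (theta < bins_deg[k + 1])
        w[k] = prod[m].mean() / I.mean()**2 if m.any() else np.nan
    return w
```

By construction the zero-separation bin returns the fractional variance $\langle \delta I^2 \rangle / \bar{I}^2$.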
The error bars are highly correlated and were determined from Monte Carlo trials in which the pixel intensity distribution was assumed to be Gaussian with the same ACF as in the figure. There is essentially no significant structure for $\theta > 13^{\circ}$ once local structures have been removed, as is evident in Figures \[fig:acf\], \[fig:resid\], and \[fig:intrin\]. It is clear from Figure \[fig:acf\] that the residuals of Map A possess significant correlated structure. It must be determined how much, if any, is due to clustering in the XRB and how much is simply due to smearing by the PSF of the map. It is straightforward to show that an uncorrelated signal smeared by a Gaussian PSF, $PSF(\theta) \propto e^{-\theta ^2 / 2\sigma_p ^2}$, results in an ACF of the form $\omega (\theta) \propto e^{-\theta ^2 /4\sigma_p ^2}$ where $\sigma_p = 1.29^{\circ}$ is the Gaussian width of the PSF in Figure \[fig:psf\] ($\theta_{FWHM}^2 = 8 \sigma_p^2 \ln 2$). The dashed curve in Figure \[fig:acf\] is essentially this functional form, modified slightly to take into account the pixelization. In the plot, its amplitude has been forced to agree with the $\theta = 0^\circ$ data point, while a maximum likelihood fit results in an amplitude about 5% lower (a consequence of the correlated noise). For $\theta \gs 3^{\circ}$, the ACF of the data clearly exceeds that accountable by beam smearing and this excess is even more pronounced with the maximum likelihood fit. The $\chi^2$ for the fit to the first eight data points ($\theta \ls 9^{\circ}$) is $\chi^2 = 18.6$ for six degrees of freedom, which is another measure of the excess structure between 3 and 9 degrees. Note that this is a two-parameter fit since the photon shot noise (which occurs only at $\theta = 0^\circ$) is also one of the parameters.
While it is apparent that there is some intrinsic correlation in the X-ray background, it cannot be estimated by the residual to the above two parameter fit since that overestimates the contribution of beam smearing in order to minimize $\chi^2$. Instead, we also include in the fit a form for the intrinsic correlation and find its amplitude as well. Since the signal to noise is too small to allow a detailed model of the intrinsic clustering, we chose to model it with a simple power-law $\omega (\theta) = (\theta_0/\theta)^\epsilon$. This form provides an acceptable fit to the ACF of both radio and X-ray surveys on somewhat smaller angular scales (Cress & Kamionkowski 1998, Soltan et al. 1996, Giacconi et al. 2001). This intrinsic correlation was then convolved with the PSF and applied to the quadcube pixelization of the map (e.g., Boughn 1998). Finally, it is important to take into account the effects of the 8-parameter fit used to remove the large-scale structure as discussed in §3. If the X-ray background has intrinsic structure on the scale of many degrees, the 8-parameter fit will tend to remove it in order to minimize $\chi^2$. Since the model is composed of relatively large scale features, the greatest effect is expected for the largest angles. The significance of this effect was determined by generating Monte Carlo trials assuming a Gaussian pixel intensity distribution with the same ACF as in Figure \[fig:acf\]. The 8-parameter model was then fit and each trial map was corrected accordingly. The ACFs computed for these corrected maps indicate that, as expected, the value of the ACF is significantly attenuated for larger angles. The attenuation factor for $\theta = 9^{\circ}$ is already 0.55 and decreases rapidly for larger angles. The errors indicated in Figure \[fig:acf\] were also determined from these Monte Carlo trials and, as mentioned above, are highly correlated. 
We model the auto-correlation function as a sum of three templates and fit for their best amplitude. Much of the analysis parallels the discussion for the fits of large scale structure in the map, with the exception that here the number of bins is small enough that it is simple to calculate the maximum likelihood fit. Again, we model the observed correlation function vector as ${\bf{\omega}} = \tilde{W} {\bf{c}} + {\bf{n}}_{\omega}$, where $\tilde{W}$ is an $n_{bin} \times 3$ matrix containing the templates for shot noise ($\omega_s$), beam smearing ($\omega_{PSF}$), and intrinsic correlations in the XRB ($\omega_{intr}$). The amplitudes are given by the three element vector ${\bf{c}}$ and the noise is described by the correlation matrix determined from Monte Carlo trials, $\tilde{C}_\omega = \langle{\bf{n}}_{\omega} {\bf{n}}_{\omega}^T \rangle$. Shot noise contributes only to the first bin (zero separation) of the ACF and has amplitude given by its variance, $\omega_s$. Beam smearing contributes to the ACF with a template that looks like the beam convolved with itself, so appears as a Gaussian with a FWHM a factor of $\sqrt{2}$ larger than that of the beam and has amplitude denoted as $\omega_{PSF}$. Finally, the intrinsic correlations are modelled as $(\theta_0/\theta)^\epsilon$, which is then smoothed appropriately by the beam. Its amplitude is denoted by its inferred correlation at zero separation, $\omega_{intr}$. We fit using a range of indices, $0.8 \le \epsilon \le 1.6$, which covers the range of theoretical models of the intrinsic correlation. Both the PSF template and the intrinsic template are modified to include the effects of the attenuation at large angles as discussed above. Minimizing $\chi^2$ with respect to the three fit parameters, ${\bf{c}}$, results in the maximum likelihood fit to the model if one assumes Gaussian statistics.
This assumption is reasonable by virtue of the central limit theorem since each data point consists of the combination of the signals from a great many pixels, each of which is approximately Gaussian distributed. In the presence of correlated noise, $\chi^2$ is defined by $$\chi^2 = ({\bf{\omega}} - \tilde{W} {\bf{c}})^T \tilde{C}^{-1}_{\omega} ({\bf{\omega}} - \tilde{W} {\bf{c}})$$ It is straightforward to show that the values of the parameters that minimize $\chi^2$ are given by $${\bf{c}} = \Omega^{-1} \tilde{W}^T \tilde{C}_\omega^{-1} \omega$$ where $\Omega = \tilde{W}^T \tilde{C}_\omega^{-1} \tilde{W}.$ Because of the large attenuation of the ACF at large angles, we chose to fit to only those data points with $\theta_i \le 9^{\circ}$, i.e., $i \le 8$, even though there appears to be statistically significant structure out to $\theta \sim 13^{\circ}$. The results of the fit of the $\epsilon = 1$ model to the ACF of the residuals of Map A are listed in Table \[tab:acf\] and plotted in Figure \[fig:acf\]. It is also straightforward to show that the correlation matrix of the fit parameters is given by $ \langle \delta c_n \delta c_m \rangle = \Omega_{nm}^{-1}$, so the errors given in Table \[tab:acf\] are given by $\sigma_{c_n}^2 = \Omega_{nn}^{-1}$ and the normalized correlation coefficients by $r_{nm} = \Omega_{nm}^{-1}/(\Omega_{nn}^{-1}\Omega_{mm}^{-1})^{1/2}$. The correlations are as expected, i.e., $\omega_{intr}$ and $\omega_{PSF}$ are highly correlated, while $\omega_s$ is relatively uncorrelated with the other two parameters. From the results in Table \[tab:acf\] it appears that intrinsic correlations in the X-ray background are detected at the 4 $\sigma$ level for sources with flux levels below $3 \times 10^{-11} erg~s^{-1} cm^{-2}$. Of course, if we have not successfully eliminated sources with fluxes larger than this, then the clustering amplitude might well be artificially inflated.
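The minimum-$\chi^2$ solution and its covariance $\Omega^{-1}$ can be sketched with a toy three-template model; the bin centres, beam width, and amplitudes below are illustrative, not the paper's values:

```python
import numpy as np

def gls_fit(omega, W, C):
    """Minimum-chi^2 amplitudes c = Omega^{-1} W^T C^{-1} omega and their
    covariance Omega^{-1}, with Omega = W^T C^{-1} W."""
    Cinv = np.linalg.inv(C)
    Omega = W.T @ Cinv @ W
    c = np.linalg.solve(Omega, W.T @ Cinv @ omega)
    return c, np.linalg.inv(Omega)

# toy version of the three-template ACF model
theta = np.linspace(0.5, 8.5, 8)        # hypothetical bin centres (degrees)
W = np.column_stack([
    np.r_[1.0, np.zeros(7)],            # shot noise: first bin only
    np.exp(-theta**2 / 4.0),            # beam smearing (sigma_p = 1 deg toy)
    1.0 / theta,                        # intrinsic (theta0/theta)^1 shape
])
true = np.array([7.7, 8.6, 2.6])
c, cov = gls_fit(W @ true, W, 0.5 * np.eye(8))
```

In the actual analysis $\tilde{C}_\omega$ is the (highly non-diagonal) Monte Carlo noise covariance, and the PSF and intrinsic templates also carry the large-angle attenuation corrections.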
It was for this reason that we used the RASS catalog to identify and remove additional sources with intensities $\gs 3 \times 10^{-11} erg~s^{-1} cm^{-2}$, the result of which was Map B (see §2). The fits to Map B are also listed in Table \[tab:acf\]. The clustering amplitude, $\omega_{intr}$, of the fit to this modified map was only $11\%$ less than that of Map A, i.e., considerably less than 1 $\sigma$. The $\chi^2$s of both fits are acceptable. These results are not very sensitive to the attenuation corrections. If they are removed from the model, the resulting amplitude of the fit clustering coefficient only decreases by $\sim 20\%$, less than $1~\sigma$. It should be noted that the corrections were relatively small (an average attenuation factor of 0.83 ranging from 1.0 at $\theta = 0^{\circ}$ to 0.55 for $\theta = 9^{\circ}$) and in all cases, the attenuation was less than the error bar of the corresponding data point. If data points with $\theta > 9^{\circ}$ are included, the fits become more sensitive to the attenuation corrections which are, in turn, quite sensitive to the 8-parameter fit.

[|c|r|r|r|r|r|r|r|]{} & & & & & & &\
A & 7.71 $\pm$ 0.15 & 8.63 $\pm$ 0.59 & 2.64 $\pm$ 0.65 & 2.3/5 & -0.19 & -0.04 & -0.83\
B & 7.57 $\pm$ 0.15 & 8.22 $\pm$ 0.59 & 2.33 $\pm$ 0.65 & 3.6/5 & -0.19 & -0.04 & -0.83\

[Fit model parameters for Map A with 55% sky coverage and Map B with 52% sky coverage after removing additional ROSAT sources (see §2). The intrinsic fluctuations are modelled as $\omega \propto \theta^{-1}$.]{}

Figure \[fig:resid\] is a plot of the residuals of the fit to the ACF of Map A for $\theta \le 9^{\circ}$ and of the uncorrected ACF from $10^{\circ}$ to $180^{\circ}$. The vertical scale is the same as for Figure \[fig:acf\]. The *rms* of these 140 data points is $1 \times 10^{-5}$ and it is clear that there is very little residual structure at levels exceeding this value.
The observed correlation function for $\theta > 10^{\circ}$ in Figure \[fig:resid\] is entirely consistent with the noise levels determined from the Monte Carlo simulations: the *rms* of $\omega/\sigma$ is 1.03, indicating that there is no evidence for intrinsic fluctuations on these scales. We also take this as an indication that the errors are reasonably well characterized by the Monte Carlo calculation. The variance of the photon shot noise, ${\sigma_{s}}^2 = \omega_{s} {\bar{I}}^2$, is consistent with that expected from photon counting statistics only (Jahoda 2001). It should be noted that not only the shape but the amplitude of the beam smearing contribution, $\omega_{PSF}$, can be computed from source counts as a function of $2-10~keV$ flux. We will argue in §6 that, while our fitted value is consistent with current number counts, the latter are not yet accurate enough to correct the data. Figure \[fig:intrin\] shows the model of the intrinsic clustering, $\omega_{intr}$, compared to the data with both the shot noise and PSF component removed. The model curve is not plotted beyond $\theta = 9^{\circ}$ since the attenuation factors due to the 8-parameter fit corrections are large and uncertain at larger angles. The amplitude of $\omega_{intr}$ is sensitive to the exponent in the assumed power law for the intrinsic correlations. For example, the fit amplitude for a $\theta^{-1.6}$ power law is a factor of $\sim 2$ larger than for a $\theta^{-0.8}$ power law. However, for a range of fits with $0.8 \le \epsilon \le 1.6$, the values of $\omega_{intr} (\theta)$ at $\theta = 4.5^{\circ}$ are all within $\pm 3\%$ of each other. Therefore, we chose to normalize the X-ray ACF at $4.5^{\circ}$ when comparing to cosmological models (see §7.1). The $\chi^2$s are reasonable for all these fits.
The bottom line is that there is fairly strong evidence for intrinsic clustering on these angular scales at the level of $\omega_{intr} \sim 3.6 \times 10^{-4} \theta^{-1}$ (see §6.1). While the exponent is $\epsilon \sim 1$, it is not strongly constrained. The implications of the intrinsic clustering of the X-ray background will be discussed in §7.

Comparisons with Previous work
==============================

Clustering
----------

As mentioned above, the component of the ACF due to beam smearing, $\omega_{PSF}$, can be determined with no free parameters if the flux limited number counts of X-ray sources are known. While such counts are still relatively inaccurate for our purposes, we did check to see if our results are consistent with current data. Over a restricted flux range, the X-ray number counts, $N(<S)$, are reasonably approximated by a power law, i.e., $N(< S) = K~S^{-\gamma}$ where $S$ is the flux of the source. It is straightforward to show that the variance of flux due to a Poisson distribution of sources is $${\sigma_{PSF}^2} (0) = \pi {\sigma_p}^2~A^2~\gamma~K~S^{2-\gamma}/(2-\gamma)$$ where $S$ is the upper limit of source flux, $\sigma_p$ is the Gaussian width of the PSF and $A^{-1}$ is the flux of a point source that results in a peak signal of $1~TOT~count~s^{-1}$ in our composite map. In our case, $A = 2.05 \times 10^{10}erg^{-1}~s~cm^2$. Using the $BeppoSAX$ $2-10~keV$ number count data of Giommi, Perri, & Fiore (2000), the $Chandra$ data of Mushotzky et al. (2000), and the $HEAO1~A2$ data of Piccinotti et al. (1982), we constructed a piecewise power-law $N(< S)$ for the range $6 \times 10^{-16} < S < 3 \times 10^{-11} erg~s^{-1}cm^{-2}$ and computed $\omega_{PSF}(0) = \sigma^2_{PSF}/\bar{I}^2$ to be $8.4 \times 10^{-4}$.
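The single power-law version of this variance is a one-line formula; units must be mutually consistent, and the normalization `K` below is a placeholder rather than a fitted value:

```python
import math

def omega_psf_zero(sigma_p, A, K, gamma, S, I_bar):
    """Fractional XRB variance from a Poisson population of sources with
    counts N(<S) = K S^-gamma (Eq. 6-1, valid for gamma < 2):
    sigma_PSF^2(0) = pi sigma_p^2 A^2 gamma K S^(2-gamma) / (2-gamma),
    divided by I_bar^2.  K is a hypothetical single power-law normalization;
    the paper pieces together several power-law segments."""
    var = math.pi * sigma_p**2 * A**2 * gamma * K * S**(2.0 - gamma) / (2.0 - gamma)
    return var / I_bar**2
```

The piecewise calculation simply sums such contributions segment by segment, each with its own $K$ and $\gamma$, up to the masking threshold $S$.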
The close agreement of this value with those in Table \[tab:acf\] is fortuitous given that the value depends most sensitively on the number counts at large fluxes which are the most unreliable, typically accurate only to within a factor of two. However, it is clear that the $\omega_{PSF}$ of Table \[tab:acf\] are quite consistent with the existing number count data. The results of §5 clearly indicate the presence of intrinsic clustering in the XRB. Mindful that the detection is only $\sim 4~\sigma$, we tentatively assume that the ACF has an amplitude of $\omega_{intr} \sim 2.5 \times 10^{-4}$ (the average of the two values in Table \[tab:acf\]) and is consistent with a $\theta^{-1}$ functional dependence. It is straightforward to relate this to the underlying correlation amplitude, $\theta_0$. The variance of correlations smoothed by a beam of Gaussian size $\sigma$ is given by $$\omega_{intr}(0) = {\Gamma(1 - \epsilon/2) \over 2^{\epsilon}} \left({\theta_0 \over \sigma} \right)^\epsilon.$$ When $\epsilon = 1$, then $\theta_0 = 2 \sigma \omega_{intr}/\pi^{1/2}$. Using this, we find $$\omega_{XRB}(\theta) \simeq 3.6 \times 10^{-4}~\theta^{-1} \label{eqn:acf}$$ where $\theta$ is measured in degrees and the normalization is such that $\omega_{XRB}(0) = \langle {\delta I}^2 \rangle / \bar{I}^2$. For comparison, this amplitude is about a factor of three below the $2~\sigma$ upper limit determined by Carrera et al. (1993) obtained with Ginga data ($4-12~keV$) for angular scales between 0.2 and 2.0 degrees. The detection of a significant correlation in the $HEAO1~A2$ data at the level of $3 \times 10^{-5}$ at $\theta = 10^{\circ}$ by Mushotzky and Jahoda (1992) was later attributed to structure near the super-Galactic plane (Jahoda 1993). To check for this effect in the present analysis, we masked all pixels within 15 and 20 degrees from the super-Galactic plane. The results were indistinguishable from those of Table \[tab:acf\]. 
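The $\epsilon = 1$ inversion of the beam-smoothing relation above can be checked in a few lines. This sketch assumes a Gaussian beam width $\sigma = 3.04^{\circ}/2.355 \approx 1.29^{\circ}$, inferred from the fitted PSF FWHM quoted elsewhere in the paper; with $\omega_{intr}(0) = 2.5 \times 10^{-4}$ it recovers $\theta_0 \simeq 3.6 \times 10^{-4}$ degrees, the normalization of Eq. (6-3).

```python
import math

def omega_intr_zero(theta0, sigma, eps):
    """Zero-lag variance of (theta0/theta)**eps correlations smoothed by a
    Gaussian beam of width sigma:
        omega(0) = Gamma(1 - eps/2) / 2**eps * (theta0 / sigma)**eps."""
    return math.gamma(1.0 - eps / 2.0) / 2.0**eps * (theta0 / sigma)**eps

sigma = 3.04 / 2.355          # Gaussian width (deg) of the 3.04 deg FWHM beam
omega0 = 2.5e-4               # assumed zero-lag amplitude from the fits
# eps = 1 special case, using Gamma(1/2) = sqrt(pi):
theta0 = 2.0 * sigma * omega0 / math.sqrt(math.pi)
```

Round-tripping `theta0` through `omega_intr_zero` with `eps = 1.0` returns the assumed amplitude, confirming the closed-form inversion.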
It is interesting that the clustering indicated in Eq. (6-3) is consistent with this level of fluctuations at $10^{\circ}$; however, our sensitivity has begun to decline significantly at $10^{\circ}$ due to the fit for large scale structures. A correlation analysis of the ROSAT soft X-ray background by Soltan et al. (1996) detected correlations about an order of magnitude larger than indicated in Eq. (6-3). Even considering that the ROSAT band ($0.5-2.0~keV$) is distinct from the HEAO band, it is difficult to imagine that the two correlation functions could be so disparate unless the lower energy analysis is contaminated by the Galaxy. While there has yet to be a definitive detection of the clustering of hard X-ray sources, a recent deep Chandra survey of 159 sources shows a positive correlation of source number counts on angular scales of 5 to 100 arcsec (Giacconi et al. 2001). Although the signal to noise is low and dependent on source flux, the implied number count ACF is roughly consistent with $\omega_N (\theta) \sim 3 \times 10^{-3}\theta^{-1}$ where $\omega_N (0) \equiv \langle {\delta N}^2 \rangle / \bar{N}^2$ and $\bar{N}$ is the mean surface density of sources. This is consistent with the correlation function determined by Vikhlinin & Forman (1995) for sources identified within ROSAT PSPC deep pointings. A direct comparison between the Chandra result and that of Eq. (6-3) is complicated by the more than hundredfold difference in the angular scales of the two analyses. It is doubtful that a single power-law model is adequate over this range. Furthermore, one is a luminosity ACF while the other is a flux limited, number count ACF. Relating the two requires understanding the luminosity function and its evolution as well as how the X-ray bias depends on scale. For these reasons, a direct comparison would be difficult to interpret. We only note in passing that the small angular scale ACF is a factor of eight larger than that of Eq. 
(6-3) assuming a $\theta^{-1}$ dependence. Finally, the recent harmonic analysis of the $HEAO1~A2$ data by Scharf et al. (2000) yielded a positive detection of structure in the XRB out to harmonic order $\ell \sim 10$. The present analysis looks at similar maps, so the results should be comparable. A direct comparison is complicated by the differences in analysis techniques, masking and corrections to the map. A rough comparison can be made by performing a Legendre transform on the $\theta^{-1}$ ACF model of Eq. (6-3). The ACF can be expressed in terms of Legendre polynomials as $$\omega(\theta) = {{1}\over{4\pi}} \sum_\ell (2\ell+1){C_\ell} P_\ell(\cos\theta)$$ where the ${C_\ell}$ constitute the angular power spectrum. The coefficients are recovered by taking the Legendre transform $${C_\ell} = 2 \pi \int_{-1}^1 \omega(\theta)~P_\ell(\cos \theta)~d(\cos\theta)$$ where $P_\ell$ is the Legendre polynomial of order $\ell$. Substituting the $\theta^{-1}$ model into this expression results in power spectrum coefficients, $C_\ell \simeq 4\times 10^{-5}/\ell$ for $\ell \sim 5$. Note that for $\ell \sim 5$, this expression is relatively insensitive to the index $\epsilon$ in the expression for $\omega_{intr}$. While these values are highly uncertain, they are comparable to those found by Scharf et al. (2000) when the sky coverage and differences in notation are accounted for. Considering the low signal to noise of the data as well as the differences in the two analyses, a more detailed comparison would not be particularly useful. The Dipole ---------- Scharf et al. (2000) also searched for the XRB dipole using a $HEAO1~A2$ map and similar methods to those described in §4. They claim a detection of an intrinsic dipole with amplitude $\Delta \sim 0.0065$, though with a rather large region of uncertainty, i.e., $0.0023 \ls \Delta \ls 0.0085$, and in a direction about $80^{\circ}$ from that of the Compton-Getting dipole, in the general direction of the Galactic center. 
They used the $3^\circ \times 1.5^\circ$ $HEAO1~A2$ map restricted to regions further than $22^{\circ}$ from the Galactic plane. In addition, regions about sources with fluxes greater than $2 \times 10^{-11} erg~s^{-1} cm^{-2}$ were cut from the map. The sky coverage and the level of source removal closely correspond to those of our Map C. However, since we used a combination of the $3^\circ \times 1.5^\circ$ and $3^\circ \times 3^\circ$ maps, our map has significantly less ($\sim 1/\sqrt 3$) photon shot noise. While our analyses are similar, there are some significant differences: they corrected the map beforehand for linear instrument drift and Galaxy emission while we fit for those components simultaneously with the dipole and with emission from the local supercluster, which they ignore. The upper limit on the intrinsic dipole we find is about the same amplitude as the Compton-Getting dipole, i.e., $\Delta < 0.0052$ at the 95 % C.L. Thus we exclude roughly the upper half of the Scharf et al. range and believe that their claim of a detection is probably an overstatement. Scharf et al. do not take into account emission from the plane of the local supercluster; however, even if we leave that component out of our fit, the dipole moment (including the C-G dipole) increases by only $0.005~TOT~cts~s^{-1}$ and is still consistent with the C-G dipole. The upper limit on the intrinsic dipole with this fit is determined primarily by the noise in the fit and is not significantly different from that value given above. It is difficult to understand how their quoted errors could be two to four times less than those quoted in Table \[tab:dipole\]. The shot noise variance in the map they used was three times greater than in our combination map and so one would expect their errors would be somewhat larger than those above. 
They performed only a four parameter fit (offset plus dipole) which would result in a slight reduction of error; however, this is a bit misleading since their Galaxy model and linear time drift are derived from essentially the same data set. It is possible that their lower errors could result from ignoring correlations in the noise (our detected ACF). In any case, we find no evidence for an intrinsic dipole moment in the XRB. Implications for Cosmology ========================== Clustering and Bias in the X-ray Background ------------------------------------------- The observed X-ray auto-correlation can be compared to the matter auto-correlation predicted by a given cosmological model. The linear bias factor for the X-rays can then be determined by normalizing to the observed CMB anisotropies. Since X-rays arise at such high redshifts, the fluctuations we measure are on scales $\lambda \sim 100 h^{-1} Mpc$, comparable to those constrained by the CMB, i.e., on wavelengths that entered the horizon about the time of matter domination. The predicted X-ray ACF depends on both the cosmological model and on the model for how the X-ray sources are distributed in redshift, which is constrained by observed number counts and the redshift measurements of discrete sources. We use the redshift distribution described in Boughn, Crittenden and Turok (1998), based on the unified AGN model of Comastri et al. (1995). (See also the more recent analysis by Gilli et al. (2001).) While we will not reproduce those calculations here, the basic result is that the XRB intensity is thought to arise fairly uniformly in redshift out to $z=4$. Our results here are not very sensitive to the precise details of this distribution. Another issue in the calculation of the power spectrum is the possible time dependence of the linear bias. Some recent studies indicate that the bias is tied to the growth of fluctuations and may have been higher at large redshift (Fry 1996, Tegmark & Peebles 1998). 
For the purposes of the power spectrum, an evolving bias will have the same effect as changing the source redshift distribution. Again, our results are not strongly dependent on these uncertainties, but they comprise an important challenge to using the X-ray fluctuation studies to make precision tests of cosmology. Figure \[fig:x-cls\] shows the predicted XRB power spectrum, normalized to our observations. On the scales of interest, the predicted spectra are fairly featureless, and reasonably described by a power law in $\ell$, $C_\ell \propto \ell^{\epsilon -2}$, which corresponds to a correlation of the form $\omega(\theta) = (\theta_0/\theta)^\epsilon$. For the models of interest, $1.1 < \epsilon < 1.6$ for $\ell < 100$, with $\epsilon$ decreasing at higher $\ell$ (smaller separations). Note that the spectra calculated by Treyer et al. (1998) appear to be consistent with our findings, suggesting $\epsilon = 1.2$. The precise index $\epsilon$ depends on the position of the power spectrum peak which is determined by the shape parameter, $\Gamma \simeq \Omega_m h.$ Larger values of $\Gamma$ imply more small scale power and thus higher $\epsilon.$ For simplicity, we normalize to the X-ray correlation function at $4.5^{\circ}$, $\omega(4.5^\circ)= (1.0 \pm 0.25)\times 10^{-4}$. This separation is large enough to be independent of the PSF contribution to the ACF, but not so large that the attenuation from the large scale fits becomes significant. Also, the value of the fit ACF at $4.5^{\circ}$ is nearly independent of the index $\epsilon$ (see §5). As can be seen from Figure \[fig:x-cls\], this normalization fixes the power spectrum at $\ell \simeq 5-7$. We normalize the fluctuations to the COBE power spectrum as determined by Bond, Jaffe & Knox (1998). However, it should be noted that fits to smaller angular CMB fluctuations indicate that using COBE alone may somewhat overestimate the matter fluctuation level (Lahav et al. 2002). 
The biases derived from the models appear to be largely insensitive to the matter density. This is due to a cancellation of two effects: the CMB normalization and the power spectrum shape (White & Bunn 1995). The biases are roughly inversely proportional to $h$. Typical biases appear to be $b_X = 2.3 \pm 0.3 (0.7/h)^{0.9}$, increasing slightly as $\Gamma$ decreases and the peak of the power spectrum moves to larger scales. The Intrinsic X-ray Dipole --------------------------- The theoretical models normalized to our observations predict the intrinsic power on a wide range of scales, assuming the X-ray bias is scale independent. In particular, these models give a prediction for the variance of the intrinsic dipole moment. We can compare our model predictions to the upper limit for the intrinsic dipole to see if we should have observed it in the X-ray map. The dipole amplitude in the $\hat{z}$ direction is related to the spherical harmonic amplitude by $\Delta_z = \sqrt{3/4\pi} a_{10}$. Thus, the expected dipole amplitude is related to the power spectrum by $$\langle \Delta^2 \rangle = 3 \times {3 \over 4 \pi} \langle |a_{1m}|^2 \rangle = {9 \over 4 \pi} C_1.$$ Note that there is considerable cosmic variance on this, as it is estimated with only three independent numbers; $\delta C_1/ C_1 = (2/3)^{1/2}$, which corresponds to a 40 % uncertainty in the amplitude of the dipole. Also shown in Figure \[fig:x-cls\] is the level of our dipole limit, translated using equation (7-1), which corresponds to $C_1 < 3.8 \times 10^{-5}$. While our limit on the dipole is at the 95 % C.L., this translates to an 80 % C.L. limit on the variance $C_1$ when cosmic variance is included. As discussed above, the 95 % upper limit is four times weaker when cosmic variance is included, $C_1 < 1.5 \times 10^{-4}$. Normalized to the ACF, all the theories are easily compatible with the $C_1$ bound. 
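Both spherical-harmonic conversions used so far — the Legendre transform of the $\theta^{-1}$ ACF, quoted as $C_\ell \simeq 4\times 10^{-5}/\ell$ in §6.1, and the $\langle \Delta^2 \rangle = 9C_1/4\pi$ relation above — can be reproduced numerically. A sketch (Python with SciPy; the integration-over-$\theta$ change of variables is ours, chosen to keep the $1/\theta$ integrand finite at the origin):

```python
import math
from scipy.integrate import quad
from scipy.special import eval_legendre

def c_ell(ell, amp=3.6e-4):
    """Legendre transform of omega(theta) = amp / theta_deg:
        C_ell = 2 pi Int_{-1}^{1} omega(theta) P_ell(cos theta) d(cos theta),
    rewritten as an integral over theta in [0, pi]."""
    def integrand(theta):
        if theta == 0.0:
            return math.pi / 180.0  # limit of P_ell(cos t) sin(t) / t_deg
        return (eval_legendre(ell, math.cos(theta)) * math.sin(theta)
                / math.degrees(theta))
    val, _ = quad(integrand, 0.0, math.pi, limit=200)
    return 2.0 * math.pi * amp * val

def dipole_amplitude(c1):
    """<Delta^2> = (9 / 4 pi) C_1, so Delta_rms = sqrt(9 C_1 / 4 pi)."""
    return math.sqrt(9.0 * c1 / (4.0 * math.pi))

delta_from_c1 = dipole_amplitude(3.8e-5)      # ~0.0052, the 95% dipole limit
cosmic_var_frac = math.sqrt(2.0 / 3.0) / 2.0  # ~40% amplitude uncertainty
```

`c_ell(5)` comes out close to $4\times 10^{-5}/5$ and falls roughly as $1/\ell$, while `delta_from_c1` recovers the $\Delta < 0.0052$ limit from $C_1 < 3.8 \times 10^{-5}$.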
The large cosmic variance associated with the dipole makes it difficult to rule out any cosmological models. With our detected level of clustering, typical theories would predict a dipole amplitude of $\Delta \simeq 0.003$. While the theories are not in conflict with the dipole range claimed by Scharf et al., they strongly prefer the lower end of their range, even for the shallowest of the models. A dipole amplitude $\Delta \ge 0.005$ would be very unlikely from the models, indicating either a significantly higher bias than we find or a model with more large scale power ($\epsilon \le 1.1$). The dipole and bulk motions --------------------------- The dipole of the X-ray background provides another independent test of the large scale X-ray bias through its relation to our peculiar velocity (e.g. see Scharf et al. (2000) and references therein.) Like the gravitational force, the flux of a nearby source drops off as an inverse-square law, so the dipole in the X-ray flux is proportional to the X-ray bias times the gravitational force produced by nearby matter. Our peculiar motion is a result of this force, and is related to the gravitational acceleration by a factor which depends on the matter density. In typical CDM cosmologies, the dipole and our peculiar velocity arise due to matter at fairly low redshifts ($z < 0.1$). If this is the case, it is straightforward to relate their amplitudes. Following the notation of Scharf et al., we define $D^\alpha = \int d\Omega I(\hat{\bf{n}}) \hat{n}^\alpha = 4\pi \bar{I} \Delta^\alpha/3$. Using linear perturbation theory, one can show the local bulk flow is $$v^\alpha = {{H_0 f} \over {b_X(0) \rho_X (0)}} D^\alpha,$$ where $\rho_X (0)$ is the local X-ray luminosity density, $b_X(0)$ is the local X-ray bias and $f \simeq \Omega_m^{0.6}$ is related to the growth of linear perturbations (Peebles 1993). From the mean observed intensity (Gendreau et al. 1995) and the local X-ray luminosity density (Miyaji et al. 
1994) we find that $\bar{I} \simeq 2.4 \rho_X (0) c/4 \pi H_0$. This implies that $$|v| \simeq 2.4 \times 10^5 \Delta \, {\Omega_m^{0.6} \over b_X(0)} \, \rm{km\,s^{-1}}.$$ This relation was derived by Scharf et al. (2000), though their numerical factor was computed from a fiducial model rather than directly from the observations, as above. In any case, the uncertainty in $\rho_X (0)$ is considerable, $6 \times 10^{38} erg\, s^{-1}Mpc^{-3} < \rho_X (0) < 15 \times 10^{38}erg\, s^{-1}Mpc^{-3}$ (Miyaji et al. 1994), and so is the uncertainty in this relation. Our maps have bright sources removed, which correspond to nearby sources out to $60 h^{-1} Mpc$. Thus, we need to compare our dipole limit to the motion of a sphere of this radius centered on us. Typical velocity measurements on this scale find a bulk velocity of $v_{60} \simeq 300 \pm 100$ km/s (see Scharf et al. (2000) for a summary.) With our dipole limit, this implies that $\Omega_m^{0.6}/b_X(0) \gs 0.24 \pm 0.08$ where the uncertainty in $\bar{I}/\rho_X (0)$ is not included. This constraint is independent of cosmic variance issues. While the diameter of the local (Virgo) supercluster is generally considered to be on the order of $40 h^{-1}$ to $50 h^{-1}~Mpc$ (e.g., Davis et al. 1980), there is evidence that the overdensity in the Supergalactic plane extends significantly beyond $60h^{-1}~Mpc$ (Lahav et al. 2000). One might, therefore, suspect that our correction for emission from the local supercluster effectively removes sources at distances greater than $60h^{-1}~Mpc$ in that plane. In any case, that correction made very little difference in dipole fits (see §6.2) so our conclusions remain the same. Note that this limit could potentially conflict with previous determinations by Miyaji (1994) who found $\Omega_m^{0.6}/b_X(0) = f_{45}/3.5,$ where $f_{45}$ is the fraction of gravitational acceleration arising from $R \le 45 h^{-1}$ Mpc. 
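The bound $\Omega_m^{0.6}/b_X(0) \gtrsim 0.24$ follows by inverting the velocity relation above. A short sketch using the quoted $v_{60} \simeq 300$ km/s and $\Delta < 0.0052$; the $b_X = 2.3$ case uses the typical bias found earlier in this section:

```python
def omega_bias_bound(v_bulk_kms, delta):
    """Invert |v| ~ 2.4e5 * Delta * (Omega_m^0.6 / b_X(0)) km/s
    for the ratio Omega_m^0.6 / b_X(0)."""
    return v_bulk_kms / (2.4e5 * delta)

bound = omega_bias_bound(300.0, 0.0052)          # ~0.24
f45_implied = 3.5 * bound                        # via Miyaji (1994): ratio = f_45 / 3.5
omega_m_if_b23 = (2.3 * bound) ** (1.0 / 0.6)    # ~0.37 for b_X = 2.3
```

The implied $f_{45} \approx 0.84$ and $\Omega_m \approx 0.37$ agree with the consistency checks discussed in the text.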
This is consistent only for $f_{45} \sim 1$, which is larger than is usually assumed ($f_{45} \sim 0.5$). However, this limit comes from studies of a fairly small sample (16) of X-ray selected AGN and is subject to significant uncertainties of its own. For typical biases suggested by the observed clustering ($b_X \sim 2.3$), our constraint suggests a somewhat high matter density, $\Omega_m > 0.37$, for $v_{60} \simeq 300$ km/s. This is consistent with the ISW constraint discussed below and also with previous analyses of bulk velocities which tend to indicate higher $\Omega_m$. However, if the bulk velocity is smaller and/or $\rho_X (0)$ larger, this constraint is weakened. In addition, we have assumed a constant X-ray bias. If the bias evolves with redshift, then the local value could be considerably smaller which would also weaken this bound. The Integrated Sachs-Wolfe Effect and $\Omega_{\Lambda}$ -------------------------------------------------------- In models where the matter density is less than unity, microwave background fluctuations can be created very recently by the evolution of the linear gravitational potential. This is known as the late time integrated Sachs-Wolfe (ISW) effect. Photons gain energy as they fall into a potential well, and lose a similar amount of energy as they exit. However, if the potential evolves significantly as the photon passes through, the energy of the photons will be changed, leaving an imprint on the CMB sky. The spectrum is modified most on large scales where the photons receive the largest changes. The CMB anisotropies created in this way are naturally correlated with the gravitational potential. Thus, we expect to see correlations between the CMB and tracers of the local ($z \sim 2$) gravitational potential such as the X-ray background (Crittenden & Turok 1996). These correlations are primarily on large scales such as those probed by the HEAO survey. 
In an earlier paper, we searched for a correlation between the HEAO maps and maps of the CMB sky produced by COBE. We failed to find such a cross correlation and were able to use our limit to constrain the matter density and the X-ray bias (Boughn, Crittenden & Turok 1998, hereafter BCT). However, translating our measurement into a cosmological bound was ambiguous because the level of the intrinsic structure of the XRB was unknown at the time. With the observation of the X-ray ACF presented here, we are in a position to revisit the cosmological limits implied by these measurements. To make cosmological constraints, we compare the observed X-ray/CMB cross correlation to those predicted by $\Lambda CDM$ models. As above, we normalize the CMB fluctuations using the band powers of COBE (Bond, Jaffe & Knox 1998) and also normalize the X-ray fluctuations as discussed in §7.1. The cross correlation analysis of BCT was performed with a coarser pixelization ($2.6^\circ \times 2.6^\circ$) than the ACF discussed above. We include this effect by using the numerically calculated pixelization window function. The COBE PSF used was that found by Kneissl & Smoot (1993) and we used a $2.9^\circ$ FWHM Gaussian for the underlying X-ray PSF (recall that the $3.04^\circ$ FWHM beam found above includes a $1.3^\circ \times 1.3^\circ$ pixelization.) The calculation of the HEAO-COBE cross correlation was discussed in BCT and has not changed. The results are shown in Figure \[fig:cross\], along with predictions for three different values of $\Omega_\Lambda$. While the X-ray bias depends strongly on the Hubble parameter, the predicted cross correlation is only weakly dependent on it, changing only 10% for reasonable values of $H_0$. The cross correlation depends primarily on $\Omega_\Lambda$; no correlation is expected if there is no cosmological constant and the ISW effect increases as $\Omega_\Lambda$ grows. 
The error bars in Figure \[fig:cross\] are calculated from Monte Carlo simulations and arise primarily due to cosmic variance in the observed correlation. The error bars are significantly correlated. The observed correlation is most consistent with there being no intrinsic cross correlation ($\Omega_\Lambda =0.0$). We set limits by calculating the likelihood of a model relative to this no correlation model. Using the frequentist criterion used in BCT, $\Omega_\Lambda \le 0.65$ at the 98% C.L., $\Omega_\Lambda \le 0.60$ at the 95% C.L. Almost identical limits arise from a Bayesian approach, where the relative likelihoods are marginalized over, assuming a constant prior for $\Omega_\Lambda \ge 0$. Figure \[fig:rprob\] shows a one-dimensional slice through the likelihood surface, where only the cross correlation information has been used to calculate the likelihood. One of the major assumptions we made in interpreting the above result is how the sources of the XRB are distributed in redshift. It is likely that current models of the luminosity function will have to be substantially modified as further deep observations of the sources of the XRB are made. However, as pointed out above, the ISW is relatively insensitive to the exact shape of the redshift distribution of luminosity. If the true distribution includes a substantial fraction of the luminosity at redshifts greater than 1, then the above results will not change dramatically. On the other hand, our constraint on $\Omega_\Lambda$ is quite sensitive to the value of the bias parameter. If the sources of the XRB should turn out to be unbiased, i.e., $b_X = 1$, then the constraint on $\Omega_\Lambda$ could be weakened dramatically. We hasten to add that such a low bias would require that the ACF of Figure 5 be reduced by more than a factor of four, which seems unlikely. Previous determinations of X-ray bias have resulted in a wide range of values, $1 < b_X < 7$ (see Barcons et al. 2000 and references therein). 
It's clear that firming up the value of $b_X$ and determining how it varies with scale and redshift will be required before the ISW effect can be unambiguously interpreted. The above limit may be compared to what we found from cross correlating COBE with the NVSS radio galaxy survey (Boughn & Crittenden 2002). There we also found no evidence for correlations, and were able to put a 95 % C.L. limit of $\Omega_\Lambda \le 0.74$, with some weak dependence on the Hubble constant. While the above limit provides important confirmation of that result, it should be noted that these two limits are not entirely independent. Radio galaxies and the X-ray background are, indeed, correlated with each other (Boughn 1998). An important source of noise in the cross correlation of Figure 7 is instrument noise of the COBE DMR receivers. In addition, the relatively poor angular resolution of the COBE radiometers reduces, somewhat, the amplitude of the ISW signal. Therefore, some improvement can be expected by repeating the analysis on future CMB maps, such as that soon to be produced by NASA’s MAP satellite mission. If such an analysis still finds the absence of an ISW effect, then the current $\Lambda CDM$ model would be in serious conflict with observational data if the X-ray bias can be similarly constrained. On the other hand, a positive detection would provide important evidence about the dynamics of the universe even if the X-ray bias remains uncertain. Conclusions =========== By carefully reconstructing the HEAO beam and analysing its auto-correlation function, we have been able to confirm the presence of intrinsic clustering in the X-ray background. This gives independent verification of the multipole analysis of Scharf et al. (2000) and the level of clustering we see is comparable. The clustering we see is in excess of that predicted by standard cold dark matter models and indicates that some biasing is needed. 
The amount of biasing required depends on the cosmological model and on how the bias evolves over time; if the bias is constant, typical models indicate that $b_X \simeq 2.$ The biases of galaxies, clusters of galaxies, radio sources, and quasars have yet to be adequately characterized and so whether or not the above X-ray bias is excessive is a question that, for the present, remains unanswered. We have also confirmed, at the 2-3 $\sigma$ level, the detection of the Compton-Getting dipole in the X-ray background due to the Earth’s motion with respect to the rest frame of the CMB. However, we have been unable to confirm the presence of an intrinsic dipole in the XRB and have actually been able to exclude a significant part of the range reported by Scharf et al. (2000). While our dipole limit is still too small to conflict with any of the favored CDM models, combining our dipole limit with observations of the local bulk flow enables us to constrain $\Omega_m^{0.6}/b_X(0) > 0.24$. For constant bias models, this suggests a relatively large matter density, as is also seen in other velocity studies; however, the uncertainty in this limit is still considerable. With the observed X-ray clustering, large $\Lambda-CDM$ models predict a detectable correlation with the cosmic microwave background arising via the integrated Sachs-Wolfe effect. That we have not observed this effect suggests $\Omega_\Lambda \ls 0.60$. This is beginning to conflict with models preferred by a combination of CMB, LSS and SNIA data (e.g., de Bernardis et al. 2000 & Bahcall et al. 1999). This work gives strong motivation for further observations of the large scale structure of the hard X-ray background. Better measurements of the full sky XRB anisotropy are needed, as is more information about the redshift distribution of the X-ray sources. 
This will be essential for cross correlation with the new CMB data from the MAP satellite and to bridge the gap between the CMB scales and those probed by galaxy surveys such as 2dF and SDSS. We would like to acknowledge Keith Jahoda who is responsible for constructing the HEAO1 A2 X-ray map and who provided us with several data-handling programs. We also thank Neil Turok for useful discussions, Ed Groth for a variety of analysis programs, and Steve Raible for his help with some of the analysis programs. RC acknowledges support from a PPARC Advanced Fellowship. This work was supported in part by NASA grant NAG5-9285. Allen, J., Jahoda, K. & Whitlock, L. 1994, Legacy, 5, 27 Bahcall, N., Ostriker, J.P., Perlmutter, S. & Steinhardt, P. 1999, Science, 284, 1481 Barcons, X., Carrera, F.J., Ceballos, M.T. & Mateos, S. 2000, Invited review presented at the Workshop X-ray Astronomy’99: Stellar endpoints, AGN and the diffuse X-ray background, astro-ph/0001182 Boldt, E. 1987, Phys. Rep., 146, 215 Bond, J. R., Jaffe, A. & Knox, L. 1998, Phys Rev D, 57, 2117B Boughn, S. 1998, ApJ, 499, 533 Boughn, S. 1999, ApJ, 526, 14 Boughn, S. & Crittenden, R. 2002, PRL 88, 1302 Boughn, S., Crittenden, R. & Turok, N. 1998, New Astron., 3, 275 (BCT) Carrera, F. et al. 1993, MNRAS, 260, 376 Comastri, A., Setti, G., Zamorani, G. & Hasinger, G. 1995, A & A, 296, 1 Compton, A. & Getting, I. 1935, Phys. Rev., 47, 817 Cowie, L. et al. 2002, ApJ, 566, L5 Cress, C. M. & Kamionkowski, M. 1998, MNRAS, 297, 486 Crittenden, R. & Turok, N. 1996, PRL 76, 575 Davis, M., Tonry, J., Huchra, J. & Latham, D. 1980, ApJ, 238, L113 de Bernardis et al. 2000, Nature, 404, 955 Fry, J.N., 1996, ApJ 461, L65 Gendreau, K. C. et al. 1995, PASJ, 47, L5 Giacconi, R. et al. 2001, ApJ, 551, 624 Gilli, R., Salvati, M. & Hasinger, G. 2001, A&A, 366, 407 Giommi, P., Perri, M., & Fiore, F. 2000, A & A, 372, 799 Haslam, C.G.T. et al. 1982, A & A Supp., 47, 1 Iwan, D. et al. 1982, ApJ, 260, 111 Jahoda, K. 1993, Adv. 
Space Res., 13 (12), 231 Jahoda, K. 2001, private communication Jahoda, K. & Mushotzky, R. 1989, ApJ, 346, 638 Kneissl, R. & Smoot, G. 1993, COBE note 5053 Lahav, O., Piran, T. & Treyer, M. A. 1997, MNRAS, 284, 499 Lahav, O., Santiago, B., Webster, A., Strauss, M., Davis, M., Dressler, A. & Huchra, J. 2000, MNRAS, 312, 166L Lahav, O., et al. 2002, MNRAS, 333, 961 Miyaji, T. 1994, Ph.D. thesis, Univ. Maryland Miyaji, T., Lahav, O., Jahoda, K. & Boldt, E. 1994, ApJ, 434, 424 Mushotzky, R. F., Cowie, L. L., Barger, A. J. & Arnaud, K. A. 2000, Nature, 404, 459 Mushotzky, R. & Jahoda, K. 1992, in: The X-ray Background, X. Barcons & A.C. Fabian eds, (Cambridge University Press) Peebles, P. J. E. 1993, Principles of Physical Cosmology, (Princeton: Princeton Univ. Press) Piccinotti, G., Mushotzky, R. F., Boldt, E. A., Holt, S. S., Marshall, F. E., Serlemitsos, P. J. & Shafer, R. A. 1982, ApJ, 253, 485 Scharf, C. A., Jahoda, K., Treyer, M., Lahav, O., Boldt, E. & Piran, T. 2000, ApJ, 544, 49 Rosati, P. et al. 2002, ApJ, 566, 667 Shafer, R. A. 1993, Ph.D. thesis, Univ. Maryland (NASA TM 85029) Soltan, A. M., Hasinger, G., Egger, R., Snowden, S. & Truemper, J. 1996, A & A, 305, 17S Tegmark, M. & Peebles, P. J. E. 1998, ApJ, 500, L79 Treyer, M. A., Scharf, C. A., Lahav, O., Jahoda, K., Boldt, E. & Piran, T. 1998, ApJ, 509, 531 Tully, R. B. 1988, Nearby Galaxies Catalog, Cambridge Univ. Press, Cambridge Vikhlinin, A. & Forman, W. 1995, ApJ, 455L, 109 Voges, W. et al. 1996, IAU Circ., 6420, 2 White, R. & Stemwedel, S. 1992, Astronomical Data Analysis Software and Systems I, eds. D. Worrall, C. Biemesderfer & J. Barnes (San Francisco: ASP), 379 White, M. & Bunn, E. F. 1995, ApJ 450, 477 ![The combined map from the HEAO1 A2 medium and high energy detectors, pixelized using the standard COBE quad cubed format ($1.3^\circ \times 1.3^\circ$ pixels.) The effective beam size is approximately $3^\circ$. 
The most visible features, the Galactic plane and the nearby bright sources, are removed from the maps we analyze. []{data-label="fig:heao"}](fig1.ps){height="5.0in"} ![The mean point spread function for the combined map found by averaging the individual PSFs of sixty strong HEAO1 point sources. The data is well fit by a Gaussian with FWHM of $3.04^\circ$.[]{data-label="fig:psf"}](fig2.ps){height="3.0in"} ![The auto-correlation function of the HEAO1 A2 map with bright sources and the Galactic plane removed and corrected for large-scale, high Galactic latitude structure. The dashed curve is that expected from beam smearing due to the PSF of the map while the solid curve includes a contribution due to clustering in the XRB (see §5).[]{data-label="fig:acf"}](fig3.ps){height="3.0in"} ![The residuals of the ACF fit from Figure 3, after the shot noise, PSF and a simple model of the intrinsic fluctuations have been removed. []{data-label="fig:resid"}](fig4.ps){height="3.0in"} ![The intrinsic ACF, with shot noise and PSF fits removed. For comparison, a simple $\theta^{-1}$ model for the intrinsic correlations is shown. The data beyond $9^\circ$ is not used because of uncertainty due to the fitting of the large scale structures. The model has been smoothed by the PSF and corrected for the removal of the large scale structures, which suppresses the correlations on scales larger than $10^\circ.$[]{data-label="fig:intrin"}](fig5.ps){height="3.0in"} ![The power spectrum for a range of cosmologies normalized to the observations ($H_0 = 70~km~s^{-1}Mpc^{-1}$.) The various cosmologies show a range of slopes, from $1.1 < \epsilon < 1.6 $ and the observations fix them at $\ell \simeq 5$. Also shown is the 95 % upper limit from the dipole, excluding cosmic variance. With cosmic variance, the limit shown is at the 80 % confidence level, and the 95 % upper limit is four times higher. The green line shows the suppression arising from beam smoothing, which smoothes scales $\ell > 50$. 
[]{data-label="fig:x-cls"}](fig6.ps){width="3.0in"} ![ The calculated X-ray/CMB cross correlation. The error bars are highly correlated. Also shown are the predictions for three $\Lambda-CDM$ models with varying $\Omega_\Lambda$ ($H_0 = 70~km~s^{-1}Mpc^{-1}$.) []{data-label="fig:cross"}](fig7.ps){width="3.0in"} ![ The relative probability of the observed cross correlation for varying cosmological constant, with the Hubble constant fixed ($H_0 = 70~km~s^{-1}Mpc^{-1}$.) The best fit is for no correlation. []{data-label="fig:rprob"}](fig8.ps){width="3.0in"}
--- abstract: 'We extend the notion of parking functions to parking sequences, which include cars of different sizes, and prove a product formula for the number of such sequences.' author: - Richard Ehrenborg and Alex Happ title: Parking Cars of Different Sizes --- The result. =========== Parking functions were first introduced by Konheim and Weiss [@Konheim_Weiss]. The original concept was that of a linear parking lot with $n$ available spaces, and $n$ cars with a stated parking preference. Each car would, in order, attempt to park in its preferred spot. If the car found its preferred spot occupied, it would move to the next available slot. A parking function is a sequence of parking preferences that would allow all $n$ cars to park according to this rule. This definition is equivalent to the following formal definition: Let $\vec{a}=(a_1,a_2,\dots,a_n)$ be a sequence of positive integers, and let $b_1\leq b_2\leq\cdots\leq b_n$ be the increasing rearrangement of $\vec{a}$. Then the sequence $\vec{a}$ is a parking function if and only if $b_i\leq i$ for all indexes $i$. It is well known that the number of such parking functions is $(n+1)^{n-1}$. This is Cayley’s formula for the number of labeled trees on $n+1$ nodes and Foata and Riordan found a bijective proof [@Foata_Riordan]. Stanley discovered the relationship between parking functions and non-crossing partitions [@Stanley_I]. Further connections have been found to other structures, such as priority queues [@Gilbey_Kalikow], Gončarov polynomials [@Kung_Yan_I] and hyperplane arrangements [@Stanley_II]. The notion of a parking function has been generalized in myriad ways; see the sequence of papers [@Chebikin_Postnikov; @Kung_Yan_I; @Kung_Yan_II; @Kung_Yan_III; @Yan]. We present here a different generalization, returning to the original idea of parking cars. This time the cars have different sizes, and each takes up a number of adjacent parking spaces. 
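The increasing-rearrangement criterion above is easy to check by brute force. A minimal sketch (the function name is ours), which also confirms the count $(n+1)^{n-1}$ for a small case:

```python
from itertools import product

def is_parking_function(a):
    """True iff the increasing rearrangement b of a satisfies b_i <= i."""
    return all(b <= i for i, b in enumerate(sorted(a), start=1))

# Brute-force count over all preference sequences for n = 3 cars.
n = 3
count = sum(is_parking_function(a) for a in product(range(1, n + 1), repeat=n))
assert count == (n + 1) ** (n - 1)  # 16
```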
Let there be $n$ cars $C_{1},\dots,C_n$ of sizes $y_{1},\dots,y_{n}$, where $y_{1}, \ldots, y_{n}$ are positive integers. Assume there are $\sum_{i=1}^{n} y_{i}$ spaces in a row. Furthermore, let car $C_{i}$ have the preferred spot $c_{i}$. Now let the cars in the order $C_{1}$ through $C_{n}$ park according to the following rule: > Starting at position $c_{i}$, car $C_{i}$ looks for the first empty spot $j \geq c_{i}$. If the spaces $j$ through $j+y_{i}-1$ are empty, then car $C_{i}$ parks in these spots. If any of the spots $j+1$ through $j+y_{i}-1$ is already occupied, then there will be a collision, and the result is not a parking sequence. Iterate this rule for all the cars $C_{1}, C_{2}, \ldots, C_{n}$. We call $(c_{1},\dots, c_n)$ a *parking sequence* for $\vec{y}=(y_{1},\dots,y_{n})$ if all $n$ cars can park without any collisions and without leaving the $\sum_{i=1}^{n} y_{i}$ parking spaces. As an example, consider three cars of sizes $\vec{y}=(2,2,1)$ with preferences $\vec{c}=(2,3,1)$. Then there are $2+2+1=5$ available parking spaces, and the final configuration of the cars is $$\begin{tikzpicture}[scale=1.2] %%%% Parking Spaces %%%% \draw (0,0) -- (5,0); \foreach \x in {0,...,5} \draw (\x,0) -- (\x,0.5); \foreach \x in {1,...,5} \node[gray] at (\x-0.5,-0.2) {\small$\x$}; %%%% Cars %%%% % \draw[fill=gray!20] (0.1,0.1) rectangle (2.9,0.45); \draw[fill=gray!20] (0.1,0.1) rectangle (0.9,0.45); \draw[fill=gray!20] (1.1,0.1) rectangle (2.9,0.45); \draw[fill=gray!20] (3.1,0.1) rectangle (4.9,0.45); % \node at (1.5,0.265) {\footnotesize $T$}; \node at (0.5,0.265) {\footnotesize $C_{3}$}; \node at (2,0.265) {\footnotesize $C_{1}$}; \node at (4,0.265) {\footnotesize $C_{2}$}; \end{tikzpicture}$$ All cars are able to park, so this yields a parking sequence. There are two ways in which a sequence can fail to be a parking sequence. Either a collision occurs, or a car passes the end of the parking lot. 
As an example, consider three cars with $\vec{y}=(2,2,2)$ and preferences $\vec{c}=(3,2,1)$. Then we have $2+2+2=6$ parking spots, and the first car parks in its desired spot: $$\begin{tikzpicture}[scale=1.2] %%%% Parking Spaces %%%% \draw (0,0) -- (6,0); \foreach \x in {0,...,6} \draw (\x,0) -- (\x,0.5); \foreach \x in {1,...,6} \node[gray] at (\x-0.5,-0.2) {\small$\x$}; %%%% Cars %%%% \draw[fill=gray!20] (2.1,0.1) rectangle (3.9,0.45); \node at (3,0.265) {\footnotesize $C_{1}$}; \end{tikzpicture}$$ However, the second car prefers spot $2$, and since spot $2$ is open, he tries to take spots $2$ and $3$, but collides with $C_{1}$ in the process. Hence, this is not a parking sequence. If, instead, we had $\vec{y}=(2,2,2)$ and $\vec{c}=(2,5,5)$, then again the first two cars are able to park with no difficulty: $$\begin{tikzpicture}[scale=1.2] %%%% Parking Spaces %%%% \draw (0,0) -- (6,0); \foreach \x in {0,...,6} \draw (\x,0) -- (\x,0.5); \foreach \x in {1,...,6} \node[gray] at (\x-0.5,-0.2) {\small$\x$}; %%%% Cars %%%% \draw[fill=gray!20] (1.1,0.1) rectangle (2.9,0.45); \draw[fill=gray!20] (4.1,0.1) rectangle (5.9,0.45); \node at (2,0.265) {\footnotesize $C_{1}$}; \node at (5,0.265) {\footnotesize $C_{2}$}; \end{tikzpicture}$$ But car $C_{3}$ will pass by all the parking spots after his preferred spot without seeing an empty spot. Hence, this also fails to be a parking sequence. The classical notion of parking function is obtained when all the cars have size $1$, that is, $\vec{y}=(1,1, \ldots, 1)$. Note in this case that there are no possible collisions. In the classical case, any permutation of a parking function is again a parking function. This is not true for cars of larger size. As an example, note for $\vec{y} = (2,2)$ that $\vec{c} = (1,2)$ is a parking sequence. However, the rearrangement $\vec{c}\,^{\prime} = (2,1)$ is not a parking sequence. 
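The parking rule described above can be simulated directly. A short sketch (the function name is ours) reproducing the examples discussed in this section:

```python
def is_parking_sequence(c, y):
    """Simulate cars of sizes y parking, in order, with preferred spots c."""
    total = sum(y)
    occupied = [False] * (total + 2)  # 1-indexed spots 1..total
    for pref, size in zip(c, y):
        j = pref
        while j <= total and occupied[j]:  # cruise to first empty spot >= pref
            j += 1
        if j + size - 1 > total:           # car leaves the parking lot
            return False
        if any(occupied[j:j + size]):      # collision with a parked car
            return False
        for k in range(j, j + size):
            occupied[k] = True
    return True

print(is_parking_sequence((2, 3, 1), (2, 2, 1)))  # True: all three cars park
print(is_parking_sequence((3, 2, 1), (2, 2, 2)))  # False: C2 collides with C1
print(is_parking_sequence((2, 5, 5), (2, 2, 2)))  # False: C3 passes the end
print(is_parking_sequence((1, 2), (2, 2)),
      is_parking_sequence((2, 1), (2, 2)))        # True False
```

The last line checks the non-permutability remark: $(1,2)$ is a parking sequence for $\vec{y}=(2,2)$ but its rearrangement $(2,1)$ is not.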
This shows that the notion of parking sequence differs from the notion of parking function in the papers [@Chebikin_Postnikov; @Kung_Yan_I; @Kung_Yan_II; @Kung_Yan_III; @Yan]. The classical result is that the number of parking functions is given by $(n+1)^{n-1}$; see [@Konheim_Weiss]. For cars of larger sizes we have the following result: The number of parking sequences $f(\vec{y})$ for car sizes $\vec{y}=(y_{1},\dots,y_n)$ is given by the product $$f(\vec{y})= (y_{1}+n) \cdot (y_{1}+y_{2}+n-1) \cdots (y_{1}+\cdots+y_{n-1}+2).$$ \[theorem\_parking\] Circular parking arrangements. ============================== Consider $M = y_{1} + y_{2} + \cdots + y_{n} + 1$ parking spaces arranged in a circle. We will consider parking cars on this circular arrangement, without a cliff for cars to fall off. Observe that when all the cars have parked, there will be one empty spot left over. We claim that there are $$M \cdot f(\vec{y}) = (y_{1}+n) \cdot (y_{1}+y_{2}+n-1) \cdots (y_{1}+\cdots+y_{n}+1) . \label{equation_circular}$$ such circular parking sequences. The first car $C_{1}$ has $M$ ways to choose its parking spot. The next step is counterintuitive. After car $C_{1}$ has parked, erase the markings for the remaining $y_{2}+ \cdots + y_{n} + 1$ spots and put in $n+1$ dividers. These dividers create $n+1$ intervals on the circle, where one interval is taken up by $C_{1}$. Furthermore, these dividers are on wheels and can freely move along the circle. Each interval will accept one (and only one) car. For example, consider the case where $n=5$ and $\vec{y}=(2,5,1,3,2)$ so that $M=2+5+1+3+2+1=14$, and $c_1=5$. 
$$\begin{tikzpicture}[scale=0.7] %%%%%%%%% The Lot %%%%%%%% \draw (0,0) circle (2.5); \foreach \x in {0,...,13} \draw (102.85+\x*25.7:2.5) -- (102.85+\x*25.7:3.1); \foreach \x in {1,...,14} \node at (115.7-\x*25.7:2.2) {\color{gray}\footnotesize $\x$}; %%%%%%%%%% Cars %%%%%%%%% \draw[fill=gray!20] (-2.45:3) arc (-2.45:-48.85:3) -- (-48.85:2.6) arc (-48.85:-2.45:2.6) -- cycle; \node[rotate=-115.65] at (-25.65:2.795) {\footnotesize $C_1$}; \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=0.7] %%%%%%%%% The Lot %%%%%%%% \draw (0,0) circle (2.5); \draw[thick] (0.55:2.55) -- (0.55:3.1); \node at (0.55:2.55) {\tiny$\bullet$}; \node at (0.55:3.1) {\tiny$\bullet$}; \draw[thick] (-51.85:2.55) -- (-51.85:3.1); \node at (-51.85:2.55) {\tiny$\bullet$}; \node at (-51.85:3.1) {\tiny$\bullet$}; \foreach \x in {1,...,4} \draw[thick] (-51.85-\x*61.52:2.55) -- (-51.85-\x*61.52:3.1); \foreach \x in {1,...,4} \node at (-51.85-\x*61.52:2.55) {\tiny$\bullet$}; \foreach \x in {1,...,4} \node at (-51.85-\x*61.52:3.1) {\tiny$\bullet$}; %%%%%%%%%% Cars %%%%%%%%% \draw[fill=gray!20] (-2.45:3) arc (-2.45:-48.85:3) -- (-48.85:2.6) arc (-48.85:-2.45:2.6) -- cycle; \node[rotate=-115.65] at (-25.65:2.795) {\footnotesize $C_1$}; \end{tikzpicture}$$ We will now create a circular parking sequence, but only at the end do we obtain the exact positions of cars $C_{2}$ through $C_{n+1}$. That is, instead of focusing on the number of specific spot preferences each car could have, we keep track of the order the cars park in, which will then determine the exact locations of the cars. The second car has two options. The first is that it has a desired position already taken by $C_{1}$. In this case, it will cruise until the next empty spot. This can happen in $y_{1}$ ways, and then car $C_{2}$ obtains the next open interval after the interval $C_{1}$ is in. Otherwise, the car $C_{2}$ has a preferred spot not already taken. In this case $C_{2}$ has $n$ open intervals to choose from. 
The total number of options for $C_2$ is $y_{1} + n$. The third car $C_{3}$ has the same options. First, it may desire a spot that is already taken, in which case it will have to cruise until the next open interval. This can happen in $y_{1} + y_{2}$ ways. Note that this count applies to both the case when $C_{1}$ and $C_{2}$ are parked next to each other, and when $C_{1}$ and $C_{2}$ have open intervals between them. Otherwise, $C_{3}$ has $n-1$ open intervals to pick from. In general, car $C_{i}$ has $y_{1} + \cdots + y_{i-1} + n+2-i$ choices. This pattern continues up to $C_{n}$, which has $y_{1} + \cdots + y_{n-1} + 2$ possibilities. For example, suppose $C_2$ and $C_3$ in our above example have parked as below: $$\begin{tikzpicture}[scale=0.7] %%%%%%%%% The Lot %%%%%%%% \draw (0,0) circle (2.5); \draw[thick] (0.55:2.55) -- (0.55:3.1); \node at (0.55:2.55) {\tiny$\bullet$}; \node at (0.55:3.1) {\tiny$\bullet$}; \draw[thick] (-51.85:2.55) -- (-51.85:3.1); \node at (-51.85:2.55) {\tiny$\bullet$}; \node at (-51.85:3.1) {\tiny$\bullet$}; \draw[thick] (263:2.55) -- (263:3.1); \node at (263:2.55) {\tiny$\bullet$}; \node at (263:3.1) {\tiny$\bullet$}; \draw[thick] (133.5:2.55) -- (133.5:3.1); \node at (133.5:2.55) {\tiny$\bullet$}; \node at (133.5:3.1) {\tiny$\bullet$}; \draw[thick] (27.25:2.55) -- (27.25:3.1); \node at (27.25:2.55) {\tiny$\bullet$}; \node at (27.25:3.1) {\tiny$\bullet$}; \draw[thick] (80.375:2.55) -- (80.375:3.1); \node at (80.375:2.55) {\tiny$\bullet$}; \node at (80.375:3.1) {\tiny$\bullet$}; %%%%%%%%%% Cars %%%%%%%%% \draw[fill=gray!20] (3.55:3) arc (3.55:24.25:3) -- (24.25:2.6) arc (24.25:3.55:2.6) -- cycle; \node[rotate=-76.1] at (13.9:2.795) {\footnotesize $C_3$}; \draw[fill=gray!20] (-2.45:3) arc (-2.45:-48.85:3) -- (-48.85:2.6) arc (-48.85:-2.45:2.6) -- cycle; \node[rotate=-115.65] at (-25.65:2.795) {\footnotesize $C_1$}; \draw[fill=gray!20] (260:3) arc (260:136.5:3) -- (136.5:2.6) arc (136.5:260:2.6) -- cycle; \node[rotate=88.25+15] at 
(178.25+15:2.795) {\footnotesize $C_2$}; \end{tikzpicture}$$ Then $C_4$ may either cruise on $C_1$ and $C_3$ (in $y_1+y_3$ ways), it may cruise on $C_2$ (in $y_2$ ways), or it can pick one of the three available intervals directly. In total, $C_4$ has $(y_1+y_3)+y_2+3=11$ ways to park. One can imagine that when we park a car, we do not set the parking brake, but put the car in neutral, so that the car and the dividers can move as necessary to make room for future cars. Thus the total number of circular parking arrangements of this type is $$M \cdot (y_{1} + n) \cdot (y_{1} + y_{2} + n-1) \cdots (y_{1}+ \cdots + y_{n-1} + 2) ,$$ where the $i$th factor is the number of options for the car $C_{i}$. This proves the claim about the number of circular parking sequences in equation (\[equation\_circular\]). Hence, to prove Theorem \[theorem\_parking\] we need only observe that the circular parking sequences with spot $M$ empty are the same as our parking sequences. This follows from the observation that no car in the circular arrangement has preference $M$, since otherwise this spot would not be empty. Furthermore, no car would cruise by this empty spot. Observe that the set of circular parking sequences is invariant under rotation. That is, if $(c_{1}, c_{2}, \ldots, c_{n})$ is a circular parking sequence, then so is the sequence $(c_{1}+a, c_{2}+a, \ldots, c_{n}+a)$, where all the additions are modulo $M$. In particular, the number of circular parking sequences with spot $M$ empty is given by $1/M \cdot M \cdot f(\vec{y}) = f(\vec{y})$. Concluding remarks. =================== The idea of considering a circular arrangement goes back to Pollak; see [@Riordan]. In fact, when all the cars have size $1$, this argument reduces to his argument that the number of classical parking functions is $(n+1)^{n-1}$. 
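Theorem \[theorem\_parking\] can also be verified by exhaustive enumeration for small car sizes. A sketch (helper names are ours; the parking rule is the one defined earlier):

```python
from itertools import product
from math import prod

def parks(c, y):
    """True iff preference sequence c is a parking sequence for sizes y."""
    total, occ = sum(y), set()
    for pref, size in zip(c, y):
        j = pref
        while j in occ:                       # cruise to first empty spot
            j += 1
        spots = set(range(j, j + size))
        if j + size - 1 > total or spots & occ:
            return False                      # off the lot, or collision
        occ |= spots
    return True

def f_formula(y):
    """Product formula (y1+n)(y1+y2+n-1)...(y1+...+y_{n-1}+2)."""
    n = len(y)
    return prod(sum(y[:k]) + n + 1 - k for k in range(1, n))

# Compare a brute-force count against the theorem for several size vectors.
for y in [(1, 1, 1), (2, 2), (2, 2, 1), (3, 1, 2)]:
    total = sum(y)
    count = sum(parks(c, y) for c in product(range(1, total + 1), repeat=len(y)))
    assert count == f_formula(y)
```

Note that $\vec{y}=(1,1,1)$ recovers the classical count $f = 4 \cdot 4 = 16 = (n+1)^{n-1}$.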
The idea of not using fixed coordinates when placing cars in the circular arrangement is reminiscent of the argument Athanasiadis used to compute the characteristic polynomial of the Shi arrangement [@Athanasiadis]. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank two referees for their comments as well as Margaret Readdy for her comments on an earlier draft of this note. Both authors were partially supported by National Security Agency grant H98230-13-1-0280. The first author wishes to thank the Mathematics Department of Princeton University where this work was carried out. [99]{} [C. A. Athanasiadis, Characteristic polynomials of subspace arrangements and finite fields, [*Adv. Math.*]{} [**122**]{} (1996), 193–233.]{} [D. Chebikin, A. Postnikov, Generalized parking functions, descent numbers, and chain polytopes of ribbon posets, [*Adv. in Appl. Math.*]{} [**44**]{} (2010), 145–154.]{} [D. Foata, J. Riordan, Mappings of acyclic and parking functions, [*Aequationes Mathematicae*]{} [**10**]{} (1974), 10–22.]{} [J. D. Gilbey, L. H. Kalikow, Parking functions, valet functions, and priority queues, [*Discrete Math.*]{} [**197–198**]{} (1999), 351–373.]{} [A. G. Konheim, B. Weiss, An occupancy discipline and applications, [*SIAM J. Applied Math.*]{} [**14**]{} (1966), 1266–1274.]{} [J. P. S. Kung, C. Yan, Gončarov polynomials and parking functions, [*J. Combin. Theory Ser. A*]{} [**102**]{} (2003), 16–37.]{} [J. P. S. Kung, C. Yan, Exact formulas for moments of sums of classical parking functions, [*Adv. in Appl. Math.*]{} [**31**]{} (2003), 215–241.]{} [J. P. S. Kung, C. Yan, Expected sums of general parking functions, [*Ann. Comb.*]{} [**7**]{} (2003), 481–493.]{} [J. Riordan, Ballots and trees, [*J. Combinatorial Theory*]{} [**6**]{} (1969), 408–411.]{} [R. P. Stanley, Parking functions and noncrossing partitions, [*Electron. J. Combin.*]{} [**4**]{} (1997), no. 2, Research Paper 20, 14 pp.]{} [R. P. Stanley, Hyperplane arrangements, parking functions and tree inversions, in [*Mathematical Essays in Honor of Gian-Carlo Rota*]{} (B. E. Sagan and R. P. Stanley, eds.), Birkhäuser, Boston, 1998, pp. 359–375.]{} [C. H. Yan, Generalized parking functions, tree inversions, and multicolored graphs, [*Adv. in Appl. Math.*]{} [**27**]{} (2001), 641–670.]{} [*Department of Mathematics, University of Kentucky, Lexington, KY 40506\ [richard.ehrenborg@uky.edu]{}, [alex.happ@uky.edu]{}* ]{}
--- abstract: 'Stellar limb darkening affects a wide range of astronomical measurements and is frequently modelled with a parametric model using polynomials in the cosine of the angle between the line of sight and the emergent intensity. Two-parameter laws are particularly popular for cases where one wishes to fit freely for the limb darkening coefficients (i.e. an uninformative prior) due to the compact prior volume and the fact that more complex models rarely obtain unique solutions with present data. In such cases, we show that the two limb darkening coefficients are constrained by three physical boundary conditions, describing a triangular region in the two-dimensional parameter space. We show that uniformly distributed samples may be drawn from this region with optimal efficiency by a technique developed by computer graphical programming: triangular sampling. Alternatively, one can make draws using a uniform, bivariate Dirichlet distribution. We provide simple expressions for these parametrizations for both techniques applied to the case of quadratic, square-root and logarithmic limb darkening laws. For example, in the case of the popular quadratic law, we advocate fitting for $q_1 \equiv (u_1+u_2)^2$ and $q_2 \equiv 0.5u_1(u_1+u_2)^{-1}$ with uniform priors in the interval $[0,1]$ to implement triangular sampling easily. Employing these parametrizations allows one to derive model parameters which fully account for our ignorance about the intensity profile, yet never explore unphysical solutions, yielding robust and realistic uncertainty estimates. Furthermore, in the case of triangular sampling with the quadratic law, our parametrization leads to significantly reduced mutual correlations and provides an alternative geometric explanation as to why naively fitting the quadratic limb darkening coefficients precipitates strong correlations in the first place.' author: - | David M. 
Kipping$^{1,2}$[^1]\ $^{1}$Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138, USA\ $^{2}$Carl Sagan Fellow date: 'Accepted 2013 July 31. Received 2013 July 30; in original form 2013 June 28' title: 'Efficient, uninformative sampling of limb darkening coefficients for two-parameter laws' --- \[firstpage\] methods: analytical — stars: atmospheres Introduction {#sec:intro} ============ Stellar limb darkening is the wavelength-dependent diminishing of the surface brightness from the centre of the disc to the limb of the star. Limb darkening affects a wide range of different astronomical observations, such as optical interferometry (e.g. @aufdenberg:1995), microlensing light curves (e.g. @witt:1995; @zub:2011), rotational modulations [@macula:2012], eclipsing binaries [@kopal:1950] and transiting planets [@mandel:2002]. Due to the often subtle, profile distorting effects of limb darkening, the parameters describing limb darkening are frequently degenerate with other model parameters of interest, and thus accurate modelling is crucial in the interpretation of such data. Many of these astronomical phenomena may be described with precise closed-form analytic solutions, if one assumes a parametric limb darkening law. For example, the transit light curve may be expressed using hypergeometric functions and elliptical integrals when one adopts a polynomial law [@mandel:2002; @kjurkchieva:2013]. Such closed forms are not only computationally expedient to evaluate, but their parametrization also easily allows for uninformative priors on a target star’s properties and Bayesian model selection of different laws, since the prior volume can be directly controlled. Many of the commonly employed parametric limb darkening laws have been chosen to provide the best approximation possible between stellar atmosphere model intensity profiles and simple polynomial expansions (e.g. @claret:2000; @claret:2003; @sing:2010; @hayek:2012). 
This is because a typical approach was to regress a model to some observations whilst assuming a fixed stellar limb darkening law which most realistically described the modeller’s expectation for the star. The benefits of this approach are that the parameters describing the limb darkening do not have to be varied, making the regressions considerably easier. However, an obvious consequence of this is that any model parameters derived from such an approach are fundamentally dependent upon the stellar atmosphere model adopted. An equivalent way of describing this approach is that a Dirac delta function prior was adopted for the limb darkening profile, which is statistically an implausible scenario. An alternative strategy is to relax the constraint to weaker or even uninformative priors. However, the trade-off is that by adopting a finite prior volume for the parameters describing the limb darkening, it is strongly preferred to use as compact a parametric model as possible (i.e. fewer parameters) so that the regression algorithm can reasonably hope to explore the full parameter space. Nevertheless, this is a statistically more robust approach than simply fixing these parameters, which are frequently correlated to the other model terms [@pal:2008]. An example of a weaker prior would be to regress a joint probability density function (PDF) to the limb darkening coefficients (LDCs) emerging from an ensemble of stellar atmosphere models (e.g. @kepler22:2013). However, even this approach is still fundamentally dependent upon stellar atmosphere models, since it is from these models that the ensemble of coefficients is initially computed. In contrast, uninformative priors make no assumption about the limb darkening profile, except for the parametric form which describes the intensity profile (e.g. the polynomial orders used). 
Such an approach may even be used to reverse engineer properties of individual stars or populations thereof [@neilson:2012], although @howarth:2011 cautions that one must carefully account for the system geometry when comparing fitted LDCs and those from stellar atmosphere models. Adopting a simple parametric limb darkening law with uninformative priors is therefore a powerful way of (i) incorporating and propagating our ignorance about the target star’s true intensity profile into the derivation of all model parameters, (ii) presenting results which are independent of theoretical stellar atmosphere models, (iii) modelling astronomical phenomenon using closed-form and thus highly expedient algorithms and (iv) providing insights and constraints on the fundamental properties of the target star. The most common choice of uninformative prior for LDCs is a simple uniform prior. One danger of uninformative priors is that allowing the LDCs to explore any parameter range can often lead to unphysical limb darkening profiles being explored. It is therefore necessary to impose boundary conditions which prevent such violations. In this work, we show that after imposing the said boundary conditions (§\[sub:derivation\]), the PDF describing an uninformative joint prior on the quadratic LDCs is a uniform, bivariate Dirichlet distribution (§\[sub:dirichlet\]). Furthermore, we show that one may efficiently sample from this distribution using a trick from the field of computer graphical programming: triangular sampling (§\[sub:triangular\]). This results in a new parametrization for the quadratic LDCs which samples the physically plausible range of LDCs in an optimally efficient and complete manner. By comparing our results to previously proposed parametrizations, we show that this approach is at least twice as efficient as all others (§\[sec:comp\]). 
Finally, we provide optimal parametrizations using triangular sampling for other two-parameter limb darkening laws (§\[sec:otherlaws\]). Quadratic Limb Darkening Law {#sec:quadratic} ============================ Deriving the three boundary conditions {#sub:derivation} -------------------------------------- We begin by considering the quadratic limb darkening law due its wide ranging use in a variety of fields. We first derive the boundary conditions which constrain the physically plausible range of the associated LDCs. Note, this is not the first presentation of such constraints (e.g. @burke:2007), but due to some distinct constraints present elsewhere in the literature (e.g. @carter:2009) and the fact that this derivation serves as a template for the applying constraints to other two-parameter limb darkening laws (e.g. see later §\[sec:otherlaws\]), we present an explicit derivation here. We begin by considering the widely used quadratic limb darkening law. The quadratic law seems to have first appeared in @kopal:1950 and is attractive due to its simple, intuitive form, flexibility to explore a range of profiles plus a fairly compact, efficient structure. The specific intensity of a star, $I(\mu)$, following the quadratic limb darkening may be described by $$\begin{aligned} I(\mu)/I(1) &= 1 - u_1 (1-\mu) - u_2 (1-\mu)^2, \label{eqn:Ispecific}\end{aligned}$$ where $I(1)$ is the specific intensity at the centre of the disc, $u_1$ and $u_2$ are the quadratic LDCs and $\mu$ is the cosine of the angle between the line of sight and the emergent intensity. We may also express $\mu=\sqrt{1-r^2}$, where $r$ is the normalized radial coordinate on the disc of the star. We wish to investigate whether imposing some physical conditions on this expression leads to any useful constraints on the allowed ranges of the coefficients $u_1$ and $u_2$. 
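The intensity profile of equation (\[eqn:Ispecific\]) is trivial to evaluate numerically; a minimal sketch (the coefficient values are purely illustrative, not fitted to any star):

```python
import numpy as np

def quadratic_intensity(mu, u1, u2):
    """I(mu)/I(1) for the quadratic limb darkening law."""
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# mu = sqrt(1 - r^2), with r the normalized radial coordinate on the disc.
r = np.linspace(0.0, 1.0, 101)
mu = np.sqrt(1.0 - r**2)
profile = quadratic_intensity(mu, 0.4, 0.25)  # illustrative coefficients

assert profile[0] == 1.0                      # unity at the centre of the disc
assert np.all(np.diff(profile) <= 0.0)        # darkens towards the limb
```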
In what follows, we define physically plausible limb darkening profiles in reference to broad bandpass photometric/imaging observations of normal main-sequence stars (i.e. we do not consider pulsars, white dwarfs, brown dwarfs, etc). Accordingly, we may impose the following two physical conditions: - an everywhere-positive intensity profile, - a monotonically decreasing intensity profile from the centre of the star to the limb. Condition **(A)** requires little justification since a negative intensity has no physical meaning and it may be expressed algebraically as $I(\mu)>0$ $\forall$ $0\leq\mu<1$, or $$\begin{aligned} u_1 (1-\mu) + u_2 (1-\mu)^2 < 1 \,\,\,\,\,\forall\,\,\,\,\,\,0\leq\mu<1.\end{aligned}$$ The above can be evaluated in one of two extrema; minimizing the LHS with respect to $\mu$ and maximizing the LHS with respect to $\mu$. Consider first minimizing the LHS, which is trivially found to occur for $\mu\rightarrow1$. This leaves us with the meaningless constraint that $0<1$, which is of course satisfied for all $u_1$ and $u_2$ and thus leads to no useful constraints on the LDCs. The other extrema of this condition is found by evaluating the LHS at its maximum, which is again trivially found to occur when $\mu\rightarrow0$ and leads us to $$\begin{aligned} u_1+u_2 < 1. \label{eqn:conditionA}\end{aligned}$$ Therefore, the physical requirement of an everywhere-positive intensity profile leads to a single constraint on the LDCs, given by equation (\[eqn:conditionA\]). Next, let us enforce condition **(B)**, that the specific intensity is a monotonically decreasing function towards the limb. This is generally expected for any broad bandpass limb darkening profile [@burke:2007], but some narrow spectral lines, such as Si IV, could produce limb-brightened profiles [@schlawin:2010]. 
Focusing on the much more common case of limb darkening though, we have $$\begin{aligned} \frac{ \partial I(\mu) }{\partial \mu } > 0,\end{aligned}$$ which is easily shown to give $$\begin{aligned} u_1 + 2u_2(1-\mu) > 0.\end{aligned}$$ One of the extrema of this condition is found by minimizing the LHS with respect to $\mu$, which occurs for $\mu\rightarrow1$, giving $$\begin{aligned} u_1 > 0. \label{eqn:conditionB}\end{aligned}$$ The other extrema occurs when we maximize the LHS with respect to $\mu$, which occurs for $\mu\rightarrow0$ and gives $$\begin{aligned} u_1 + 2 u_2 > 0. \label{eqn:conditionC}\end{aligned}$$ We therefore derive two constraints on the LDCs from condition **(B)** (equations \[eqn:conditionB\] & \[eqn:conditionC\]). In total then, we have three boundary conditions on the coefficients $u_1$ and $u_2$: $$\begin{aligned} u_1+u_2 &< 1,\nonumber \\ u_1 &> 0,\nonumber \\ u_1 + 2 u_2 &> 0. \label{eqn:conditions}\end{aligned}$$ Comparison to previous works {#sub:theorycomparison} ---------------------------- Before proceeding to our new parametrization model, we pause to compare our derived boundary conditions to those in previous works. The first explicit declaration of a set of expressions used to enforce physically plausible LDCs, that we are aware of, seems to come from @burke:2007. Here, the authors state all three of the same boundary conditions (see §3.2 of that work) stated here in equation (\[eqn:conditions\]). This is not surprising as @burke:2007 enforced the same physical criteria \[i.e. conditions **(A)** and **(B)**\] to derive their expressions, i.e. an everywhere-positive intensity profile and a monotonically decreasing brightness from the centre-to-limb. Another paper stating boundary conditions on the LDCs comes from @carter:2009, where the authors used the conditions $(u_1+u_2)<1$, $u_1>0$ and $(u_1+u_2)>0$. 
We point out that the last constraint seems to be a typographical error missing a ‘2’, but otherwise are the same as those constraints provided here. We highlight this minor point to avoid potential confusion in comparing these works. Visualizing the constraints {#sub:visualizing} --------------------------- In order to visualize the constraints of equation (\[eqn:conditions\]), we generated $u_1$ and $u_2$ by naively randomly sampling a uniform distribution bounded by $-3<u_1<+3$ and $-3<u_2<+3$. For every realization, we only accept the draw if all of the constraints in equation (\[eqn:conditions\]) are satisfied, as shown in Fig. \[fig:constraints\]. Iterating until $10^5$ trials were accepted, we required 3.6 million trials, i.e. an efficiency of 2.8%. This highlights how inefficient it would be to sample from such a joint distribution. ![*Drawing $u_1$ and $u_2$ from a uniform distribution between $-3$ and $+3$, we show the realizations which satisfy the physical constraints of equation (\[eqn:conditions\]). The black dashed lines describe the three constraints. The loci of accepted points form a triangle with a bisector inclined $35.8^{\circ}$ to the $u_1$-axis.*[]{data-label="fig:constraints"}](constraints2.eps){width="8.4"} One may re-plot Fig. \[fig:constraints\] using different axes to visualize the constraints in alternative ways. We found that one particularly useful way of visualizing the constraints was found by plotting $v_1$ against $v_2$, as shown in Fig. \[fig:triangle\], where we use the parametrization: $$\begin{aligned} v_1 &\equiv u_1/2,\\ v_2 &\equiv 1 - u_1 - u_2.\end{aligned}$$ Using this parametrization, Fig. \[fig:triangle\] reveals the loci of points satisfying conditions **(A)** and **(B**) form a right-angled triangle. ![*Same as Fig. \[fig:constraints\], except that we have re-parametrized the two axes. 
One can see that the allowed physical range falls within a triangle which covers exactly one half of the unit square $\{0,0\}\rightarrow\{0,1\}\rightarrow\{1,1\}\rightarrow\{1,0\}$. This square describes the constraints stated in @carter:2009, which violate condition **(B)**.* []{data-label="fig:triangle"}](triangle.eps){width="8.4"} Physical priors using the Dirichlet distribution {#sub:dirichlet} ------------------------------------------------ For those familiar with the Dirichlet distribution, the shape of Fig. \[fig:triangle\] will bear an uncanny resemblance to the uniform, bivariate Dirichlet distribution. The Dirichlet distribution is a multivariate generalization of the Beta distribution (which itself has useful applications as a prior; @beta:2013). Aside from being able to exactly reproduce the distribution shown in Fig. \[fig:triangle\], the bivariate Dirichlet distribution is able to reproduce a diverse range of profiles with just three so-called ‘concentration’ parameters ($\balpha=\{\alpha_1,\alpha_2,\alpha_3\}^T$). The PDF is given by $$\begin{aligned} \mathrm{P}(\balpha;v_1,v_2) &=\frac{v_1^{\alpha_2-1} v_2^{\alpha_1-1} (1-v_1-v_2)^{\alpha_3-1} \Gamma[\alpha_1+\alpha_2+\alpha_3]} {\Gamma[\alpha_1] \Gamma[\alpha_2] \Gamma[\alpha_3]}, \label{eqn:dirichlet}\end{aligned}$$ for $v_1 > 0$, $v_2 > 0$ and $(v_1+v_2)<1$; otherwise $\mathrm{P}(\balpha;v_1,v_2) = 0$. In the case of the uniform distribution of Fig. \[fig:triangle\], one may simply use $\balpha = \mathbf{1}$: $$\mathrm{P}(\balpha=\mathbf{1};v_1,v_2) = \begin{cases} 2 & \text{if} \ v_1>0 \ \mathrm{\&} \ v_2>0 \ \mathrm{\&} \ (v_1+v_2)<1 ,\\ 0 & \text{otherwise}. \end{cases} \label{eqn:flatdirichlet}$$ The bivariate Dirichlet distribution is also uniquely defined over the range $v_1>0$, $v_2>0$ and $(v_1+v_2)<1$ and naturally integrates to unity over this range. It may therefore be used to serve as a proper prior. 
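Sampling from the uniform, bivariate Dirichlet distribution of equation (\[eqn:flatdirichlet\]) is directly supported by standard numerical libraries; a sketch using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# alpha = (1, 1, 1) gives the flat density over the simplex v1, v2 > 0,
# v1 + v2 < 1; the third simplex coordinate is 1 - v1 - v2 and is discarded.
v1, v2, _ = rng.dirichlet((1.0, 1.0, 1.0), size=10_000).T

# Map back to the quadratic LDCs using v1 = u1/2 and v2 = 1 - u1 - u2.
u1 = 2.0 * v1
u2 = 1.0 - u1 - v2

# Every draw satisfies the three physical boundary conditions derived above.
assert np.all(u1 > 0) and np.all(u1 + 2.0 * u2 > 0) and np.all(u1 + u2 < 1)
```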
Physical priors using triangular sampling {#sub:triangular} ----------------------------------------- Consider the special case where one requires sampling from a uniform prior in the joint distribution $\{u_1,u_2\}$ (but wishes to enforce that all sampled realizations are physical). This corresponds to the uniform, bivariate Dirichlet distribution described by equation (\[eqn:dirichlet\]) with $\balpha=\mathbf{1}$. One therefore simply needs to draw a random variate in $\{v_1,v_2\}$ from the uniform, bivariate Dirichlet distribution. However, another way of thinking about the problem is to try to populate a triangle with a uniform sampling of points, as evident from Fig. \[fig:triangle\] - a procedure we dub ‘triangular sampling’. This more geometric perspective leads to a simple and elegant expression for generating the LDC samples. An elegant method for triangular sampling comes from the field of computer graphical programming, which we will describe here. Consider a triangle with vertices $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$, and two random uniform variates $q_1$ and $q_2$ in the interval $[0,1]$. @turk:1990 showed that a random location, $\mathbf{v}$, within the triangle can be sampled using (notation has been changed slightly from that of @turk:1990) $$\begin{aligned} \mathbf{v} &= (1-\sqrt{q_1}) \mathbf{A} + \sqrt{q_1} (1-q_2) \mathbf{B} + q_2 \sqrt{q_1} \mathbf{C}.\end{aligned}$$ This sampling is equivalent to having $q_1$ draw out a line segment parallel to $BC$ that joins a point on $AB$ with a point on $AC$ and then selecting a point on this segment based upon the value of $q_2$ (as shown in Fig. \[fig:turk\]). Taking the square root of $q_1$ is necessary to weight all portions of the triangle equally. ![*A geometric illustration of how a random point is drawn from a triangle with vertices $A$, $B$ and $C$ using two random variates $q_1$ and $q_2$ (i.e. ‘triangular sampling’).
The method and figure are adapted from the computer graphical programming chapter of @turk:1990.* []{data-label="fig:turk"}](turk.eps){width="8.4"} Evaluating the above for $\mathbf{A}=\{0,1\}^T$, $\mathbf{B}=\{0,0\}^T$ and $\mathbf{C}=\{1,0\}^T$ (representing the vertices of the specific triangle we are interested in) gives $$\begin{aligned} v_1 &= \sqrt{q_1} q_2,\\ v_2 &= 1-\sqrt{q_1}.\end{aligned}$$ Substituting the above into equations (9) and (10) gives $$\begin{aligned} u_1 &= 2 \sqrt{q_1} q_2,\\ u_2 &= \sqrt{q_1} (1 - 2 q_2). \label{eqn:myeqn}\end{aligned}$$ The inverses of these expressions are easily found to be $$\begin{aligned} q_1 &\equiv (u_1 + u_2)^2,\\ q_2 &\equiv \frac{u_1}{2(u_1 + u_2)}. \label{eqn:myinverseeqn}\end{aligned}$$ By re-parametrizing the LDCs from a set $\btheta$ of $\btheta=\{u_1,u_2\}$ to $\btheta=\{q_1,q_2\}$, one can fit for quadratic LDCs in such a way that the joint prior distribution is uniform and exclusively samples physically plausible solutions. For example, one would fit for $q_1$ and $q_2$ with uniform priors between $0$ and $1$, but convert these parameters into $u_1$ and $u_2$ (using equations 15 & 16) before calling their light curve generation code, e.g. the @mandel:2002 algorithm. This will exactly reproduce the uniform, bivariate Dirichlet distribution shown in Figs. \[fig:constraints\] & \[fig:triangle\]. Comparison to theoretical stellar atmosphere models {#sub:theorycomp} --------------------------------------------------- By sampling from $\btheta=\{q_1,q_2\}$ uniformly over the interval $[0,1]$, one adopts uninformative priors in the LDCs and thus the underlying intensity profile of a given star. The only physics which goes into our model are the conditions **(A)** and **(B)**. In contrast, LDCs generated using stellar atmosphere models include a great deal of physics, and sampling coefficients from such a model is more appropriately described as using informative priors.
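Whichever choice is made, the re-parametrization itself is only a few lines of code. A minimal sketch (the helper names `q_to_u` and `u_to_q` are ours; any quadratic-law light curve code, such as a @mandel:2002 implementation, could consume the resulting $u_1$, $u_2$):

```python
import math
import random

def q_to_u(q1, q2):
    """Map fitted parameters (q1, q2) in [0,1]^2 to quadratic LDCs (u1, u2)."""
    s = math.sqrt(q1)
    return 2.0 * s * q2, s * (1.0 - 2.0 * q2)

def u_to_q(u1, u2):
    """Inverse map, valid for physical (u1, u2) with u1 + u2 > 0."""
    return (u1 + u2) ** 2, u1 / (2.0 * (u1 + u2))

rng = random.Random(1)
for _ in range(10_000):
    q1, q2 = rng.random(), rng.random()
    u1, u2 = q_to_u(q1, q2)
    # All three physical constraints hold with no rejection step needed:
    assert u1 > 0.0 and (u1 + u2) < 1.0 and (u1 + 2.0 * u2) > 0.0
    # The round trip recovers (q1, q2):
    p1, p2 = u_to_q(u1, u2)
    assert abs(p1 - q1) < 1e-9 and abs(p2 - q2) < 1e-9
```

Every uniform draw in the $\{q_1,q_2\}$ unit square maps to a physically plausible $\{u_1,u_2\}$ pair, and the inverse mapping recovers the original draw.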
The choice as to which path to follow is a matter for the data analyst to decide and is likely dependent upon how well characterized the target star is and how much trust is placed in the theoretical models. An implicit expectation of our $\btheta=\{q_1,q_2\}$ model is that the true LDCs of normal stars observed in a broad bandpass should fall within the unit-square of $0<q_1<1$ and $0<q_2<1$. By extension then, a realistic stellar atmosphere model should also produce coefficients lying within this unit-square. To check this, we here show the results of converting standard tabulations of quadratic LDCs into the $\btheta=\{q_1,q_2\}$ parametrization. We use the *Kepler* bandpass for this comparison since our model is (a) designed for broad bandpass photometry, (b) most useful for faint target stars with poor characterization requiring uninformative priors and (c) likely to be most commonly employed on such targets due to the sheer volume of observations obtained by this type of survey. @claret:2011 provide tabulations of *Kepler* LDCs for the quadratic law computed using 1D Kurucz ATLAS[^2] and PHOENIX[^3] stellar atmosphere models over a wide range of stellar input parameters: $0.0\leq\log g\leq5.0$, $-5\leq[\mathrm{M}/\mathrm{H}]\leq+1$, $2000\leq T_{\mathrm{eff}}\leq50000$K. The extreme ends of this temperature range do not necessarily conform to the criteria $\textbf{(A)}$ and $\textbf{(B)}$, even in *Kepler’s* broad bandpass, and so we make some cuts to avoid the extrema. The lowest effective temperature for a planet-hosting star belongs to Kepler-42 (aka KOI-961) with $T_{\mathrm{eff}}=3068\pm174$K [@muirhead:2012] and so we make a cut at 3000K. The highest effective temperature of a planet-hosting star is $8590\pm73$K for Fomalhaut b [@kalas:2008], and so we place an additional cut at $10000$K. Using this range and the @claret:2011 tabulations, we compute $\{q_1,q_2\}$ from $\{u_1,u_2\}$ for all 12026 entries and display the results in Fig.
\[fig:theorycomp\]. It can be easily seen that the entire grid falls within the expected unit-square. We therefore conclude that our parametrization is consistent with the results from a typical stellar atmosphere model. It is interesting to observe that hot stars display a narrow range of LDCs in the *Kepler* bandpass since Wien’s peak wavelength is sufficiently short that the Rayleigh tail dominates the part of the spectrum seen by *Kepler*. ![*Quadratic LDCs generated from stellar atmosphere models over the *Kepler* bandpass by @claret:2011. The original LDCs ($u_1$-$u_2$) have been re-parametrized into our $q_1$-$q_2$ scheme. Stellar parameters range from $0\leq\log g\leq+5$, $-5\leq[\mathrm{M}/\mathrm{H}]\leq+1$ and $4000\leq T_{\mathrm{eff}}\leq10000$K, with the latter indicated by the colour of the points (blue=hot; red=cool).*[]{data-label="fig:theorycomp"}](LDtheory.eps){width="8.4"} Comparison to Previously Suggested Parametrizations {#sec:comp} =================================================== Overview {#sub:compoverview} -------- In order to give our proposed parametrization some context, we here discuss previously suggested parametrizations of the LDC, with sole attention given to the quadratic law [@kopal:1950], due to its very frequent use, particularly in the transiting exoplanet community. There have been numerous distinct suggestions for reasonable parametrizations in the exoplanet literature, and here we compare our proposed parametrization to the previous ones (at least those we are aware of). A comment on mutual correlations {#sub:correls} -------------------------------- Before we continue, there is an important point we would like to establish.
Many of the previous suggestions have been designed to minimize the correlation between the two fitted limb darkening parameters whilst regressing data and *not* specifically designed to sample the physically plausible solutions in an efficient and complete manner (which is the motivation behind our parametrization). So, which motivation is preferable? Astronomical data do not usually constrain freely fitted LDCs particularly well. For example, for rotational modulations the limb darkening profile is degenerate with the spots’ contrasts and geometries [@macula:2012]. In the case of transiting planets or eclipsing binaries, the LDCs are degenerate with the geometry and size of the eclipsing body [@pal:2008; @howarth:2011]. Therefore, in most cases, the data are essentially unconstraining and do little to reduce our ignorance of the true profile. The power of our technique lies in the fact that by efficiently sampling the entire physically plausible parameter volume, we propagate that ignorance into the posterior distributions of all of the parameters which are correlated to the LDCs. So by fitting for $\{q_1,q_2\}$ with uniform priors over the interval $[0,1]$, the derived posteriors account for the full range of physically permissible models. Additionally, the only consequence of fitting two parameters with non-zero mutual correlation is that more computational resources are required to obtain a converged solution, e.g. for Markov Chain Monte Carlo (MCMC) this would require a greater chain length. However, this issue has become somewhat less important in the modern age of computing, given the significant strides in CPU speeds. We therefore argue that it is more valuable to sample from a physically plausible prior volume. Finally, it is important to realize that despite the very wide use of MCMC techniques, other regression techniques are becoming increasingly popular and have different issues affecting their efficiency.
Suppose a set of data strongly constrains the quadratic LDCs. For MCMC [@metropolis:1953; @hastings:1970], one could seed the chain from the approximate solution location and, because the data are constraining, the chain should never cross the three boundary conditions (i.e. it is highly efficient). In contrast, for nested sampling [@skilling:2004], the initial nests stretch across the entire prior volume and thus any regions which violate the boundary conditions would have to be rejected through a likelihood penalty; the larger this region is, the poorer the efficiency of the nested sampling algorithm. As a side note, in the case of poorly constraining data, nested sampling is the more efficient approach since the MCMC routine would randomly walk into unphysical territory frequently, but (with well-chosen priors) nested sampling will not. Despite not being designed to minimize the mutual correlation between the two fitted LDCs, numerical experiments show that our parametrization does in fact reduce the correlation significantly. In recent months, the Hunt for Exomoons with Kepler (HEK) project [@hek:2012] has been employing our proposed parametrization during their fits of *Kepler* transiting planetary candidates, and initial results find that the mutual correlation is reduced from a median value of $\mathrm{Corr}[u_1,u_2]=-0.89$ to $\mathrm{Corr}[q_1,q_2]=-0.37$ (see Table \[tab:correls\]). The reason why our parametrization reduces the correlation can be explained geometrically and is discussed in §\[sub:pal\]. We therefore argue that efficient, complete sampling of the physically plausible prior volume has many advantages over simply reducing mutual correlation, which is why we have pursued such an approach in this paper. However, a by-product of our proposition is that mutual correlations are significantly reduced anyway.
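The effect is easy to verify numerically: $q_1$ and $q_2$ are drawn independently (zero prior correlation by construction), while the implied $(u_1,u_2)$ pairs are strongly anti-correlated under the very same prior. A sketch of this prior-only calculation (the correlations quoted from HEK fits are posterior values, so the numbers need not match exactly):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

rng = random.Random(7)
q1s, q2s, u1s, u2s = [], [], [], []
for _ in range(100_000):
    q1, q2 = rng.random(), rng.random()
    q1s.append(q1)
    q2s.append(q2)
    s = math.sqrt(q1)
    u1s.append(2.0 * s * q2)
    u2s.append(s * (1.0 - 2.0 * q2))

assert abs(pearson(q1s, q2s)) < 0.01  # independent draws by construction
assert pearson(u1s, u2s) < -0.8       # strongly anti-correlated (about -0.87)
```

Under this uniform physical prior the implied $(u_1,u_2)$ correlation is analytically $-\sqrt{3}/2\approx-0.87$, while the fitted pair $(q_1,q_2)$ carries none of it.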
  ------------------ ----------------------------------------- ------------------------- -------------------------
  HEK candidate ID   Model                                     $u_1$-$u_2$ Correlation   $q_1$-$q_2$ Correlation
  HCO-254.01         $\mathcal{P}_{\mathrm{LD-free}}$          $-0.946029$               $-0.290086$
  HCO-254.01         $\mathcal{P}_{\mathrm{LD-free},e_{B*}}$   $-0.878806$               $-0.507617$
  HCO-254.01         $\mathcal{S}_{\mathrm{LD-free}}$          $-0.925289$               $-0.211113$
  HCO-254.01         $\mathcal{S}_{\mathrm{LD-free},e_{B*}}$   $-0.891628$               $-0.406471$
  HCO-254.01         $\mathcal{S}_{\mathrm{LD-free},e_{SB}}$   $-0.951899$               $-0.269654$
  HCA-39.02          $\mathcal{P}_{\mathrm{LD-free}}$          $-0.931938$               $+0.180139$
  HCA-39.02          $\mathcal{S}_{\mathrm{LD-free}}$          $-0.956056$               $-0.374412$
  HCA-669.01         $\mathcal{P}_{\mathrm{LD-free}}$          $-0.949620$               $-0.001698$
  HCA-669.01         $\mathcal{S}_{\mathrm{LD-free}}$          $-0.571816$               $-0.480137$
  HCO-754.01         $\mathcal{P}_{\mathrm{LD-free}}$          $-0.713572$               $-0.447555$
  HCO-754.01         $\mathcal{S}_{\mathrm{LD-free}}$          $-0.703587$               $-0.172976$
  HCV-531.01         $\mathcal{P}_{\mathrm{LD-free}}$          $-0.580507$               $-0.482708$
  HCV-531.01         $\mathcal{S}_{\mathrm{LD-free}}$          $-0.567252$               $-0.472353$
  HCA-941.01         $\mathcal{P}_{\mathrm{LD-free}}$          $-0.599955$               $-0.583683$
  HCA-941.01         $\mathcal{S}_{\mathrm{LD-free}}$          $-0.597042$               $-0.575972$
  HCV-40.01          $\mathcal{P}_{\mathrm{LD-free}}$          $-0.986092$               $-0.116053$
  HCV-40.01          $\mathcal{S}_{\mathrm{LD-free}}$          $-0.985540$               $-0.104638$
  Median             -                                         $-0.891628$               $-0.374412$
  ------------------ ----------------------------------------- ------------------------- -------------------------

\[tab:correls\] Performance metrics {#sub:metrics} ------------------- Each parametrization has two metrics which describe how well it samples the parameter space. We denote “efficiency”, $\epsilon$, as unity minus the fraction of times the parametrization produces an unphysical intensity profile (which would require rejection).
In a practical case, unphysical trials would have to be rejected in a Monte Carlo fit and thus act to reduce the overall efficiency, hence the name for this term. This value is easily calculated with a Monte Carlo experiment of $N\gg1$ synthetic draws from a given joint distribution. The other metric we consider is ‘completeness’, $\kappa$, which describes what fraction of the allowed physical parameter space is explored by the parametrization. A $\kappa<1$ means that certain regions of reasonable and physically plausible realizations of $\{u_1,u_2\}$ are never explored. The $\kappa$ value of a given parametrization, $\btheta$, is simply the area of the loci sampled in $\{u_1,u_2\}$ parameter space multiplied by the efficiency, $\epsilon$, and divided by the area of the physically acceptable region (which happens to equal unity). Note that the parametrization described in this work ($\btheta=\{q_1,q_2\}$; equations 17 & 18) produces $\kappa=1$ and $\epsilon=1$, by virtue of its construction. As mentioned in §\[sub:correls\], one could argue that the correlation between the two fitted limb darkening parameters is also a key metric of interest. However, we make the case here that correlation is not critical in light of the substantial improvements in computational hardware and software over the last decade. We will therefore proceed by considering several popular parametrizations of the LDCs (in chronological sequence) and evaluating their efficiency, $\epsilon$, and completeness, $\kappa$. During this investigation, we found that it is quite rare for authors to declare the upper and lower bounds used on their priors (which are usually uniform). Without the prior bounds, it is not possible to evaluate $\epsilon$ and $\kappa$ and so in these cases we proceed by selecting a choice of prior bounds which ensures $\kappa=1$ for the highest possible $\epsilon$.
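Both metrics are straightforward to estimate by Monte Carlo. A sketch for $\epsilon$ under a uniform rectangular prior (helper names are ours; the two boxes below are the naive $\pm3$ box of §\[sub:visualizing\] and the optimized box discussed in the next subsection):

```python
import random

def physical(u1, u2):
    """The three physical constraints on the quadratic LDCs."""
    return (u1 + u2) < 1.0 and u1 > 0.0 and (u1 + 2.0 * u2) > 0.0

def efficiency(lo1, hi1, lo2, hi2, n=200_000, seed=3):
    """Monte Carlo estimate of epsilon for a uniform rectangular prior."""
    rng = random.Random(seed)
    accepted = sum(
        physical(rng.uniform(lo1, hi1), rng.uniform(lo2, hi2))
        for _ in range(n)
    )
    return accepted / n

# Naive +/-3 box: epsilon = 1/36, i.e. the ~2.8% quoted earlier
assert abs(efficiency(-3, 3, -3, 3) - 1.0 / 36.0) < 0.005
# Optimized box 0 < u1 < +2, -1 < u2 < +1: epsilon = 1/4
assert abs(efficiency(0, 2, -1, 1) - 0.25) < 0.005
```

The triangle of physical solutions has unit area, so $\epsilon$ is just that area divided by the area of the rectangular prior box.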
$\btheta=\{u_1,u_2\}$ {#sub:simple} --------------------- We begin by first considering the naive parameter set of $\btheta=\{u_1,u_2\}$ directly, which serves as a useful baseline for subsequent comparisons. In order to estimate $\kappa$ and $\epsilon$ though, we must first choose upper/lower bounds on these two parameters. As discussed in the previous subsection, we can optimize the prior bounds to ensure $\kappa=1$. This is done by generating $N\gg1$ Monte Carlo realizations of $\{u_1,u_2\}$ across an overly generous interval (in this case we used $[-3,+3]$) and only accepting points which satisfy the conditions stipulated in equation (\[eqn:conditions\]). We find that using $0<u_1<+2$ and $-1<u_2<+1$ ensures $\kappa=1$ for the highest possible efficiency. The corresponding value of $\epsilon$ is 0.25 i.e. sampling from this prior with a lack of constraining data would mean that three out of four trials would have to be rejected. The parameter volume sampled by this prior is illustrated in Fig. \[fig:comps\]. $\btheta=\{u_{+},u_{-}\}\equiv\{u_1+u_2,u_1-u_2\}$ {#sub:brown} -------------------------------------------------- The pioneering work of @brown:2001 offers perhaps the first such example of serious consideration of alternative parametrizations in the exoplanet literature. Using the *Hubble Space Telescope* photometry of HD 209458b, @brown:2001 suggested fitting for $u_{+} \equiv (u_1 + u_2)$ and $u_{-} \equiv (u_1 - u_2)$. Unlike in this work, the purpose of this parametrization was not to ensure physically plausible intensity profiles, but rather to reduce the correlation between the two LDCs in the fitting procedure, as stated in §3.2 of @brown:2001. In order to compute our performance metrics, a choice of prior bounds is required. We choose to select these bounds such that we optimize $\kappa=1$, as discussed earlier in §\[sub:metrics\]. 
Following the same method described in §\[sub:simple\], we estimate that this occurs for $0<u_{+}<+1$ and $-1<u_{-}<+3$. Using these bounds, we calculate that $\epsilon=0.5$. This can be intuitively visualized in Fig. \[fig:comps\]. $\btheta=\{U_1,U_2\}\equiv\{2u_1+u_2,u_1-2u_2\}$ {#sub:holman} ------------------------------------------------ @holman:2006 chose to fit for $U_1 \equiv (2u_1+u_2)$ and $U_2 \equiv (u_1-2u_2)$ because “the resulting uncertainties in those parameters are uncorrelated”. Once again then, it is worth noting that the motivation of this parameter set was not to sample the physically allowed parameter space efficiently. The priors used in the exploration of these parameters are not stated in the paper, and so we assume uniform priors between some upper and lower bounds on each term. The numerical range is not stated in @holman:2006 but we have learned that the exploration was unbounded, with rejections applied to samples which violate the conditions stated in equations (\[eqn:conditionA\]), (\[eqn:conditionB\]) & (\[eqn:conditionC\]) (private communication with M. Holman & J. Winn). We therefore proceed by optimizing the prior bound choice to $\kappa=1$ via the same Monte Carlo method described earlier (§\[sub:simple\]). This procedure yields $0<U_1<+3$ and $-2<U_2<+4$. Using these limits, we calculate $\epsilon=0.278$ for the fixed choice of $\kappa=1$, which is illustrated in Fig. \[fig:comps\]. $\btheta=\{a_1,a_2\}\equiv\{u_1+2u_2,2u_1-u_2\}$ {#sub:burke} ------------------------------------------------ During our analysis of the literature on this subject, we noticed that the parametrization of @holman:2006 was cited by many authors, including @burke:2007. What is interesting is that @burke:2007 state that ‘we follow @holman:2006 by adopting $a_1\equiv(u_1+2u_2)$ and $a_2\equiv(2u_1-u_2)$’, but as discussed earlier @holman:2006 in fact used $U_1\equiv(2u_1+u_2)$ and $U_2\equiv(u_1-2u_2)$.
Therefore, despite @burke:2007 claiming to have simply followed @holman:2006, they had in fact introduced an entirely new parametrization. We explore this parametrization here. @burke:2007 do explicitly declare that they use uniform priors on the LDCs but do not explicitly state the bounds on $a_1$ and $a_2$. However, the authors do state they impose $u_1>0$, $(u_1+u_2)<1$ and $(u_1+2u_2)>0$, which are identical to the conditions derived in this work (see equations \[eqn:conditionA\], \[eqn:conditionB\] & \[eqn:conditionC\]). The $a_1$ parameter is therefore bounded below by $a_1>0$ but the other constraints do not naturally impose any other bounds. We are also unable to find any way of inferring any other bounds from the paper of @burke:2007. We therefore proceed by selecting bounds on $a_1$ and $a_2$ ourselves and we choose bounds which ensure $\kappa=1$, as discussed in §\[sub:metrics\]. Following the same Monte Carlo method used previously (e.g. see §\[sub:simple\]), we determine $0<a_1<2$ and $-1<a_2<+5$ to ensure $\kappa=1$. Utilizing these bounds, we estimate $\epsilon=(5/12)=0.417$, which is visualized in Fig. \[fig:comps\]. $\btheta=\{w_1,w_2\}\equiv\{u_1\cos\phi-u_2\sin\phi,u_1\sin\phi+ u_2\cos\phi\}$ {#sub:pal} ---------------------------------------------------------------- @pal:2008 proposed that the correlation between $u_1$ and $u_2$ can be minimized by using principal component analysis. The author suggested the parametrization $w_1 \equiv (u_1\cos\phi-u_2\sin\phi)$ and $w_2 \equiv (u_1\sin\phi + u_2\cos\phi)$, where $0<\phi<\pi/2$ is chosen such that the correlation is minimized. Once again then, we stress that this parametrization is not designed to sample from the physically plausible parameter space. @pal:2008 does not suggest bounds on $w_1$-$w_2$ and so we proceed to select bounds in such a way as to optimize $\kappa=1$.
This optimization process is sensitive to $\phi$, though, and it is possible to derive different bounds depending upon what one assumes for $\phi$. In order to explore this issue fully, we fix $\phi$ to a specific value between $0$ and $\pi/2$ and then optimize the bounds on $w_1$ and $w_2$ to ensure $\kappa=1$, using the same Monte Carlo method employed earlier (e.g. see §\[sub:simple\]). We then use these bounds to compute $\epsilon$ as usual. For each choice of $\phi$ then, we compute a unique value of $\epsilon$, i.e. $\epsilon(\phi)$. Repeating over a wide range of $\phi$ values, we find that $\phi=45^{\circ}$ yields the maximum efficiency of $\epsilon=0.5$, which drops to $\epsilon=0.25$ as one rotates round to $\phi=0^{\circ}$ and $90^{\circ}$. Setting $\phi=45^{\circ}$ then optimizes the efficiency of sampling the physically plausible parameter space. We stress that this choice is not made to minimize the correlation between $w_1$ and $w_2$, for which we note @pal:2008 recommend $\phi=35^{\circ}$-$40^{\circ}$. For the $\phi=45^{\circ}$ case, however, $w_1=(u_1-u_2)=u_{-}$ and $w_2=(u_1+u_2)=u_{+}$ (up to a common factor of $1/\sqrt{2}$), and so we recover the same parametrization used by @brown:2001. For this reason, we do not include the parametrization of @pal:2008 in Fig. \[fig:comps\] and Table \[tab:comps\]. One interesting point is that the boundary conditions in equation (\[eqn:conditionA\]) and equation (\[eqn:conditionC\]) form two sides of the triangle described in Fig. \[fig:constraints\] and taking the bisector of these two lines yields a line inclined by $\phi=\frac{1}{2}[\tan^{-1}(\frac{1}{2})+\tan^{-1}(1)]=35.8^{\circ}$, which is also marked in Fig. \[fig:constraints\]. Therefore, the suggested angle of $\phi=35^{\circ}$-$40^{\circ}$ by @pal:2008 effectively just travels up along this bisector. Indeed, one can consider this to be an alternative geometric explanation for the suggestion of @pal:2008.
It also highlights how our parametrization, $\{q_1,q_2\}$, should be expected to exhibit inherently low mutual correlation since it also travels up along this bisector line. This was indeed verified to be the case earlier in §\[sub:correls\] and here we are able to provide the explicit explanation for this observation. $\btheta=\{u_1,u_{+}\}\equiv\{u_1,u_1+u_2\}$ {#sub:carter} -------------------------------------------- The final parametrization we consider is that of $\btheta=\{u_1,u_{+}\}\equiv\{u_1,u_1+u_2\}$, which has been used in papers such as @nesvorny:2012 and @hek:2013. The choice of bounds here is usually stated to be $0<u_1<+2$ and $0<(u_1+u_2)<+1$, which incidentally is the same result that one finds when one optimizes the bounds to $\kappa=1$. The loci of points form a parallelogram on the $\{u_1,u_2\}$ plane (as shown in Fig. \[fig:comps\]), unlike any of the previously considered parametrizations which formed rectangles (or a triangle in the case of $\btheta=\{q_1,q_2\}$) and produces an efficiency of exactly one half i.e. $\epsilon=0.5$. Table \[tab:comps\] shows the efficiency and bounds of this parametrization in relation to previously considered ones. 
  -------------------------------------------------------------- ---------------------- ---------------------- ------------------------
  Parametrization, $\btheta$                                     Parameter 1 Interval   Parameter 2 Interval   Efficiency, $\epsilon$
  $\{u_1,u_2\}$                                                  $[0,+2]$               $[-1,+1]$              $0.250$
  $\{u_{+},u_{-}\}\equiv\{u_1+u_2,u_1-u_2\}$                     $[0,+1]$               $[-1,+3]$              $0.500$
  $\{a_1,a_2\}\equiv\{u_1+2u_2,2u_1-u_2\}$                       $[0,+2]$               $[-1,+5]$              $0.417$
  $\{U_1,U_2\}\equiv\{2u_1+u_2,u_1-2u_2\}$                       $[0,+3]$               $[-2,+4]$              $0.278$
  $\{u_1,u_{+}\}\equiv\{u_1,u_1+u_2\}$                           $[0,+2]$               $[0,+1]$               $0.500$
  $\{q_1,q_2\}\equiv\{(u_1 + u_2)^2,(u_1/2)(u_1 + u_2)^{-1}\}$   $[0,+1]$               $[0,+1]$               $1.000$
  -------------------------------------------------------------- ---------------------- ---------------------- ------------------------

\[tab:comps\] ![*Loci of points sampled by various parametrizations of the quadratic LDCs. In each case, the completeness, $\kappa$, equals unity since we have optimized the prior bounds to ensure this condition. This is done since we are unable to find corresponding upper/lower bounds in the referenced literature. Grey area represents the physically plausible parameter range.* []{data-label="fig:comps"}](comps.eps){width="8.4"} Other Two-Parameter Limb Darkening Laws {#sec:otherlaws} ======================================= General principle {#sub:general} ----------------- Although the quadratic law is the most widely used two-parameter limb darkening law in the literature, the so-called ‘square-root’ law and to a lesser extent the ‘logarithmic’ law have also gained traction. As in the quadratic case, enforcing the physical conditions of an everywhere-positive intensity profile and a monotonically decreasing intensity from centre-to-limb imposes three boundary conditions on the two coefficients describing each law. Hence, we once again have a two-dimensional plane featuring three (non-parallel) boundary conditions which enclose a triangle.
Therefore, sampling from this triangle in a uniform manner can be achieved using exactly the same trick described for the quadratic law case. One can actually take this a step further and state that for *any* problem with two variables with a uniformly distributed joint PDF and three (non-parallel) boundary conditions, 100% complete and efficient sampling is easily achieved using the triangular sampling technique discussed in this paper. Square-root law {#sub:sqrtlaw} --------------- Arguably, the second-most popular two-parameter limb darkening law is that of the square-root law. @hamme:1993 argues that this is a superior approximation to the quadratic law for late-type stars in the near-infrared. Recent examples include applications to the eclipsing binary system LSPM J1112+7626 [@irwin:2011] and the transiting planet system GJ1214 [@berta:2012]. The law was first proposed in @diaz:1992 and describes the specific intensity as $$\begin{aligned} I(\mu)/I(1) &= 1 - c (1-\mu) - d (1-\sqrt{\mu}), \label{eqn:sqrtlaw}\end{aligned}$$ where $c$ and $d$ are the two LDCs associated with this law. 
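A quick numerical check of the two physical conditions for an illustrative coefficient pair (the values of $c$ and $d$ below are chosen by us purely for demonstration, not taken from any tabulation):

```python
def sqrt_law(mu, c, d):
    """Square-root limb darkening profile I(mu)/I(1) = 1 - c(1-mu) - d(1-sqrt(mu))."""
    return 1.0 - c * (1.0 - mu) - d * (1.0 - mu ** 0.5)

# Illustrative coefficients only; real values would come from a fit or table
c, d = 0.2, 0.5
mus = [i / 1000.0 for i in range(1001)]
profile = [sqrt_law(mu, c, d) for mu in mus]

# Everywhere-positive intensity ...
assert all(v > 0.0 for v in profile)
# ... and monotonically decreasing from centre (mu = 1) to limb (mu = 0)
assert all(a <= b for a, b in zip(profile, profile[1:]))
```

For this pair both physical conditions hold across the full disk, consistent with the constraint triangle derived next.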
Following the same procedure as used earlier in §\[sub:derivation\], imposing the condition of an everywhere-positive profile yields $$\begin{aligned} c + d < 1.\end{aligned}$$ Similarly, the condition of a monotonically decreasing intensity profile from centre-to-limb gives two constraints: $$\begin{aligned} &d > 0,\\ &2 c + d > 0.\end{aligned}$$ These three non-parallel conditions are easily imparted using the triangular sampling technique and using the replacements $q_1^{\mathrm{sqrt}}$ and $q_2^{\mathrm{sqrt}}$ defined over the interval $[0,1]$: $$\begin{aligned} q_1^{\mathrm{sqrt}} &= (c + d)^2,\\ q_2^{\mathrm{sqrt}} &= \frac{d}{2(c + d)}.\end{aligned}$$ Alternatively, sampling from the uniform, bivariate Dirichlet distribution, $\mathcal{P}(\balpha=\mathbf{1};v_1^{\mathrm{sqrt}},v_2^{\mathrm{sqrt}})$, may be achieved using: $$\begin{aligned} v_1^{\mathrm{sqrt}} &= d/2,\\ v_2^{\mathrm{sqrt}} &= 1 - c - d.\end{aligned}$$ Logarithmic law {#sub:loglaw} --------------- @klinglesmith:1970 proposed a logarithmic limb darkening law with the following form $$\begin{aligned} I(\mu)/I(1) &= 1 - A (1-\mu) - B \mu (1-\log\mu), \label{eqn:loglaw}\end{aligned}$$ where $A$ and $B$ are the two associated LDCs.
Again, following the procedure used earlier in §\[sub:derivation\], we find that imposing the condition of an everywhere-positive profile yields $$\begin{aligned} A < 1.\end{aligned}$$ Similarly, the condition of a monotonically decreasing intensity profile from centre-to-limb gives two constraints: $$\begin{aligned} &A + B > 0,\nonumber\\ &B < 0.\end{aligned}$$ These three non-parallel conditions are again easily imparted using the triangular sampling technique and using the replacements $q_1^{\mathrm{log}}$ and $q_2^{\mathrm{log}}$ defined over the interval $[0,1]$: $$\begin{aligned} q_1^{\mathrm{log}} &= (B + 1)^2,\\ q_2^{\mathrm{log}} &= \frac{1 - A}{B + 1}.\end{aligned}$$ Alternatively, sampling from the uniform, bivariate Dirichlet distribution, $\mathcal{P}(\balpha=\mathbf{1};v_1^{\mathrm{log}},v_2^{\mathrm{log}})$, may be achieved using $$\begin{aligned} v_1^{\mathrm{log}} &= 1 - A,\\ v_2^{\mathrm{log}} &= -B.\end{aligned}$$ Exponential law {#sub:explaw} --------------- The final two-parameter limb darkening law we consider comes from @claret:2003 and takes the form $$\begin{aligned} I(\mu)/I(1) &= 1 - g (1-\mu) - h \frac{1}{1-e^\mu}, \label{eqn:explaw}\end{aligned}$$ where $g$ and $h$ are the two associated limb darkening coefficients. Following the procedure used earlier in §\[sub:derivation\] once more, we find that imposing the condition of an everywhere-positive profile yields two constraints (unlike all previous examples where this condition only imposed one meaningful constraint): $$\begin{aligned} &h < 1 - e^1,\nonumber\\ &h < 0.\end{aligned}$$ However, these two conditions are parallel and, since $0>(1-e^1)$, they simply boil down to $h<(1-e^1)$.
Similarly, the condition of a monotonically decreasing intensity profile from centre-to-limb gives two constraints: $$\begin{aligned} &h < 0,\nonumber\\ &g > \frac{h\,e^1}{(1-e^1)^2}.\end{aligned}$$ The first of these two conditions is parallel to the previously derived constraint of $h<(1-e^1)$ and in fact less constraining, so we can discard it. In total then, we have only two non-parallel boundary conditions. As a result, a triangular enclosed region is not formed in the joint probability distribution and so the triangular sampling technique discussed in this paper is not applicable. Discussion & Conclusions {#sec:discussion} ======================== In this paper, we have presented new parametrizations for the LDCs of several two-parameter limb darkening laws, including the popular quadratic (§\[sec:quadratic\]) and square-root laws (§\[sub:sqrtlaw\]). When sampled over the interval $[0,1]$, our parametrizations exclusively sample the complete range of physically plausible LDCs (100% efficient and 100% complete). This is twice as efficient as the next best parametrization proposed previously (§\[sec:comp\]). In the case of the quadratic law, we show that our parametrization also reduces the mutual correlation between the two LDCs (§\[sub:correls\]) with a natural geometric explanation (§\[sub:pal\]), although this was not the motivation behind our formulation. Fitting astronomical data with our parametrization for the LDCs ensures that all model parameters fully account for one’s ignorance about the stellar intensity profile, leading to more realistic uncertainty estimates. Derived parameters make no assumption about the stellar atmosphere model, except the functional form of the limb darkening law used to describe it (for which we provide several choices) and that the observations are of normal, main-sequence stars in broad bandpasses.
These parametrizations are applicable to any observation affected by limb darkening, such as optical interferometry, microlensing, eclipsing binaries and transiting planets. Our parametrization may be explained as follows. Requiring the intensity profile to be everywhere-positive and monotonically decreasing from centre-to-limb imposes three non-parallel boundary conditions on two LDCs (see equation \[eqn:conditions\]). Given that the two LDCs live on a two-dimensional plane, the three boundary conditions describe a triangular region where physically plausible LDCs may reside. This triangular region can be sampled uniformly by re-parametrizing the LDCs from $\{u_1,u_2\}$ to $\{q_1,q_2\}$ (see equation \[eqn:myinverseeqn\]) according to a technique used in computer graphical programming [@turk:1990]: triangular sampling. An equivalent method is to draw a random variate from a uniform, bivariate Dirichlet distribution. We note that the solution is general to any situation where two parameters are bound by three non-parallel boundary conditions. Or, even more generally, when $N$ parameters are mutually constrained by $N+1$ non-parallel boundary conditions, leading to tetrahedral and hyper-tetrahedral sampling. In the case of exoplanet transits, we are therefore faced with the unusual situation of the field of exoplanets drawing from computer games, rather than the other way around. Acknowledgements {#acknowledgements .unnumbered} ================ This work was performed \[in part\] under contract with the California Institute of Technology (Caltech) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. Thanks to G. Turk & J. Irwin for useful discussions and comments in preparing this manuscript. Special thanks to the anonymous reviewer for his/her positive and constructive feedback. [99]{} Aufdenberg, J. P., Ludwig, H.-G. & Kervella, P., 2005, ApJ, 633, 424 Berta, Z. K. et al., 2012, ApJ, 747, 35 Brown, T.
M., Charbonneau, D., Gilliland, R. L., Noyes, R. W. & Burrows, A., 2001, ApJ, 552, 699 Burke, C. J. et al., 2007, ApJ, 671, 2115 Carter, J. A., Winn, J. N., Gilliland, R. & Holman, M. J., 2009, ApJ, 696, 241 Claret, A., 2000, A&A, 363, 1081 Claret, A. & Hauschildt, P. H., 2003, A&A, 412, 241 Claret, A. & Bloemen, S., 2011, A&A, 529, 75 Diaz-Cordoves, J. & Gimenez, A., 1992, A&A, 259, 227 van Hamme, W., 1993, AJ, 106, 2096 Hastings, W. K., 1970, Biometrika, 57, 97 Hayek, W., Sing, D., Pont, F. & Asplund, M., 2012, A&A, 539, 1 Holman, M. J. et al., 2006, ApJ, 652, 1715 Howarth, I. D., 2011, MNRAS, 418, 1165 Irwin, J. M. et al., 2011, ApJ, 742, 123 Kalas, P. et al., 2008, Science, 322, 1345 Kipping, D. M., 2012, MNRAS, 427, 2487 Kipping, D. M., 2013, MNRAS, 434, L51 Kipping, D. M., Bakos, G. Á., Buchhave, L. A., Nesvorný, D. & Schmitt, A. R., 2012, ApJ, 750, 115 Kipping, D. M., Hartman, J., Buchhave, L. A., Schmitt, A. R., Bakos, G. Á. & Nesvorný, D., 2013, ApJ, 770, 101 Kipping, D. M., Forgan, D., Hartman, J., Nesvorný, D., Bakos, G. Á., Schmitt, A. R., & Buchhave, L. A., 2013, ApJ, submitted (astro-ph:1306.1530) Kjurkchieva, D., Dimitrov, D., Vladev, A. & Yotov, V., 2013, MNRAS, 431, 3654 Klinglesmith, D. A. & Sobieski, S., 1970, A&A, 75, 175 Kopal, Z., 1950, Harvard Col. Obs. Circ., 454, 1 Mandel, K. & Agol, E., 2002, ApJ, 580, 171 Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E., 1953, J. Chem. Phys., 21, 1087 Muirhead, P. S. et al., 2012, ApJ, 747, 144 Neilson, H. R. & Lester, J. B., 2012, A&A, 544, 117 Nesvorný, D., Kipping, D. M., Buchhave, L. A., Bakos, G. Á., Hartman, J. & Schmitt, A. R., 2012, Science, 336, 1133 Pal, A., 2008, MNRAS, 390, 281 Schlawin, E., Agol, E., Walkowicz, L. M., Covey, K. & Lloyd, J. P., 2010, ApJ, 722, 75 Sing, D. K., 2010, A&A, 510, 21 Skilling, J., 2004, in Fischer R., Preuss R., Toussaint U. V., eds, AIP Conf. Ser. Vol. 735, Nested Sampling. Am. Inst. Phys., p. 
395 Turk, G., 1990, in Glassner A., ed., Generating Random Points in a Triangle, in Graphics Gems I. Academic Press, p. 24 Witt, H. J., 1995, ApJ, 449, 42 Zub, M. et al., 2011, A&A, 525, 15 \[lastpage\] [^1]: E-mail: dkipping@cfa.harvard.edu [^2]: http://kurucz.harvard.edu/ [^3]: http://www.hs.uni-hamburg.de/EN/For/ThA/phoenix/
--- abstract: 'Dihadron fragmentation functions describe the probability that a quark fragments into two hadrons plus other undetected hadrons. In particular, the so-called interference fragmentation functions describe the azimuthal asymmetry of the dihadron distribution when the quark is transversely polarized. They can be used as tools to probe the quark transversity distribution in the nucleon. Recent studies on unpolarized and polarized dihadron fragmentation functions are presented, and we discuss their role in giving insights into transverse spin distributions.' address: - '$^{1}$ INFN-Sezione di Pavia, 27100 Pavia, Italy.' - '$^{2}$ Dipartimento di Fisica Nucleare e Teorica, Università di Pavia, 27100 Pavia, Italy.' author: - 'A. Courtoy$^{1}$, A. Bacchetta$^{1, 2}$ and M. Radici$^{1}$' title: Dihadron fragmentation functions and their relevance for transverse spin studies --- Introduction ============ Our knowledge on the hadron structure is incomplete. We know that the Parton Distribution Functions (PDFs) describe the one-dimensional structure of hadrons. At leading order, the PDFs are three: number density, helicity and transversity. However the experimental knowledge on the latter is rather poor as it is a chiral-odd quantity not accessible through fully inclusive processes. Semi-inclusive production of two hadrons [@Collins:1993kq; @Jaffe:1997hf] offers an alternative way to access transversity, where the chiral-odd partner of transversity is represented by the Dihadron Fragmentation Functions (DiFF) $H_1^{{\sphericalangle}}$ [@Radici:2001na], which relates the transverse spin of the quark to the azimuthal orientation of the two-hadron plane. Since the transverse momentum of the hard parton is integrated out, the cross section can be studied in the context of collinear factorization. 
This peculiarity is an advantage over the $p_{T}$-factorization framework, where the cross sections involve convolutions of the relevant functions instead of simple products. The transversely polarized DiFF has been computed only in a spectator model [@Bacchetta:2006un]. Recently, the HERMES collaboration has reported measurements of the asymmetry containing the product $h_1 H_1^{{\sphericalangle}}$ [@Airapetian:2008sk]. The COMPASS collaboration has presented analogous preliminary results [@Martin:2007au]. The BELLE collaboration has also presented preliminary measurements of the azimuthal asymmetry in $e^+e^-$ annihilation related to the DiFF [@Vossen:2009xz]. Our present goal is to extract transversity through this channel. To this end, we need an expression for the chiral-odd DiFF $H_1^{{\sphericalangle}}$ obtained from $e^+e^-$ data. This in turn requires knowledge of the unpolarized DiFF $D_1$. Hence, as a first step, we present here a parameterization of the unpolarized DiFF $D_1$ as given by the Monte Carlo generator (MC) of the BELLE collaboration. Two-hadron Inclusive DIS: towards Transversity ============================================== We consider the SIDIS process $e(l)+N^\uparrow(P) \rightarrow e(l')+ \pi^+(P_1)+ \pi^-(P_2)+ X$, where the momentum transfer $q=l-l'$ is space-like, with $l,l'$ the lepton momenta before and after the scattering. The two pions coming from the fragmenting quark have momenta $P_1$ and $P_2$, respectively, and invariant mass $M_h$, which is considered to be much smaller than the hard scale of the process. We introduce the vectors $P_h=P_1+P_2$ and $R=(P_1-P_2)/2$. We describe a 4-vector $a$ as $[a^-,a^+,a^x,a^y]$, i.e. in terms of its light-cone components $a^\pm = (a^0 \pm a^3)/\sqrt{2}$ and its transverse spatial components. We introduce the light-cone fraction $z= P_h^-/k^-$. $P$ is the momentum of the nucleon target with mass $M$. We refer to Refs. 
[@Bacchetta:2006un; @Radici:2001na] for details and kinematics. The spin asymmetry $A_{UT}^{\sin(\phi_R^{} + \phi_S^{})\,\sin\theta}(x,y,z,M_h^2)$ is related to an asymmetric modulation of pion pairs in the angles $\phi_S^{}$ and $\phi_R^{}$, which represent the azimuthal orientation with respect to the scattering plane of the target transverse polarization and of the plane containing the pion pair momenta, respectively. The polar angle $\theta$ describes the orientation of $P_1$, in the center-of-mass frame of the two pions, with respect to the direction of $P_h$ in the lab frame. The asymmetry is expressed as $$A_{UT}^{\sin(\phi_R^{} + \phi_S^{})\,\sin\theta}(x,y, z,M_h^2) \varpropto - \frac{|\bm{R}|}{M_h}\, \frac{\sum_q e_q^2\,h_1^q(x)\ H_{1,q}^{{\sphericalangle}sp}(z,M_h^2)} {\sum_q e_q^2\,f_1^q(x)\ D_{1,q}^{ss+pp}(z,M_h^2)} \quad , \label{eq:asydis}$$ where the $x$-dependence is given by the PDFs only. The $z$ and $ M_h$ dependence are governed by the DiFFs whose functional form we need to determine. The procedure allowing us to give the required parameterizations for the DiFFs is detailed in the following sections. The Artru-Collins Asymmetry =========================== We further consider the process $e^+(l) e^-(l') \rightarrow (\pi^+ \pi^-)_{\rm jet1} (\pi^+ \pi^-)_{\rm jet2} X$, with (time-like) momentum transfer $q=l+l'$. Here, we have two pairs of pions, one originating from a fragmenting parton and the other one from the related antiparton.[^1] The differential cross sections also depend on the invariant $y = P_h\cdot l / P_h \cdot q$ which is related, in the lepton center-of-mass frame, to the angle $\theta_2 = \arccos (\bm{l_{e^+}}\cdot\bm{P}_h / (|\bm{l_{e^+}}|\,|\bm{P}_h|))$, with $\bm{l_{e^+}}$ the momentum of the positron, by $y = (1+\cos\theta_2)/2$. The dihadron Fragmentation Functions are involved in the description of the fragmentation process $q\to \pi^+ \pi^- X$, where the quark has momentum $k$. 
They are extracted from the correlation function [@Bacchetta:2002ux] $$\Delta^q(z,\cos\theta,M_h^2,\phi_R) = \frac{z |\vec R|}{16\,M_h}\int d^2 \vec k_T \; d k^+\,\Delta^q(k;P_h,R) \Big|_{k^- = P_h^-/z} \; , \label{eq:delta1}$$ where $$\begin{aligned} \Delta^q(k,P_h,R)_{ij} & =&\sum_X \, \int \frac{d^4\xi}{(2\pi)^{4}}\; e^{+i k \cdot \xi} \langle 0| {\cal U}^{n_+}_{(-\infty,\xi)} \,\psi_i^q(\xi)|P_h, R; X\rangle \langle P_h, R; X| \bar{\psi}_j^q(0)\, {\cal U}^{n_+}_{(0,-\infty)} |0\rangle \,. \label{e:delta2}\end{aligned}$$ Since we are going to perform the integration over the transverse momentum $\vec{k}_T$, the Wilson lines ${\cal U}$ can be reduced to unity using a light-cone gauge. The only fragmentation functions surviving after integration over the azimuthal angle defining the position of the lepton plane w.r.t. the laboratory plane are [@Boer:2003ya] $$\begin{aligned} D_1^q(z,\cos\theta,M_h^2) &= 4\pi\, \Tr[\Delta^q(z,\cos\theta,M_h^2,\phi_R)\, \gamma^-], \\ \frac{\epsilon_T^{ij}\,R_{T j}}{M_h}\, H_1^{{\sphericalangle}\, q}(z,\cos\theta,M_h^2) &=4\pi \, \Tr[\Delta^q(z,\cos\theta,M_h^2,\phi_R)\,i\,\sigma^{i -}\,\gamma_5].\end{aligned}$$ We perform an expansion in terms of Legendre functions of $\cos\theta$ (and $\cos\overline{\theta}$) and keep only the $s$- and $p$-wave components of the relative partial waves of the pion pair. By further integrating upon $d\cos\theta$ and $d\cos\overline{\theta}$, we isolate only the specific contributions of the $s$ and $p$ partial waves to the respective DiFFs. The azimuthal Artru-Collins asymmetry $A(\cos \theta_2, z, \bar z, M_h^2, \bar M_h^2)$ [@Artru:1995zu] corresponds to a $\cos(\phi_R+\phi_{\bar R})$ modulation in the cross section for the process under consideration. 
It can be written in terms of DiFFs as $$\begin{aligned} A(\cos\theta_2,z,M_h^2,\bar{z},\bar{M}_h^2) &= &\frac{\sin^2 \theta_2}{1+\cos^2 \theta_2} \, \frac{\pi^2}{32}\, \frac{|\bm{R}|\,|\overline{\bm{R}}|}{M_h\,\overline{M}_h} \, \frac{\sum_q e_q^2 \, H_{1,q}^{{\sphericalangle}sp}(z,M_h^2)\, \overline{H}_{1,q}^{{\sphericalangle}sp}(\overline{z},\overline{M}_h^2)} {\sum_q e_q^2\, D_{1,q}^{ss+pp}(z,M_h^2) \, \overline{D}_{1,q}^{ss+pp}(\overline{z},\overline{M}_h^2) } \; , \label{eq:asye+e-}\end{aligned}$$ with $ |\bm{R}| =\frac{M_h}{2} \sqrt{1- 4\,m_{\pi}^2/M_h^2} $. To extract a parameterization of the function $H_1^{\sphericalangle}$, we need to know the function $D_1$. Electron-Positron Annihilation: The Unpolarized Cross-Section from BELLE ======================================================================== A model-independent parameterization of a function leaves huge freedom in the choice of the functional form. First, one can guess the origin of the shape of the data from physical arguments. One can also draw inspiration from comparing model results with the data: here, we take into account the results of Ref. [@Bacchetta:2006un] (keeping a critical eye on its shortcomings) in defining the shape of the MC histograms for the unpolarized cross section. In the process $q\to \pi^+ \pi^- X$, the prominent channels for an invariant mass of the pion pair in the range $2 m_{\pi} < M_h \lesssim 1.5$ GeV are basically: - the fragmentation into a $\rho$ resonance decaying into $\pi^+ \pi^-$, responsible for a peak at $M_h \sim$ 770 MeV; - the fragmentation into an $\omega$ resonance decaying into $\pi^+ \pi^-$, responsible for a small peak at $M_h \sim$ 782 MeV, plus the fragmentation into an $\omega$ resonance decaying into $\pi^+ \pi^- \pi^0$ ($\pi^0$ unobserved), responsible for a broad peak around $M_h \sim$ 500 MeV; - the continuum, i.e. the fragmentation into an “incoherent” $\pi^+ \pi^-$ pair, which is probably the most important channel. 
It is also the most difficult channel to describe with purely model-based physical arguments. In addition to the channel decomposition, one has to take into account the flavor decomposition of the cross section. This further decomposition is particularly important if one wants to be able to use the resulting parametrization in another context, e.g., SIDIS. For the time being, the MC data provided by the BELLE collaboration are additionally separated into flavors, i.e., $uds$ contributions and $c$ contributions. The experimental analyses conclude that the charm contribution to the unpolarized cross section is non-negligible at BELLE’s energy.[^2] The main observations one can make, before fitting the data, are the following. First, the most important contribution from the charm is in the continuum and cannot be neglected. The determination of a functional form for $D_1$ then consists of four parallel steps, i.e. the 2-dimensional parameterization of the $\rho$ and $\omega$ channels and of the continuum for $uds$, and of the continuum only for the charm. Second, it can be deduced that both the $\rho$ and $\omega$ channels play a role at high $z$ values, while the $\rho$ seems less important at lower $z$ values, as can be seen in Fig. \[rho\_mc\_d1\]. On the other hand, the continuum decreases with $z$, and this behavior is different for the $uds$ and the $c$ flavors. Also, it can be observed, e.g. in Fig. \[rho\_mc\_d1\], that the behavior in $M_h$ changes from $z$-bin to $z$-bin. These are signs that the dependence on $z$ and $M_h$ cannot be factorized. ![ Number of events $N$ for the unpolarized $e^+e^-$ annihilation into 2 pions in a jet (plus anything else) at BELLE, normalized by the integrated luminosity $647.26 \,\mbox{pb}^{-1}$. We show only the resonant channel for the $\rho$ production. The data are represented by the dots. The error on the data (not plotted here) is assumed to be $\sqrt{N}$. 
The dashed lines represent the parameterization, and the band its error band. $M_h$ in GeV.[]{data-label="rho_mc_d1"}](hist-rho-uds-1 "fig:"){height="4.5cm"} ![](hist-rho-uds-2 "fig:"){height="4.5cm"}\ ![](hist-rho-uds-3 "fig:"){height="4.5cm"} ![](hist-rho-uds-4 "fig:"){height="4.5cm"} Following Eq. (\[eq:asye+e-\]), the unpolarized cross section that we are considering here is differential in $(\cos \theta_2, z, M_h^2, \bar z, \bar M_h^2)$. 
The $\theta_2$-dependence is provided by the BELLE collaboration, and the set of variables $(\bar z, \bar M_h)$ is integrated out within the experimental bounds. The methodology is as follows. The unpolarized cross section, differential in $M_h$ and $z$, is $$\begin{aligned} \frac{d \,\sigma^{U}}{2M_h dM_h dz} =\sum_{a,\bar a}\; e_a^2 \,\frac{1}{3} \frac{6\,\alpha^2}{Q^2}\,\langle 1+\cos^2\theta_2 \rangle\,\;z^2 \, D_1^{a} (z, M_h^2)\,\int_0^1 d\bar{z}\,\int_{2m_{\pi}}^{M_{h}^{\mbox{\scriptsize max} }} 2 \bar{M}_h d \bar{M}_h\, \bar{z}^2\, \bar{D}_1^{a} (\bar{z}, \bar{M}_h^2)\quad ,{\nonumber}\end{aligned}$$ with the integration limits to be modified according to the experiment, and where $D_1=D_1^{ss+pp}$. For both the $uds$ and $c$ flavors, the fitted function takes the form $$\begin{aligned} FF_U&=& \frac{1}{3} \,\frac{6\alpha^2}{Q^2} \langle 1+\cos^2\theta_2 \rangle\, \sum_a\, {e_a^2}\, \int_{z_{bin}} d{z}\, f_{D_1}^a(z, M_h) \int_{0.2}^{1} d{\bar z} \int_{2m_{\pi}}^{1.5\,\mathrm{GeV}} d \bar{M}_h\, f_{D_1}^{\bar a}(\bar z, \bar M_h)\quad , \label{ff}\end{aligned}$$ where $\int_{z_{bin}} d{z}$ means that we average the $z$-dependence over each $z$-bin, and with our functional form appearing in $ f_{D_1}^a(z, M_h)=2M_h\, z^2\,D_{1}^{a}(z, M_h^2)$. The upper integration limit in Eq. (\[ff\]) is chosen to be in agreement with the condition $M_h \ll Q$. The DiFFs for quarks and antiquarks are related through the charge conjugation rules described in Ref. [@Bacchetta:2006un]. The determination of a functional form $f_{D_1}$ is done by fitting, by means of a $\chi^2$ goodness-of-fit test, the MC histograms (4 $z$-bins and about 300 $M_h$-bins) for each channel. The best-fit functional forms lead to interesting results. The most important point is that there is no way the $z$ and the $M_h$ dependence can be factorized. 
Moreover, we have realized that no acceptable fit could be reached with a trivial functional form for the continua. In Fig. \[rho\_mc\_d1\] we show, as an example, the MC of the $\rho$ production together with its parametrization. In the depicted case, the joint $\chi^2/$d.o.f. is $\sim1.25$ [@noi-diff]. We quote the $\chi^2$ values for the other channels: $\omega$-production ($\chi^2/$d.o.f. $\sim 1.3$); $uds$-background ($\chi^2/$d.o.f. $\sim 1.4$); $c$-background ($\chi^2/$d.o.f. $\sim 1.55$) [@noi-diff]. The propagation of errors gives rise to the $1$-$\sigma$ error band shown in light blue. Towards an extraction of $H_1^{\sphericalangle}$ ================================================ The DiFF $H_1^{\sphericalangle}(z, M_h)$ can be extracted from the Artru-Collins asymmetry. The preliminary data from the BELLE collaboration [@Vossen:2009xz] will be our starting point. Those data are binned in $(z, \bar z)$ and $(M_h, \bar M_h)$. While we have stated in the previous section that no factorization of the $(z, M_h)$ variables is possible for $D_1$, the data do not allow us to make a similar statement for $H_1^{\sphericalangle}$. The next step consists in the determination of a functional form, e.g., $$\begin{aligned} f_{H_1^{\sphericalangle}}(z, M_h,\bar z, \bar M_h)&\propto& f(z)f(\bar z)\, g(M_h)g(\bar M_h)\quad . \label{simple}\end{aligned}$$ Even though we expect $H_1^{\sphericalangle}$ to arise from an $sp$-wave interference, we presently have no guidance on the interplay of the $(z, M_h)$ variables in the asymmetry. We therefore opt for the simpler functional form (\[simple\]). Given the large uncertainties (we sum statistical and systematic errors in quadrature) on the asymmetry, as well as the shape of the $(z, \bar z)$ dependence, it is easily realized that more than one functional form could fit the data. We are currently working on improving our fitting procedure in order to extract as much information as we can from the data. 
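The fits described here amount to minimizing a $\chi^2$ built from $\sqrt{N}$ bin errors. The toy sketch below stands in for the real procedure: a single Breit-Wigner-like $\rho$ shape with one free normalization replaces the actual multi-channel functional forms, and all shapes and numbers are illustrative, not those of the BELLE fit:

```python
def breit_wigner(m, m0=0.770, gamma=0.150):
    """Toy Breit-Wigner line shape for the rho peak (illustrative only)."""
    return 1.0 / ((m * m - m0 * m0) ** 2 + (m0 * gamma) ** 2)

def chi2(norm, bins, counts):
    """Chi-square of the model norm*BW against counts, with sigma = sqrt(N)."""
    total = 0.0
    for m, n in zip(bins, counts):
        model = norm * breit_wigner(m)
        total += (n - model) ** 2 / max(n, 1.0)  # sigma^2 = N
    return total

def fit_norm(bins, counts):
    """Grid scan over the normalization (a stand-in for a real minimizer)."""
    norms = [10.0 * k for k in range(1, 2001)]
    return min(norms, key=lambda a: chi2(a, bins, counts))
```

In the real analysis the minimization runs over the full set of shape parameters of each channel rather than a single normalization, and the error band follows from propagating the fit uncertainties.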
Once we have determined the $z$- as well as the $M_h$-dependence of the $H_1^{\sphericalangle}$ DiFF, we will have to face the flavor decomposition problem. This step will crucially influence the extraction of transversity, see Eq. (\[eq:asydis\]). We conclude by highlighting the importance of DiFFs in the extraction of transversity. We are eagerly looking forward to analyzing the published data on $e^+e^-$ from the BELLE collaboration and to going through the described methodology. We are thankful to the BELLE collaboration for useful information on the data. References {#references .unnumbered} ========== [9]{} J. C. Collins, S. F. Heppelmann and G. A. Ladinsky, Nucl. Phys. B [**420**]{} (1994) 565 \[arXiv:hep-ph/9305309\]. R. L. Jaffe, X. m. Jin and J. Tang, Phys. Rev. Lett. [**80**]{} (1998) 1166 \[arXiv:hep-ph/9709322\]. M. Radici, R. Jakob and A. Bianconi, Phys. Rev. D [**65**]{} (2002) 074031 \[arXiv:hep-ph/0110252\]. A. Bacchetta and M. Radici, Phys. Rev. D [**74**]{} (2006) 114007 \[arXiv:hep-ph/0608037\]. A. Airapetian [*et al.*]{} \[HERMES Collaboration\], JHEP [**0806**]{} (2008) 017 \[arXiv:0803.2367 \[hep-ex\]\]. A. Martin \[COMPASS Collaboration\], Czech. J. Phys. [**56**]{} (2006) F33 \[arXiv:hep-ex/0702002\]. A. Vossen, R. Seidl, M. Grosse-Perdekamp, M. Leitgab, A. Ogawa and K. Boyle, arXiv:0912.0353 \[hep-ex\]. A. Bacchetta and M. Radici, Phys. Rev. D [**67**]{} (2003) 094002 \[arXiv:hep-ph/0212300\]. X. Artru and J. C. Collins, Z. Phys. C [**69**]{} (1996) 277 \[arXiv:hep-ph/9504220\]. D. Boer, R. Jakob and M. Radici, Phys. Rev. D [**67**]{} (2003) 094003 \[arXiv:hep-ph/0302232\]. A. Bacchetta, A. Courtoy and M. Radici, in preparation. [^1]: Variables with an extra “bar” refer to the pair coming from the antiquark. [^2]: R. Seidl’s talk at TMD workshop, ECT$^{\ast}$, June 2010.
--- abstract: 'The high frequency peaked BL Lac with a redshift z=0.116 was discovered in 1997 in the VHE range by the University of Durham Mark 6 telescope in Australia with a flux corresponding to $\sim$0.2 times the Crab Nebula flux [@1]. It was later observed and detected with high significance by the Southern observatories CANGAROO and H.E.S.S., establishing this source as the best studied Southern TeV blazar. Detection from the Northern hemisphere was very difficult due to challenging observation conditions under large zenith angles. In July 2006, the H.E.S.S. collaboration reported an extraordinary outburst of VHE $\gamma$-emission [@2]. During the outburst, the VHE $\gamma$-ray emission was found to be variable on time scales of minutes, with a mean flux of $\sim$7 times the flux observed from the Crab Nebula [@2]. The MAGIC collaboration operates a 17m imaging air Cherenkov Telescope at La Palma (Northern Hemisphere). Follow-up observations of the extraordinary outburst have been triggered in a Target of Opportunity program by an alert from the H.E.S.S. collaboration. The measured spectrum and light curve are presented.' author: - '\' title: 'High zenith angle observations of PKS2155-304 with the MAGIC telescope' --- BL Lacertae objects: individual () — gamma rays: observations — methods: data analysis Introduction ============ The 17m diameter MAGIC telescope on the Canary Island of La Palma is the world’s largest single imaging atmospheric Cherenkov telescope. One of the aims of the MAGIC collaboration is to carry out observations at large zenith angles. Under these special conditions, and given a good telescope sensitivity, a larger effective area is obtained and sources from a large section of the Southern sky can be observed with a threshold of a few hundred GeV ($\approx$100GeV-500GeV, zenith angle dependent). Here we present the results of an analysis of the Crab Nebula data set taken under large zenith angles (60$^\circ$ to 66$^\circ$). 
In addition to the usual image parameters [@3], timing information of the recorded signals was used for the reconstruction of the shower origin and for the background suppression. Furthermore we present a reanalysis of a data set recorded at a zenith angle range between 59$^\circ$ and 64$^\circ$. The same analysis and cuts are used as for the Crab Nebula data set. Analysis ======== All the data analysed in this work were taken in wobble mode, i.e. tracking a sky direction 0.4$^\circ$ off the source position. The background is estimated from the same field-of-view, which improves the background estimation and yields a better time coverage because no extra OFF data have to be taken. Compared to the previous analysis [@4], improvements are obtained thanks to an updated Monte Carlo (MC) sample at high zenith angles, leading to a better agreement. A further improvement is achieved in the image cleaning and in the gamma/hadron separation thanks to the usage of the timing information of the images. In the new analysis we use the Time Image Cleaning [@5]: with a sub-nsec timing resolution of the data acquisition system, and thanks to the parabolic structure of the telescope mirror, a smaller integration window can be chosen. This reduces the number of pixels with signals due to night sky background that survive the image cleaning, and allows reducing the pixel threshold level (i.e. recorded charge) of the image cleaning, leading to a lower analysis energy threshold. For the analysis a robust set of dynamical cuts with a small number of free parameters is used [@6]. In addition, cuts in time parameters are applied, which describe the time evolution along the major image axis and the RMS of the time spread. These two additional parameters lead to a better background suppression, yielding a better sensitivity of the analysis. The energy estimation is done with the Random Forest regression method [@7]. 
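The detection significances quoted in the next section follow formula 17 of Li & Ma (1983), which combines the ON counts, the OFF counts and the ON/OFF normalization $\alpha$. A direct transcription of that formula:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance, Li & Ma (1983), formula 17.

    n_on:  counts in the source (ON) region
    n_off: counts in the background (OFF) region(s)
    alpha: ratio of ON to OFF exposures
    """
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1.0 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))
```

With $N_{\rm on}=434$, $N_{\rm off}\simeq567$ and $\alpha=0.33$ (the Crab data set below) this returns the quoted 12.8$\sigma$; with $N_{\rm on}=1875$, $N_{\rm off}=2538$ and $\alpha=1/3$ it reproduces the 25.3$\sigma$ of the PKS2155-304 detection.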
Results ======= Crab Nebula ----------- The Crab Nebula is one of the best studied celestial objects because of the strong, persistent emission of the Nebula over 21 decades of frequencies. It was the first object detected at TeV energies, by the Whipple collaboration [@8] in the year 1989, and is the strongest steady source of VHE $\gamma$-rays. Due to the stability and the strength of the $\gamma$-ray emission, the Crab Nebula is generally considered the standard candle of TeV astronomy. The measured $\gamma$-ray spectrum extends from 60GeV [@9] up to 80TeV [@10] and appears to be constant over the years (from 1990 to present). ### Observation and Detection In October 2007, the MAGIC telescope took data of the Crab Nebula in a zenith angle range of 60$^\circ$ up to 66$^\circ$. The data were taken under dark sky conditions and in wobble mode. After quality cuts an effective on-time of 2.15hrs is obtained. Using detection cuts (i.e. cuts optimized on the significance of a different Crab Nebula sample), a total of 247 excess events above 187 background events with a scale factor of 0.33 have been detected (see Fig. \[fig:crab\_theta\]). According to formula 17 of Li&Ma [@11], the significance of this signal is 12.8$\sigma$. This corresponds to an analysis sensitivity of 8.7 $\frac{\sigma}{\sqrt{h}}$. Using the same set of cuts we obtain the following sensitivities for integral fluxes ($\Phi$): $$\begin{aligned} \Phi(E>0.4\,\mathrm{TeV}) &\Rightarrow& 5.7\% \mathrm{\,\,Crab\,\,in\,\,50\,hrs} \\ \Phi(E>0.63\,\mathrm{TeV}) &\Rightarrow& 5.6\% \mathrm{\,\,Crab\,\,in\,\,50\,hrs} \\ \Phi(E>1.0\,\mathrm{TeV}) &\Rightarrow& 5.9\% \mathrm{\,\,Crab\,\,in\,\,50\,hrs} \\ \Phi(E>1.5\,\mathrm{TeV}) &\Rightarrow& 6.8\% \mathrm{\,\,Crab\,\,in\,\,50\,hrs}\end{aligned}$$ ![\[fig:crab\_theta\]The On-Source and normalized background distribution of $\Theta^{2}$. The On-Source events are shown as black crosses and the background as the gray shaded region. 
2.22hrs of Crab data show an excess with a significance of 12.8$\sigma$.](./icrc1195_fig01.eps){width="49.00000%"} ### Differential Energy Spectrum The differential energy spectrum can be described well by a power law: $$\begin{aligned} \frac{\mathrm{d}N}{\mathrm{d}E}=(2.7\pm0.4)\cdot10^{-7}\left(\frac{\mathrm{E}}{\mathrm{TeV}}\right)^{-2.46\pm0.13}\left(\frac{\mathrm{phe}}{\mathrm{TeV}\,\mathrm{s}\,\mathrm{m}^{2}}\right). \nonumber\end{aligned}$$ The spectrum is shown in Fig. \[fig:crab\_spec\]. The gray band represents the range of results obtained by varying the total cut efficiency between 40% and 70%. For comparison, the Crab Nebula spectrum from data taken at low zenith angles is drawn as a dashed line [@9]. A very good agreement has been found. ![\[fig:crab\_spec\]Differential energy spectrum of the Crab Nebula. Black line: power law fit to the data, gray band: systematic uncertainties, dashed line: published data taken at low zenith angles [@9].](./icrc1195_fig02.eps){width="49.00000%"} PKS2155-304 ----------- Just as the Crab Nebula is the so-called standard candle of $\gamma$-ray astronomy, the blazar PKS2155-304 is the so-called lighthouse of the Southern hemisphere. The high frequency peaked BL Lac PKS2155-304 at a redshift of z=0.116 was discovered in the VHE range by the University of Durham Mark 6 $\gamma$-ray telescope (Australia) in 1997 with a flux corresponding to $\sim$0.2 times the Crab Nebula flux [@1]. It was later observed and detected with high significance by the Southern observatories CANGAROO [@12] and H.E.S.S. [@13], establishing this source as the best studied Southern TeV blazar. Detection from the Northern hemisphere is difficult due to challenging observation conditions under large zenith angles. In July 2006, the H.E.S.S. collaboration reported an extraordinary outburst of VHE $\gamma$-emission [@2]. 
During this outburst, the $\gamma$-ray emission was found to be variable on time scales of minutes with a mean flux of $\sim$7 times the flux observed from the Crab Nebula. Follow-up observations of the outburst by the MAGIC telescope have been triggered in a Target of Opportunity program by an alert from the H.E.S.S. collaboration [@4]. ### Observation and Detection The MAGIC telescope observed the blazar from 28 July to 2 August 2006 in a zenith angle range from 59$^\circ$ to 64$^\circ$. The data were taken under dark sky conditions and in wobble mode. After quality cuts a total effective on-time of 8.7hrs is obtained. For the detection of PKS2155-304, the same cuts are used as for the detection of the Crab Nebula. Three OFF regions are used and 1029 excess events above 846 background events are detected. A significance of 25.3 standard deviations is obtained. The corresponding $\Theta^{2}$-plot is presented in Fig. \[fig:2155\_theta\]. ![\[fig:2155\_theta\]The On-Source and normalized background distribution of $\Theta^{2}$. The notation is the same as in figure \[fig:crab\_theta\]. A clear excess with a significance of more than 25 standard deviations for a source at the position of PKS2155-304 is found.](./icrc1195_fig03.eps){width="49.00000%"} ### Differential Energy Spectrum The differential energy spectrum is shown in Fig. 4 as a black line together with the measured spectrum of H.E.S.S. during the strong outburst [@2] (dashed line). Note that H.E.S.S. and MAGIC data are not simultaneous. The spectral points obtained in this analysis are fitted from 400GeV on, because at lower energies H.E.S.S. reported a change of the slope ($-3.53\pm0.05$ above 400GeV to $-2.7\pm0.06$ below 400GeV). 
The fitted MAGIC data points are consistent with a power law: $$\begin{aligned} \frac{\mathrm{d}N}{\mathrm{d}E}=(1.8\pm0.2)\cdot10^{-7}\left(\frac{\mathrm{E}}{\mathrm{TeV}}\right)^{-3.5\pm0.2}\left(\frac{\mathrm{phe}}{\mathrm{TeV}\,\mathrm{s}\,\mathrm{m}^{2}}\right) \nonumber\end{aligned}$$ with a $\chi^{2}$ fit probability of 81%. Above 400GeV, the energy spectrum measured by H.E.S.S. from the preceding flare of PKS2155-304 is one order of magnitude higher than the spectrum measured by MAGIC, but the spectral slope ($-3.53\pm0.05$) is consistent within the statistical errors. ![Differential energy spectrum (black line) together with systematic errors due to varying cut efficiencies (gray band). The black dashed line corresponds to the H.E.S.S. measurement during the flare.](./icrc1195_fig04.eps){width="49.00000%"} ![\[fig:2155\_deab\]Measured and intrinsic differential energy density of PKS2155-304. The effect of the EBL is taken into account by using the recent model of Kneiske adopted by MAGIC [@3c279]. The power law fit to the observed spectrum is given by the dotted line, while the curved fit to the intrinsic spectrum is shown by the solid line. The fitted peak position is located at $E_{peak}=(672^{+104}_{-157})\,\mathrm{GeV}$. For comparison the Crab Nebula spectrum is shown as the dashed line.](./icrc1195_fig05.eps){width="49.00000%"} ### Intrinsic Energy Spectrum The VHE photons of PKS2155-304 interact with the low-energy photons of the extragalactic background light ([@gould; @hauser]). The predominant reaction $\gamma_{VHE}+\gamma_{EBL}\rightarrow e^{+}e^{-}$ leads to an attenuation of the intrinsic spectrum $\mathrm{d}N/\mathrm{d}E_{intr}$ that can be described by $$\begin{aligned} \mathrm{d}N/\mathrm{d}E_{obs}=\mathrm{d}N/\mathrm{d}E_{intr} \cdot \exp[-\tau_{\gamma \gamma}(E, z)] \nonumber\end{aligned}$$ with the observed spectrum $\mathrm{d}N/\mathrm{d}E_{obs}$, and the energy dependent optical depth $\tau_{\gamma \gamma}(E, z)$. 
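Undoing the attenuation in the formula above amounts to multiplying each observed spectral point by $\exp[+\tau_{\gamma \gamma}(E, z)]$. The sketch below uses a small table of made-up optical depths; the actual values come from the Kneiske et al. model and are not reproduced here:

```python
import math

# Hypothetical optical depths tau_gamma_gamma(E, z=0.116) at a few
# energies (GeV). Placeholder numbers, NOT the Kneiske et al. values.
TAU_TABLE = {200.0: 0.3, 400.0: 0.6, 800.0: 1.1, 1600.0: 1.9}

def deabsorb(energy_gev, observed_flux):
    """Intrinsic flux dN/dE_intr = dN/dE_obs * exp(+tau(E, z))."""
    tau = TAU_TABLE[energy_gev]
    return observed_flux * math.exp(tau)
```

Because $\tau$ grows with energy, the correction steepens toward high energies, which is why a spectrum that looks like a featureless power law when observed can reveal curvature, and a peak, after deabsorption.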
Here we use the recent model of Kneiske et al. that has been adopted by MAGIC [@kneiske; @3c279]. The measured spectrum and the reconstructed deabsorbed spectrum are shown in Fig. \[fig:2155\_deab\]. For comparison, the Crab Nebula spectrum is also shown. A power law fit to the deabsorbed spectrum results in a spectral index of $2.4\pm0.1$. However, the fit probability is rather low (4%), which motivates a higher-order fit function. We have chosen a curved power-law fit (a parabolic shape in log-log representation) of the following form: $\mathrm{d}N/\mathrm{d}E=N_{0}(E/1\mathrm{TeV})^{-\alpha + \beta \cdot \ln(E/1\mathrm{TeV})}$. The best fit parameters are: , $\beta=-0.5\pm0.2$ (solid line in Fig. \[fig:2155\_deab\]) and the fit probability is 77%. The observed curvature is indicative of a maximum in the energy density and is usually interpreted as due to inverse Compton (IC) scattering. The fitted peak position is determined to be at $E_{peak}=(672^{+104}_{-157})\,\mathrm{GeV}$. ### Light curve The integral light curves above 400GeV shown in Fig. \[fig:lc\] have a binning of one flux point per night (bottom panel) and a binning of two runs, which corresponds to about 10 minutes per bin (top panel). Significant detections are obtained in most of the time bins. A significant intranight variability is found for the second night, MJD53945 (29 July 2006), giving a probability for a constant flux of less than $5\cdot10^{-9}$. For the other nights, no significant intranight variability is found. In the lower panel of Fig. \[fig:lc\], a night-by-night light curve is shown. A fit by a constant to the run-by-run light curve results in a chance probability of less than $10^{-12}$. However, a fit by a constant to the night-by-night light curve results in a chance probability of $7\times10^{-2}$. We therefore conclude that there is significant variability on time scales ranging from days (largest scale we probed) down to 20 minutes (shortest scale we probed). 
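Returning to the curved fit above: the peak of the energy density follows in closed form. With $x=\ln(E/1\,\mathrm{TeV})$, $\ln(E^{2}\,\mathrm{d}N/\mathrm{d}E)$ is, up to a constant, the parabola $(2-\alpha)x+\beta x^{2}$, which is maximal at $x=(\alpha-2)/(2\beta)$. A minimal sketch — $\alpha\approx2.4$ is an assumption here (the fitted value of $\alpha$ is not quoted in the text; 2.4 is the index of the plain power-law fit), while $\beta=-0.5$ is taken from the curved fit:

```python
import math

def peak_energy_gev(alpha, beta):
    """Peak of E^2 dN/dE for dN/dE ~ (E/TeV)^(-alpha + beta*ln(E/TeV)).
    In x = ln(E/TeV), (2 - alpha)*x + beta*x^2 is maximal at x = (alpha-2)/(2*beta)."""
    x_peak = (alpha - 2.0) / (2.0 * beta)
    return 1000.0 * math.exp(x_peak)  # TeV -> GeV

# alpha = 2.4 is an illustrative assumption (see above); beta = -0.5 from the fit.
print(round(peak_energy_gev(alpha=2.4, beta=-0.5)))  # ~670 GeV
```

These inputs land at about 670 GeV, consistent with the quoted $E_{peak}=(672^{+104}_{-157})\,\mathrm{GeV}$.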
![image](./icrc1195_fig06.eps){width="90.00000%"} Conclusion ========== A study of high zenith angle (60$^\circ$ - 66$^\circ$) observations with the MAGIC telescope was performed. A new Time Image Cleaning and also time parameters were used for the background suppression, which led to a significant improvement of the sensitivity. From Crab Nebula observations a sensitivity of 5.7% of the Crab Nebula flux for 50hrs of observations above 0.4TeV has been determined. The differential energy spectrum of the Crab Nebula is in excellent agreement with the published data at lower zenith angles. This improved analysis is used to reanalyze data of PKS2155-304 taken with MAGIC in 2006. The energy spectrum from 400GeV up to 4TeV has a spectral index of ($-3.5\pm0.2$) and shows no change of spectral slope with flux state. It agrees well with the results of H.E.S.S. [@2] and CANGAROO [@12]. Furthermore, we corrected the measured spectrum for the effect of the EBL absorption using the recent model of Kneiske et al. The resulting intrinsic spectrum shows a clear curvature. The fitted peak position in the energy density distribution is at $E_{peak}=(672^{+104}_{-157})\,\mathrm{GeV}$. The light curves show significant variability on daily as well as on intra-night time scales. Finally, we conclude that high zenith angle observations with the MAGIC telescope have proven to yield high quality spectra and light curves at a low energy threshold. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank the Instituto de Astrofisica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The support of the German BMBF and MPG, the Italian INFN and Spanish MICINN is gratefully acknowledged. This work was also supported by ETH Research Grant TH 34/043, by the Polish MNiSzW Grant N N203 390834, and by the YIP of the Helmholtz Gemeinschaft. 
D.M.’s research is supported by a Marie Curie Intra European Fellowship within the 7th European Community Framework Programme. [99]{} P. M. Chadwick et al., *ApJ*, 513:161-167, 1999. F. A. Aharonian et al. (H.E.S.S. Coll.), *ApJ*, 664:L71-L74, 2007. A. M. Hillas, In Proc. 19th Int. Cosm. Ray Conf., La Jolla, USA, 3:445-448, 1985. D. Mazin and E. Lindfors for the MAGIC Coll., In Proc. 30th Int. Cosm. Ray Conf., Merida, Mexico, 3:1033-1036, 2008. E. Aliu et al., *Astropart. Phys.*, 30:293, 2009. B. Riegel and T. Bretz, In Proc. 29th Int. Cosm. Ray Conf., Pune, India, 4:315, 2005. J. Albert et al. (MAGIC Coll.), *NIMA*, 588:424-432, 2008. T. Weekes et al., *ApJ*, 342:379-395, 1989. J. Albert et al. (MAGIC Coll.), *ApJ*, 674:1037-1055, 2008. F. A. Aharonian et al. (HEGRA Coll.), *ApJ*, 539:317-324, 2000. T. P. Li and Y. Q. Ma, *ApJ*, 272:317-324, 1983. Sakamoto et al. (CANGAROO Coll.), *ApJ*, 676:113-120, 2008. F. A. Aharonian et al. (H.E.S.S. Coll.), *A&A*, 430:865-875, 2005. R. J. Gould and G. P. Schréder, *Phys. Rev. Lett.*, 16:252, 1966. M. G. Hauser and E. Dwek, *ARA&A*, 39:249, 2001. T. Kneiske, K. Mannheim and D. Hartmann, *A&A*, 386:1, 2002. E. Aliu et al. (MAGIC Coll.), *Science*, 320:1752, 2008.
--- abstract: 'We prove the existence of the global attractor in $\dot H^s$, $s > 11/12$, for the weakly damped and forced mKdV on the one-dimensional torus. The existence of a global attractor below the energy space has not been known, though the global well-posedness below the energy space is established. We directly apply the I-method to the damped and forced mKdV, because the Miura transformation does not work for the mKdV with damping and forcing terms. We need to make a close investigation into the trilinear estimates involving resonant frequencies, which are different from the bilinear estimates corresponding to the KdV.' author: - | **PRASHANT GOYAL\ ** title: '**GLOBAL ATTRACTOR FOR WEAKLY DAMPED, FORCED mKdV EQUATION BELOW ENERGY SPACE**' --- Introduction ============ We consider the modified Korteweg-de Vries (in short, mKdV) equation: $$\begin{aligned} \label{intro10} &\partial_{t}u + \partial_{x}^{3}u \pm 2\partial_{x} u^{3} +\gamma u = f,\hspace{2.5mm} t>0, \hspace{1.5mm} x \in \mathbb{T}, \\ &u(x,0) = u_{0}(x) \in \dot H^{s}(\mathbb{T}), \label{intro9}\end{aligned}$$ where $\mathbb{T}$ is the one-dimensional torus, $\gamma >0$ is the damping parameter and $f \in \dot H^{1}(\mathbb{T})$ is the external forcing term, which does not depend on $t$. In equation (\[intro10\]), “$+$" and “$-$" represent the focussing and defocussing cases, respectively. We consider the inhomogeneous Sobolev spaces $H^{s} = \{f\hspace{2mm} | \sum_{k \in \mathbb{Z}} \langle k \rangle^{2s}|\hat{f}(k)|^{2} < \infty \}$ where $\langle \cdot \rangle = (1+|\cdot|)$ and the homogeneous Sobolev spaces $\dot H^s = \{ f \in H^s| \hat{f}(0) = 0 \}$. The mKdV equation models the propagation of nonlinear water waves in the shallow water approximation. We only consider the focussing case, as the defocussing case follows by the same argument. 
It is important to work with the inhomogeneous Sobolev norm, since Proposition \[TL Main result\] does not hold for the homogeneous one (for more details, see the appendix by Nobu Kishimoto). From the arguments in [@G02], [@G01] and [@G03], the existence of a global attractor for equations (\[intro10\])-(\[intro9\]) directly follows for $s\geq 1$ in $H^{s}$. In the present paper, we prove the existence of a global attractor below the energy space in $\dot {H}^{s}(\mathbb{T})$ for $1 > s >11/12.$ Miura [@M01],[@M02] and [@ME] studied the properties of solutions to the Korteweg-de Vries (KdV) equation and its generalization. Miura [@M01] established the Miura transformation between the solutions of mKdV and KdV. Indeed, if $u$ satisfies equation (\[intro10\]) with the $``+"$ sign, then the function defined by $$p = \partial_{x}u + iu^{2}$$ satisfies the KdV equation, where $i = \sqrt{-1}$. Colliander, Keel, Staffilani, Takaoka and Tao [@CKSTT02] presented the $I$-method and proved the existence of global solutions for mKdV in the Sobolev space $H^{s}(\mathbb{T})$ for $s \geq 1/2$ by using the Miura transformation. However, the Miura transformation does not work well for the weakly damped and forced mKdV. In fact, if we consider the mKdV and KdV equations with the damping and forcing terms and apply the Miura transformation, we get $$\begin{aligned} \label{intro3} p_{t} + p_{xxx} - 6ipp_{x} + \gamma p = (2iu + \partial_{x})f - i\gamma u^{2}.\end{aligned}$$ It is clear from (\[intro3\]) that the Miura transformation does not transform the solution of the mKdV equation to a solution of the KdV equation. For this reason, the results on the damped and forced KdV cannot be directly converted to those on the damped and forced mKdV by the Miura transformation, unlike the case without damping and forcing terms. The study of the global attractor is important as it characterizes the global behaviour of all solutions. 
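The computation leading to (\[intro3\]) can be verified symbolically. A minimal sketch using SymPy — the sign convention $2\partial_{x}u^{3}=6u^{2}\partial_{x}u$ matches (\[intro10\]) with the “$+$” sign, and $f$ is taken time-independent, as in the text:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
gamma = sp.Symbol('gamma', positive=True)
u = sp.Function('u')(x, t)
f = sp.Function('f')(x)                     # forcing, independent of t

# damped, forced (focussing) mKdV solved for u_t:
u_t = -u.diff(x, 3) - 6*u**2*u.diff(x) - gamma*u + f

# Miura transformation p = u_x + i u^2; the chain rule gives p_t:
p = u.diff(x) + sp.I*u**2
p_t = u_t.diff(x) + 2*sp.I*u*u_t

lhs = p_t + p.diff(x, 3) - 6*sp.I*p*p.diff(x) + gamma*p
rhs = 2*sp.I*u*f + f.diff(x) - sp.I*gamma*u**2
assert sp.expand(lhs - rhs) == 0            # reproduces (intro3)
```

The residual terms $(2iu+\partial_{x})f - i\gamma u^{2}$ cancel only when $\gamma=0$ and $f=0$, which is exactly why the damped, forced problem cannot be reduced to KdV.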
The asymptotic behaviour of solutions below the energy space has not been known, though the global well-posedness below the energy space is already proved for the Cauchy problem of -. To study the asymptotic behaviour of the solution of the mKdV equation below the energy space, we need to study the global attractor below the energy space. Chen, Tian and Deng [@CLX01] used Sobolev inequalities and *a priori* estimates on $u_{x},u_{xx}$ derived by the energy method to show the existence of a global attractor in $H^{2}.$ Dlotko, Kania and Yang [@TKY] considered a more general KdV equation and showed the existence of a global attractor in $H^{1}.$ It is instructive to look at known results on KdV, since KdV has been more extensively studied than mKdV. Tsugawa [@T] proved the existence of a global attractor for the KdV equation in $\dot{H}^{s}$ for $0 >s >-3/8$ by using the $I$-method. Later, Yang [@X] closely investigated Tsugawa’s argument to bring down the lower bound from $s >-3/8$ to $s \geq -1/2$. Though mKdV has many common properties with KdV, there is a big difference between KdV and mKdV in the structure of resonance. For KdV, we consider the homogeneous Sobolev spaces instead of the inhomogeneous ones, which eliminates the resonant frequencies in the quadratic nonlinearity (see Bourgain [@B]). On the other hand, for the homogeneous mKdV equation, to eliminate the resonant frequencies in the cubic nonlinearity, we need to consider the reduced equation (or the renormalized equation) $$\begin{aligned} \label{intro 4} \partial_{t}u + \partial_{x}^{3}u + 6\left(u^{2} - \frac{1}{2\pi}\|u\|^{2}_{L^{2}}\right)\partial_{x}u = 0.\end{aligned}$$ Without damping and forcing terms, the $L^2$ norm of the solution is conserved. So, the transformation from the original mKdV equation to the reduced mKdV equation is just a translation with constant velocity. But this is not the case with the damped and forced mKdV. 
The resonant structure of the cubic nonlinearity is quite different from that of the quadratic nonlinearity. Therefore, in the mKdV case, we need to directly handle the resonant trilinear estimate as well as the non-resonant trilinear estimate. In this respect, it seems difficult to employ a modified energy similar to that used in [@T],[@X]. Especially, the scaling argument is one of the main ingredients of the $I$-method, so we also need to keep track of how the estimates depend on the scaling parameter $\lambda$. Hence, the following questions naturally arise: How should we treat the nonlinearity of the mKdV equation with the damping and forcing terms? When we cannot use the Miura transformation, how should we treat the mKdV equation? To deal with such issues, we apply the $I$-method directly to - in the present paper and prove the following result: \[intro theorem\] Assume $11/12 < s < 1$ and $u_{0} \in \dot H^{s}$. Let $S(t)$ be the semigroup generated by the solution of mKdV. Then, there exist two operators $L_{1} (t)$ and $L_{2} (t)$ such that $$\begin{aligned} &S(t) u_{0} = L_{1}(t)u_{0} + L_{2}(t) u_{0}, \\ &\sup \limits_{t>T_{1}} \|L_{1}(t)u_{0}\|_{H^{1}} < K, \\ &\|L_{2}(t)u_{0}\|_{H^{s}} < K \exp(- \gamma (t-T_{1})), \hspace{3mm} \forall \hspace{1mm} t > T_{1},\end{aligned}$$ where $K=K(\|f\|_{H^{1}},\gamma)$ and $T_{1}=T_{1}(\|f\|_{H^{1}},\|u_{0}\|_{H^{s}}, \gamma).$ In Theorem \[intro theorem\], the map $L_{1}$ is uniformly compact and $L_{2}$ uniformly converges to $0$ in $H^{s}$. Therefore, from [@R Theorem $1.1.1$], we get the existence of the global attractor. 
For the proof of Theorem \[intro theorem\], we consider the following equation: $$\begin{aligned} \label{intro1} &\partial_{t}u + \partial_{x}^{3}u + 6\left(u^{2} - \frac{1}{2\pi}\|u\|^{2}_{L^{2}}\right)\partial_{x}u + \gamma u = F \hspace{4mm} t > 0, x \in \mathbb{T}, \\ &u(x,0) = u_{0}(x) \label{intro2}\end{aligned}$$ where $$F = f \left(x+ \int\limits_{0}^{t} \|u(\tau)\|_{L^{2}}^{2}d\tau \right).$$ If we put $q(x,t) = u(x+ \int\limits_{0}^{t} \|u(\tau)\|_{L^{2}}^{2}d\tau, \hspace{.5mm} t),$ then $q$ satisfies Equations (\[intro1\])-(\[intro2\]). We divide this paper into six sections. In Section $2$, we describe the preliminaries required for the present paper. Section $3$ describes the proof of the trilinear estimate by using the Strichartz estimate for the mKdV equation proved by J. Bourgain [@B]. Section $4$ contains *a priori* estimates. We describe the proof of Theorem \[intro theorem\] in Section $5.$ Finally, in Section $6$, some multilinear estimates are proved. Preliminaries ============= In this section, we present the notations and definitions which are used throughout this article. Notations --------- In this subsection, we list the notations which we use throughout this paper. $C,c$ are various time-independent constants which depend on $s$ unless specified. $a+$ and $a-$ represent $a+\epsilon$ and $a-\epsilon$, respectively, for arbitrarily small $\epsilon > 0.\hspace{1.5mm} A \lesssim B$ denotes an estimate of the form $A \leq CB$. Similarly, $A \sim B$ denotes $A \lesssim B$ and $A \gtrsim B.$ Define $(dk)_{\lambda}$ to be the normalized counting measure on $\mathbb{Z}/\lambda$: $$\int \phi(k) (dk)_{\lambda} = \frac{1}{\lambda} \sum\limits_{k \in \mathbb{Z}/\lambda} \phi(k).$$ Let $\hat{f}(k)$ and $\tilde{f}(k,\tau)$ denote the Fourier transforms of $f(x,t)$ in $x$ and in $x$ and $t$, respectively. 
We define the Sobolev space $H^{s}([0,\lambda])$ with the norm $$\|f\|_{H^{s}} = \|\hat{f}(k)\langle k \rangle^{s}\|_{L^{2}((dk)_{\lambda})},$$ where $\langle \cdot \rangle = (1 + |\cdot|).$ For details see [@CKSTT02],[@T]. We define the space $X^{s,b}$ equipped with the norm $$\|u\|_{X^{s,b}} = \| \langle k \rangle^{s} \langle \tau - 4\pi^{2}k^{3} \rangle^{b} \tilde{u}(k,\tau)\|_{L^{2}((dk)_{\lambda}d\tau)}.$$ The KdV and mKdV equations are often studied in the space $X^{s,\frac{1}{2}}$, but this norm barely fails to control the $L^{\infty}_{t}H^{s}_{x}$ norm (see [@B],[@CKSTT02],[@T]). To ensure the continuity of the solution, we define a slightly smaller space with the norm $$\|u\|_{Y^{s}} = \|u\|_{X^{s,\frac{1}{2}}} + \|\langle k \rangle^{s}\tilde{u}(k,\tau)\|_{L^{2}((dk)_{\lambda})L^{1}(d\tau)}.$$ The space $Z^{s}$ is defined via the norm $$\|u\|_{Z^{s}} = \|u\|_{X^{s,-\frac{1}{2}}} + \| \langle k \rangle^{s} \langle \tau - 4\pi^{2}k^{3} \rangle^{-1} \tilde{u}(k,\tau)\|_{L^{2}((dk)_{\lambda})L^{1}(d\tau)}.$$ For the time interval $[t_{1},t_{2}],$ we define the restricted spaces $X^{s,b}$ and $Y^{s}$ equipped with the norms $$\begin{aligned} \|u\|_{X^{s,b}_{([0,\lambda] \times [t_{1},t_{2}])}} &= \inf \lbrace \|U\|_{X^{s,b}} : U|_{([0,\lambda] \times [t_{1},t_{2}])} = u \rbrace, \\ \|u\|_{Y^{s}_{([0,\lambda] \times [t_{1},t_{2}])}} &= \inf \lbrace \|U\|_{Y^{s}} : U|_{([0,\lambda] \times [t_{1},t_{2}])} = u \rbrace.\end{aligned}$$ We state the mean value theorem as follows: If $a$ is controlled by $b$ and $|k_{1}| \ll |k_{2}|,$ then $$a(k_{1} + k_{2}) - a(k_{2}) = O\left(|k_{1}|\frac{b(k_{2})}{|k_{2}|} \right).$$ For details see [@CKSTT02 Section 4]. Rescaling --------- In this subsection, we rescale the mKdV equation. 
We can rewrite equations (\[intro1\])-(\[intro2\]) in $\lambda$-rescaled form as follows: $$\begin{aligned} \label{rescaled1} &\partial_{t} v + \partial_{xxx} v + 6\left(v^{2} - \frac{1}{2\pi}\|v\|_{L^{2}}^{2} \right)\partial_{x}v + \lambda^{-3} \gamma v = \lambda^{-3} g, \\ &v(x,t_{0}) = v_{t_{0}}(x), \label{rescaled2}\end{aligned}$$ where $$\begin{aligned} g(x,t) &= \lambda^{-1}F(\lambda^{-1}x,\lambda^{-3}t), \\ v_{t_{0}}(x) &= \lambda^{-1}u(\lambda^{-1}x,\lambda^{-3}t_{0}),\end{aligned}$$ for initial time $t_{0}.$ If $u$ is the solution of the equations (\[intro1\])-(\[intro2\]), then $v(x,t) = \lambda^{-1}u(\lambda^{-1}x,\lambda^{-3}t)$ is the solution of the equations (\[rescaled1\])-(\[rescaled2\]). Rescaling is helpful in proving the local-in-time result as well as the *a priori* estimate. I-Operator ---------- We define an operator $I$ which plays an important role in the $I$-method. Let $\phi : \mathbb{R} \rightarrow \mathbb{R}$ be a smooth monotone $\mathbb{R}$-valued function defined as: $$\phi(k) = \begin{cases} 1 &|k| < 1, \\ |k|^{s-1} &|k| > 2. \end{cases}$$ Then, for $m(k) = \phi(\frac{k}{N}),$ we have $$m(k) = \begin{cases} 1 &|k| < N, \\ |k|^{s-1}N^{1-s} &|k| > 2N, \end{cases}$$ where we fix $N$ to be a large cut-off. We define the operator $I$ as: $$\widehat{Iu}(k) = m(k)\hat{u}(k).$$ We can rescale the operator $I$ as follows: $$\widehat{I'u}(k) = m'(k)\hat{u}(k),$$ where $m'(\frac{k}{\lambda}) = m(k).$ Let $N' = \frac{N}{\lambda}.$ Then $$m'(k) = \begin{cases} 1 &|k| < N', \\ |k|^{s-1}N'^{(1-s)} &|k| > 2N'. \end{cases}$$ We use the rescaled $I$-operator for proving the local-in-time results for the mKdV equation. Moreover, the proof of the *a priori* estimate also uses the same operator. Strichartz Estimate ------------------- The Strichartz estimate plays an important role in the proof of the trilinear estimate. Bourgain [@B] proved the $L^{4}$ Strichartz estimate for the mKdV equation. In the present article, we use the same estimate. 
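As a side remark, the multiplier $m$ defined above is easy to sketch numerically. The text pins $m$ down only for $|k| < N$ and $|k| > 2N$; the sketch below simply extends the value $1$ through the transition region $N \le |k| \le 2N$ — an illustrative choice, not the smooth monotone interpolation actually used:

```python
import numpy as np

def I_multiplier(k, N, s):
    """m(k) = 1 for |k| < N and |k|^(s-1) N^(1-s) for |k| > 2N.
    The transition region N <= |k| <= 2N is set to 1 here (illustrative only;
    the actual m interpolates smoothly and monotonically between the regimes)."""
    k = np.abs(np.asarray(k, dtype=float))
    m = np.ones_like(k)
    high = k > 2 * N
    m[high] = k[high] ** (s - 1.0) * N ** (1.0 - s)
    return m

# Low modes pass untouched; high modes are damped by a factor (|k|/N)^(s-1):
m = I_multiplier([10, 1000], N=100, s=11/12)
```

For $|k| > 2N$ one has $m(k)\langle k \rangle \sim N^{1-s}\langle k \rangle^{s}$, which is the usual mechanism behind the bound $\|Iu\|_{H^{1}} \lesssim N^{1-s}\|u\|_{H^{s}}$ used in the $I$-method.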
We recall the following result: \[st1\] Let $b > \frac{1}{3}.$ Then, we have $$\|u\|_{L^{4}(\mathbb{R} \times \mathbb{T})} \lesssim C \|u\|_{X^{0,b}}.$$ Local Well-posedness -------------------- In this subsection, we state the local-in-time result, which can be proved by using the contraction mapping principle. Let $\eta(t) \in C_{0}^{\infty}$ be a cut-off function such that: $$\label{lw:4} \eta(t) = \begin{cases} 1 & \text{if}\ |t| \leq 1, \\ 0 & \text{if}\ |t| >2. \end{cases}$$ Suppose that $$D_{\lambda}(t)f(x) =\int e^{2i\pi k x}e^{-(2i\pi k)^{3}t} \hat{f}(k)(dk)_{\lambda}.$$ We assume the following well-known lemmas: \[lw : lemma1\] $$\| \eta (t) D_{\lambda}(t)w\|_{X^{1,\frac{1}{2}}} \leq \|w\|_{H^{1}}.$$ \[lw : lemma2\] Let $F \in {X^{1,-\frac{1}{2}}}.$ Then $$\| \eta (t)\int_{0}^{t} D_{\lambda}(t-t')F(t')dt'\|_{Y^{1}} \leq \|F\|_{Z^{1}}.$$ For the proof of Lemmas \[lw : lemma1\] and \[lw : lemma2\], see [@CKSTT02]. \[local wellposdeness\] Let $\frac{1}{2} \leq s < 1.$ Then the IVP (\[rescaled1\])-(\[rescaled2\]) is locally well-posed for the initial data $v_{t_{0}}$ satisfying $I'v_{t_{0}} \in \dot H^{1}(\mathbb{T})$ and $I'g \in \dot H^{1}(\mathbb{T}).$ Moreover, there exists a unique solution on the time interval $[t_{0},t_{0} + \delta]$ with the lifespan $\delta \sim (\|I'v_{t_{0}}\|_{H^{1}} + \lambda^{-3}\|I'g\|_{H^{1}} + \gamma \lambda^{-3})^{-\alpha}$ for some $\alpha > 0$, and the solution satisfies $$\begin{aligned} \|I'v\|_{Y^{1}} &\lesssim \|I'v_{t_{0}}\|_{H^{1}} + \lambda^{-3}\|I'g\|_{H^{1}}, \\ \sup\limits_{t_{0} \leq t \leq t_{0} + \delta}^{}\|I'v(t)\|_{H^{1}} &\lesssim \|I'v_{t_{0}}\|_{H^{1}} + \lambda^{-3}\|I'g\|_{H^{1}}. 
\end{aligned}$$ Note that $$\begin{aligned} g(x,t) &= \lambda^{-1}F(\lambda^{-1}x,\lambda^{-3}t) \\ &= \lambda^{-1}f\left(x + \frac{1}{2\pi}\int\limits_{0}^{t} \|I'v\|_{L^{2}}^{2} \right)\end{aligned}$$ The proof of Proposition \[local wellposdeness\] follows along the same lines as that for the KdV equation given in [@T], with the help of the trilinear estimate given in Proposition \[TL for L\^[1]{}\]. The only difference arises in the estimate of $g$, as it depends on the unknown $u$. To deal with this issue, we define a new metric. Indeed, let $$B= \lbrace w\in X^{1,\frac{1}{2}} : \|w\|_{X^{1,\frac{1}{2}}} \lesssim C\left( \|I'v_{0}\|_{H^{1}} + \lambda^{-3}\|I'g\|_{H^{1}} \right) \rbrace$$ and define the metric $$d(w,w') = \|w-w'\|_{X^{0,\frac{1}{2}}} + \|v-v'\|_{X^{0,\frac{1}{2}}},$$ for $I'v = w.$ As ${X^{0,\frac{1}{2}}}$ is reflexive, the ball $B$ is complete with respect to the metric $d$ (for details, see [@Kato 9.14 and Lemma 7.3]). Therefore, it is enough to show $$\begin{aligned} \|N(v,w) - N(v',w')\|_{Y^{0}} &\lesssim \|\eta(t)(P(v,w)-P(v',w'))\|_{Z^{0}} \\ &\lesssim \left(\gamma \lambda^{-3} + \lambda^{0+}\left( \|I'v_{0}\|_{H^{1}} + \lambda^{-3}\|I'g\|_{H^{1}} \right)^{2} +\lambda^{-3}\|I'g\|_{H^{1}} \right) \\ &\left( \|w-w'\|_{X^{0,\frac{1}{2}}} + \|v-v'\|_{X^{0,\frac{1}{2}}} \right),\end{aligned}$$ where $$N(w)= \eta(t)D_{\lambda}(t)I'v_{0} - \eta(t)\int D_{\lambda}(t-t')\eta(t')P(t')dt'$$ with $$P(v,w) = 6I'\left(v^{2} - \frac{1}{2\pi}\|v\|^{2}_{L^{2}}\right)\partial_{x}v + \gamma \lambda^{-3}w - \lambda^{-3}I'g.$$ As the metric consists of both $w$ and $v$ terms, we consider the following pair of equations: $$\begin{aligned} &\partial_{t} v + \partial_{xxx} v + 6\left(v^{2} - \frac{1}{2\pi}\|v\|_{L^{2}}^{2} \right)\partial_{x}v + \lambda^{-3} \gamma v = \lambda^{-3} g, \label{local equation 2} \\ &\partial_{t} w + \partial_{xxx} w + 6I'\left(v^{2} - \frac{1}{2\pi}\|v\|_{L^{2}}^{2} \right)\partial_{x}(I')^{-1}w + \lambda^{-3} \gamma w = \lambda^{-3} I'g. 
\label{Local equation 1}\end{aligned}$$ The estimate of $v$ in $H^{s}$ follows from that of $w$ in $H^{1}$ because $\|v\|_{H^{s}} \lesssim \|w\|_{H^{1}}$. Therefore, we do not need to assume an extra condition on the ball for the variable $``v"$. Let $$\begin{aligned} g'(x,t) &= \lambda^{-1}F(\lambda^{-1}x,\lambda^{-3}t) \\ &= \lambda^{-1}f\left(x + \frac{1}{2\pi}\int\limits_{0}^{t} \|I'v'\|_{L^{2}}^{2} \right)\end{aligned}$$ At first, we consider the external forcing term for Equation as: $$\begin{split} &\|I'g - I'g'\|_{X^{0,-\frac{1}{2}}} \lesssim \|I'g-I'g'\|_{L^{2}} \\ =&\Biggl|\Biggl|\lambda^{-1}I'f\left(\lambda^{-1}x + \int_{0}^{\lambda^{-3}t}\|\lambda v(\lambda \cdot,\lambda^{3}\tau )\|_{L^{2}}^{2}d\tau \right) -\\ & \lambda^{-1}I'f\left(\lambda^{-1}x + \int_{0}^{\lambda^{-3}t}\|\lambda v'(\lambda \cdot,\lambda^{3}\tau )\|_{L^{2}}^{2}d\tau \right) \Biggl|\Biggl|_{L^{2}} \\ &\lesssim \left\|\lambda^{-1}\int_{0}^{1}\frac{d}{d\theta}I'f(\lambda^{-1}x + \theta\alpha(t) + (1-\theta)\beta(t))d\theta\right\|_{L^{2}} \end{split}$$ where $$\alpha(t) = \int_{0}^{\lambda^{-3}t}\|\lambda v(\lambda \cdot,\lambda^{3}\tau )\|_{L^{2}}^{2}d\tau \hspace{6mm}\text{and}\hspace{6mm} \beta(t) = \int_{0}^{\lambda^{-3}t}\|\lambda v'(\lambda \cdot,\lambda^{3}\tau )\|_{L^{2}}^{2}d\tau.$$ Now, from the mean value theorem and the translation invariance of the $L^{2}$ norm, we get $$\begin{aligned} \|I'g-I'g'\|_{L^{2}} \lesssim \|I'g\|_{H^{1}}\|v-v'\|_{X^{0,\frac{1}{2}}}.\end{aligned}$$ Similarly for Equation , we get $$\begin{aligned} \|g-g'\|_{L^{2}} \lesssim \|g\|_{H^{1}}\|v-v'\|_{X^{0,\frac{1}{2}}}.\end{aligned}$$ The nonlinear term can be estimated similarly to the $4$-linear estimate of Lemma \[Energy 1 estimate\]. Note that the $4$-linear estimate involves a third-order derivative, whereas the nonlinear term involves only a first-order one. We can treat the nonlinear term by the same cases as given in Integrals $(1)-(3)$ and prove the estimate. Hence, we can use the contraction principle. 
This shows that the solution $u \in X^{1,\frac{1}{2}}.$ We need to show that the solution belongs to $Y^{1}.$ But from Proposition \[TL for L\^[1]{}\], the nonlinear term of the integral equation belongs to $Y^{1}.$ In the same way, we can verify the other two terms of the integral equation by using the Schwarz inequality. Therefore, the solution $u \in Y^{1}.$ Trilinear Estimate ================== Define an operator $J$ such that $$\begin{aligned} \label{operator J} \hat{J}[u,v,w] = i\frac{k}{3} \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }}\hat{u}(k_{1})\hat{v}(k_{2})\hat{w}(k_{3}) -i k\hat{u}(k)\hat{v}(k)\hat{w}(-k).\end{aligned}$$ where $\hat{u}$ and $\tilde{u}$ denote the Fourier transforms in the $x$ variable and in both the $x$ and $t$ variables, respectively. We establish the following trilinear estimate for $J$: \[TL Main result\] Let $s \geq \frac{1}{2}$ and let $u,v,w \in {X^{s,\frac{1}{2}}}$ be $\lambda$-periodic in the $x$ variable. Then, we have $$\label{TL1} \|J[u,v,w]\|_{X^{s,-\frac{1}{2}}} \leq C\lambda^{0+}\|u\|_{X^{s,\frac{1}{2}}}\|v\|_{X^{s,\frac{1}{2}}}\|w\|_{X^{s,\frac{1}{2}}}.$$ We note that if $u$ is real valued, then $$\label{TL2} J[u,u,u] = \left(u^{2} - \frac{1}{2\pi}\|u\|^{2}_{L^{2}}\right)\partial_{x}u,$$ which yields the nonlinearity of mKdV. The first term and the second term of (\[operator J\]) can be estimated in $H^{s}$ for $s\geq \frac{1}{4}$ and $s \geq \frac{1}{2}$, respectively. So, the bound $s=\frac{1}{2}$ comes from the second term. 
Simple computations yield $$\begin{aligned} \left(u^{2} - \frac{1}{2\pi}\|u\|^{2}_{L^{2}}\right)\partial_{x}u =& i \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})\neq 0 }} \hat{u}(k_{1})\hat{u}(k_{2})k_{3} \hat{u}(k_{3}) \\ =& i \lbrace \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }}\hat{u}(k_{1})\hat{u}(k_{2}) k_{3}\hat{u}(k_{3}) \\ &+ \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{3} + k_{1})\neq 0 \\ (k_{2} +k_{3}) = 0 }}\hat{u}(k_{1})\hat{u}(-k_{3}) k_{3}\hat{u}(k_{3}) \\ &+ \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{2} +k_{3})\neq 0 \\(k_{3} + k_{1}) =0 }}\hat{u}(-k_{3})\hat{u}(k_{2}) k_{3}\hat{u}(k_{3}) \\ &+ \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})\neq 0 \\ (k_{2} +k_{3}) = (k_{3} + k_{1}) = 0}} k_{3} \hat{u}(k_{1})\hat{u}(-k_{3})^{2} \rbrace \\ =& i\frac{k}{3} \{ \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }} \hat{u}(k_{1})\hat{u}(k_{2}) \hat{u}(k_{3}) \} \\ &- ik|\hat{u}(k)|^{2} \hat{u}(k).\end{aligned}$$ Note that the right-hand side of the above formula is equivalent to $\hat{J}.$ Therefore, the nonlinearity of the mKdV equation can be controlled once we prove Proposition \[TL Main result\]. If $u$ is a complex-valued function, then we have only to consider $$\left(|u^{2}| - \frac{1}{2\pi}\|u\|^{2}_{L^{2}}\right)\partial_{x}u - \frac{i}{2\pi}Im\langle \partial_{x}u, u \rangle_{L^{2}}u$$ instead of the left-hand side of the above equality. This yields the nonlinearity of the complex mKdV. \[Proof of Proposition \[TL Main result\]\] We first consider the trilinear estimate corresponding to the non-resonant frequencies. 
We claim that $$\left\| i \frac{k}{3} \int\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }}\hat{u}_{1}(k_{1})\hat{u}_{2}(k_{2})\hat{u}_{3}(k_{3})\right\|_{X^{s,-\frac{1}{2}}} \lesssim \prod\limits_{i=1}^{3} \|u_{i}\|_{X^{s,\frac{1}{2}}}.$$ From duality, it is enough to show $$\label{TL *} \left| \int\limits_{\substack{k_{1} +k_{2} + k_{3} + k_{4} = 0 \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }} \langle k_{1}\rangle\int\limits^{}_{\sum\limits_{i=1}^{4}\tau_{i} = 0}\prod\limits_{i=1}^{4} \tilde{u}_{i}(k_{i},\tau_{i})(dk_{i})_{\lambda}d\tau_{i}\right|\lesssim \prod\limits_{i=1}^{3} \|u_{i}\|_{X^{s,\frac{1}{2}}} \|u_{4}\|_{X^{-s,\frac{1}{2}}}.$$ Consider the LHS of (\[TL \*\]); denote the region of the first integration by $``*"$ and that of the second integration by $``**"$. Define $\sigma_{i} = \tau_{i} - 4\pi^{2} k^{3}_{i} \hspace{1.5mm} \text{for}\hspace{1.5mm}1 \leq i \leq 4.$ Multiply and divide by $\langle k_{4} \rangle^{-s} \langle \sigma_{4}\rangle^{\frac{1}{2}}$ to get $$\label{TL3} \left| \int\limits_{*} \int\limits^{}_{**} \langle k_{1} \rangle^{1} \langle k_{4}\rangle^{s} \langle \sigma_{4} \rangle^{-\frac{1}{2}} \tilde{u}_{1} \tilde{u}_{2} \tilde{u}_{3} (\langle k_{4} \rangle^{-s} \langle \sigma_{4} \rangle^{\frac{1}{2}} \tilde{u}_{4})\right|.$$ We divide this estimate into the following four cases: 1. Let $| \sigma_{4} | = \max\{| \sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ 2. Let $| \sigma_{3} | = \max\{| \sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ 3. Let $| \sigma_{2} | = \max\{| \sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ 4. Let $| \sigma_{1} | = \max\{| \sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ From the symmetry and the duality argument, it is enough to consider Case $1$, because the other cases can be treated in the same way. 
Since $k_{1} + k_{2} + k_{3} + k_{4} = 0$ and $\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4} =0,$ a simple calculation gives $$\label{TL4} \langle \sigma_{4} \rangle \gtrsim 3(|k_{1} + k_{2}||k_{2} + k_{3}||k_{3} + k_{1}|) \sim 3(|k_{2} + k_{3}||k_{3} + k_{4}||k_{4} + k_{2}|).$$ By symmetry, we can assume that $|k_{1}| \geq |k_{2}| \geq |k_{3}|.$ Now we can again subdivide each case into the following three subcases: - $|k_{1}| \sim |k_{2}| \sim |k_{3}| \sim |k_{4}|$ - $|k_{1}| \sim |k_{4}| \gg |k_{2}| \gtrsim |k_{3}|$ - $|k_{1}| \sim |k_{4}| \sim |k_{2}| \gtrsim |k_{3}|$ Note that there are other cases as well, but if we consider $|k_{1}|\gg |k_{4}|$, the derivative weight corresponding to $|k_{4}|$ becomes very small and the estimate is easy to verify. \[TL all are equal\] For **Case $1a$**, we give the following proof: Note that we wish to prove $$\begin{aligned} \label{TL M} \|\partial_{x}M(u,u,u)\|_{X^{s,-\frac{1}{2}}} \lesssim \|u\|^{3}_{X^{s,\frac{1}{2}}},\end{aligned}$$ where $$\mathcal{F}_{x}[M(u,v,w)] = \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\|k_{1}| \sim |k_{2}| \sim |k_{3}| }}\hat{u}(k_{1}) \hat{v}(k_{2}) \hat{w}(k_{3}),$$ and $\mathcal{F}_{x}$ denotes the Fourier transform in the $x$ variable. Hence, $$\begin{aligned} \|\partial_{x}M(u,u,u)\|_{X^{s,-\frac{1}{2}}} \sim & \left( \int\limits_{k}^{} \langle k \rangle^{3} \left( \int\limits_{-\infty}^{\infty} \langle \sigma \rangle^{-1} \left| \mathcal{F}_{x,t}[M(u,u,u)] \right|^{2} d\tau \right) (dk)_{\lambda} \right)^{\frac{1}{2}} \\ \sim & \| (\langle k \rangle^{\frac{1}{2}}|\tilde{u}|)^{3} \langle \sigma \rangle^{-\frac{1}{2}} \|_{L^{2}(\mathbb{T} \times \mathbb{R})},\end{aligned}$$ where $\mathcal{F}_{x,t}$ is the Fourier transform in both the $x$ and $t$ variables. 
Let $\tilde{v}(k,\tau) = \langle k \rangle^{\frac{1}{2}} |\tilde{u}(k,\tau)|.$ Hence, we get $$\begin{aligned} \| (\langle k \rangle^{\frac{1}{2}}|\tilde{u}|)^{3} \langle \sigma \rangle^{-\frac{1}{2}} \|_{L^{2}(\mathbb{T} \times \mathbb{R})} &\lesssim \|v^{3}\|_{X^{0,-\frac{1}{2}}} \\ &\lesssim \|v^{3}\|_{L^{\frac{4}{3}}(\mathbb{T} \times \mathbb{R})}. \end{aligned}$$ From the duality of the Strichartz estimate and Proposition \[st1\], we get $$\begin{aligned} \| (\langle k \rangle^{\frac{1}{2}}|\tilde{u}|)^{3} \langle \sigma \rangle^{-\frac{1}{2}} \|_{L^{2}(\mathbb{T} \times \mathbb{R})} &\lesssim \|v\|^{3}_{L^{4}(\mathbb{T} \times \mathbb{R})} \\ &\lesssim \lambda^{0+}\|u\|^{3}_{X^{s,\frac{1}{2}}}.\end{aligned}$$ Therefore, we can handle Case $1a$ directly. **Case $1b.$** We assume that the Fourier supports of the functions $u_j$ satisfy $$\begin{aligned} & |k_1|\sim|k_4| \gg |k_2|, |k_3|, \nonumber\\ & |\sigma_4| \gtrsim |k_2 + k_3||k_3 + k_4||k_4 + k_2|,\nonumber \\ & \frac{1}{\lambda} \leq |k_2 + k_3| \leq 1. \label{periodic 6}\end{aligned}$$ **Remark 1.** The restriction $k_1 + k_2 + k_3 + k_4 = 0$ and the assumption imply that $|k_1| \sim |k_4|.$ But it does not follow that $|k_2| \sim |k_3|$ unless (\[periodic 6\]) is additionally assumed.\ We prove the following estimate of the quadrilinear functional on $\mathbb{R} \times \lambda\mathbb{T}$ with parameter $\lambda \geq 1$. Under the above conditions, we have $$\begin{aligned} & \left| \int\limits_{*} \int\limits^{}_{**} \langle k_{1} \rangle^{1} \langle k_{4}\rangle^{s} \langle \sigma_{4} \rangle^{-\frac{1}{2}} \tilde{u}_{1} \tilde{u}_{2} \tilde{u}_{3} (\langle k_{4} \rangle^{-s} \langle \sigma_{4} \rangle^{\frac{1}{2}} \tilde{u}_{4})\right| \nonumber \\ \lesssim & (1 + \lambda^{0+}) \min \lbrace \|u_2\|_{X^{1/4+,1/2}} \|u_3\|_{X^{0,1/2}} , \|u_2\|_{X^{0,1/2}}\|u_3\|_{X^{1/4+,1/2}} \rbrace \times \|u_1\|_{X^{s,1/2}} \|u_4\|_{X^{-s,1/2}}. 
\label{periodic 1}\end{aligned}$$ We follow the argument in [@CKSTT02 Case 3 in the proof of Proposition 5 on page 733-734]. We first note that $$\begin{aligned} \label{periodic 2} |\sigma_4| \gtrsim |k_2+k_3||k_1|^{2}.\end{aligned}$$ From the Plancherel theorem, inequality (\[periodic 2\]) and the Sobolev embedding, the left side of (\[periodic 1\]) can be bounded by the following inequalities. $$\begin{aligned} &\left| \int\limits_{*} \int\limits^{}_{**} \langle k_{1} \rangle^{1} \langle k_{4}\rangle^{s} \langle \sigma_{4} \rangle^{-\frac{1}{2}} \tilde{u}_{1} \tilde{u}_{2} \tilde{u}_{3} (\langle k_{4} \rangle^{-s} \langle \sigma_{4} \rangle^{\frac{1}{2}} \tilde{u}_{4})\right| \nonumber\\ & \lesssim \int\limits_{*} \int\limits^{}_{**} \langle k_{1} \rangle^{s} |\bar{\tilde{u}}_1 (k_1)| (|k_2 + k_3|^{-1/2} |\tilde{u}_2 (k_2)| |\tilde{u}_3 (k_3)|)|\sigma_4|^{1/2} |k_{4}|^{-s} |\tilde{u}_4 (k_4)| d\tau \nonumber \\ & \lesssim \|D_{x}^{s}v_1\|_{L^{4}(\mathbb{R} \times \lambda \mathbb{T})} \|D_{x}^{-1/2} (v_2 v_3)\|_{L^{4}(\mathbb{R} \times \lambda \mathbb{T})} \|v_4\|_{X^{-s,1/2}}\nonumber \\ & \lesssim \|v_1\|_{X^{s,1/3+}} \|D_{x}^{-1/4} (v_2 v_3)\|_{L^{4}(\mathbb{R};L^{2}(\lambda \mathbb{T}))} \|v_4\|_{X^{-s,1/2}}, \label{periodic 3}\end{aligned}$$ where $\tilde{v_j} = |\tilde{u_j}|$. 
Furthermore, by Plancherel’s theorem, the restriction $1/\lambda \leq |k_{2} + k_{3}| \leq 1,$ the Schwarz inequality and Young’s inequality, we have $$\begin{aligned} &\|D_{x}^{-1/4} (v_2 v_3)\|^{2}_{L^{2}(\lambda\mathbb{T})} \lesssim \int\limits_{1/ \lambda \leq |k_{23}| \leq 1} |k_{23}|^{-1/2} \left|\hspace{1.5mm} \int\limits_{k_{23} = k_{2} + k_{3}} \tilde{v_{2}}(k_{2})\tilde{v_{3}}(k_{3}) \right|^{2} \nonumber\\ &\lesssim \Bigg( \int\limits_{1/ \lambda \leq |k_{23}| \leq 1} |k_{23}|^{-1} \Bigg)^{1/2} \Bigg(\int\limits_{1/ \lambda \leq |k_{23}| \leq 1} \Bigg|\hspace{1.5mm} \int\limits_{k_{23} = k_{2} + k_{3}} \tilde{v_{2}}(k_{2})\tilde{v_{3}}(k_{3}) \Bigg|^{4}\Bigg)^{1/2} \nonumber \\ &\lesssim (1 + \log\lambda)^{1/2} \min \lbrace \|v_2\|_{L^{2}(\lambda\mathbb{T})}^{2} \|v_3\|_{H^{1/4 +}(\lambda\mathbb{T})}^{2} , \|v_3\|_{L^{2}(\lambda\mathbb{T})}^{2} \|v_2\|_{H^{1/4 +}(\lambda\mathbb{T})}^{2} \rbrace. \label{periodic 4}\end{aligned}$$ Integrating the square of (\[periodic 4\]) in $t$ over $\mathbb{R}$ yields $$\begin{aligned} \|D_{x}^{-1/4} (v_2 v_3)\|^{2}_{L^{4}(\mathbb{R} ;L^{2}(\lambda\mathbb{T}))} \nonumber \lesssim (1 + \lambda^{0+}) \min \lbrace \|v_2\|_{L^{8}(\mathbb{R};L^{2}(\lambda\mathbb{T}))}^{2}\\ \|v_3\|_{L^{8}(\mathbb{R};H^{1/4 +}(\lambda\mathbb{T}))}^{2} , \|v_3\|_{L^{8}(\mathbb{R};L^{2}(\lambda\mathbb{T}))}^{2} \|v_2\|_{L^{8}(\mathbb{R};H^{1/4 +}(\lambda\mathbb{T}))}^{2} \rbrace \nonumber \\ \lesssim (1 + \lambda^{0+}) \min \lbrace \|v_2\|^{2}_{X^{0,1/2}} \|v_3\|^{2}_{X^{1/4 +,1/2}}, \|v_3\|^{2}_{X^{0,1/2}} \|v_2\|^{2}_{X^{1/4 +,1/2}} \rbrace. \label{periodic 5}\end{aligned}$$ Accordingly, from (\[periodic 3\])-(\[periodic 5\]) we obtain the desired inequality (\[periodic 1\]).
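For completeness, we record the algebra behind the resonance bounds (\[TL4\]) and (\[periodic 2\]); the following is a sketch, with $\sigma_{i} = \tau_{i} - k_{i}^{3}$ taken up to the normalization constant of the dispersion relation:

```latex
% Under the convolution constraints k_1 + k_2 + k_3 + k_4 = 0 and
% \tau_1 + \tau_2 + \tau_3 + \tau_4 = 0:
\sum_{i=1}^{4} \sigma_{i}
  = -\sum_{i=1}^{4} k_{i}^{3}
  = (k_{1}+k_{2}+k_{3})^{3} - k_{1}^{3} - k_{2}^{3} - k_{3}^{3}
  = 3\,(k_{1}+k_{2})(k_{2}+k_{3})(k_{3}+k_{1}).
```

Hence $\max_{i}\langle \sigma_{i} \rangle \gtrsim |k_{1}+k_{2}||k_{2}+k_{3}||k_{3}+k_{1}|$, and since $k_{1}+k_{2} = -(k_{3}+k_{4})$ (and similarly for the other pairs), the second, symmetric form in (\[TL4\]) follows. In the regime $|k_{1}|\sim|k_{4}|\gg|k_{2}|,|k_{3}|$, two of the three factors are of size $|k_{1}|$, which gives (\[periodic 2\]).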
**Case 1c.** Inequality (\[periodic 2\]) becomes $$|\sigma_{4}| \gtrsim |k_{2} + k_{4}||k_{1}|^{2}.$$ Therefore, we can estimate Case **1c** in the same way as Case **1b**.\ **For the resonant part** (the second term of the operator $J$ (\[operator J\])), the proof is similar to Lemma \[TL all are equal\], with $M$ defined in formula (\[TL M\]) replaced by the following: $$\mathcal{F}_{x}[M(u,u,u)] = |\hat{u}(k)|^{2} |\hat{u}(k)| .$$ Now, we prove the trilinear estimate corresponding to the function space $Z^{s}$: \[TL for L\^[1]{}\] For $s \geq \frac{1}{2} \hspace{1mm}\text{and}\hspace{1mm}u,v,w \in {X^{s,\frac{1}{2}}}$, we have $$\label{TL5} \|J[u,v,w]\|_{Z^{s}} \leq C \lambda^{0+} \|u\|_{Y^{s}}\|v\|_{Y^{s}}\|w\|_{Y^{s}}.$$ From Proposition \[TL Main result\], it is enough to show $$\|\langle k \rangle^{s} \langle k \rangle \langle \sigma \rangle^{-1} J[u,v,w]\|_{L^{2}_{(dk)_{k}}L^{1}_{d\tau}} \leq C\|u\|_{X^{s,\frac{1}{2}}}\|v\|_{X^{s,\frac{1}{2}}}\|w\|_{X^{s,\frac{1}{2}}}.$$ Similar to Proposition \[TL Main result\], we also divide this problem into the following four cases. 1. Let $|\sigma| = \max\{|\sigma|,|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 3 \}.$ 2. Let $|\sigma_{1}| = \max\{|\sigma|,|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 3 \}.$ 3. Let $|\sigma_{2}| = \max\{|\sigma|,|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 3 \}.$ 4. Let $|\sigma_{3}| = \max\{|\sigma|,|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 3 \}.$ Case $1$ is the worst one.
Indeed, in the other cases we have, by the Schwarz inequality, $$\begin{aligned} &\|\langle k \rangle^{s} \langle k \rangle \langle \sigma \rangle^{-1} \sum\limits_{k} \hat{u}_{1} \hat{u}_{2} \hat{u}_{3} \|_{L^{2}_{(dk)_{k}}L^{1}_{\tau}} \\ &\lesssim \left\|\left( \int\limits_{-\infty}^{\infty} \frac{1}{\langle \sigma \rangle^{2(\frac{1}{2} + \epsilon)}} d\tau \right)^{\frac{1}{2}} \left( \int\limits_{-\infty}^{\infty} \frac{\langle k \rangle^{2s} \langle k \rangle^{2}}{\langle \sigma \rangle^{2(\frac{1}{2} - \epsilon)}} \left| \sum\limits_{k}^{} \hat{u}_{1} \hat{u}_{2} \hat{u}_{3} \right|^{2} d\tau \right)^{\frac{1}{2}} \right\|_{L^{2}_{(dk)_{\lambda}}}. \\ &\lesssim C \left\| \frac{\langle k \rangle \langle k \rangle^{s}}{\langle \sigma \rangle^{(\frac{1}{2} - \epsilon)}} \sum\limits_{k}^{} \hat{u}_{1} \hat{u}_{2} \hat{u}_{3} \right\|_{L^{2}_{(dk)_{k}}L^{2}_{\tau}},\end{aligned}$$ and hence it reduces to the same proof as in Proposition \[TL Main result\]. Therefore, we only have to prove Case $1.$ By symmetry, assume that $|k_{1}| \geq |k_{2}| \geq |k_{3}|.$ We further divide Case $1$ into the following three cases: - $|k_{1}| \sim |k_{2}| \sim |k_{3}|.$ - $|k_{1}| \gg |k_{2}| \gtrsim |k_{3}|.$ - $|k_{1}| \sim |k_{2}| \gg |k_{3}|.$ **Case $1a$**. By the Schwarz inequality, we have $$\begin{aligned} & \int\limits_{-\infty}^{\infty} \langle \sigma \rangle^{-1} |\mathcal{F}_{t,x}[M(u,u,u)]|d\tau \\ &\leq \left(\int\limits_{-\infty}^{\infty} \langle \sigma \rangle^{-1 - \epsilon}d\tau \right)^{\frac{1}{2}} \left(\int\limits_{-\infty}^{\infty} \langle \sigma \rangle^{-1 + \epsilon}|\mathcal{F}_{t,x}[M(u,u,u)]|^{2} d\tau \right)^{\frac{1}{2}},\end{aligned}$$ where $M$ is defined in (\[TL M\]). This case reduces to Lemma \[TL all are equal\].\ **Case $1b$**. In this case, we can clearly see that $\langle \sigma \rangle \gtrsim \langle \sigma \rangle + |k_{2} + k_{3}|\langle k \rangle^{2}$.
Due to symmetry, we can assume that $|k| \sim |k_{1}|.$ By using the Schwarz inequality, we get $$\begin{aligned} &\left\|\sum\limits_{k} \langle k \rangle^{s} \langle k \rangle \langle \sigma \rangle^{-1} \hat{u}_{1} \hat{u}_{2} \hat{u}_{3} \right\|_{L^{2}_{(dk)_{k}}L^{1}_{\tau}} \\ &\lesssim \left\|\sum\limits_{k}^{}\left( \int\limits_{-\infty}^{\infty} \frac{\langle k \rangle^{2}}{\langle \sigma \rangle^{2}} d\tau \right)^{\frac{1}{2}} \left( \int\limits_{-\infty}^{\infty} \langle k \rangle^{2s} \left| \hat{u}_{1} \hat{u}_{2} \hat{u}_{3} \right|^{2} d\tau \right)^{\frac{1}{2}} \right\|_{L^{2}_{(dk)_{\lambda}}}.\end{aligned}$$ As we can see, $$\begin{aligned} \left( \int\limits_{-\infty}^{\infty} \frac{\langle k \rangle^{2}}{\langle \sigma \rangle^{2}} d\tau \right)^{\frac{1}{2}} &\lesssim \left( \int\limits_{-\infty}^{\infty} \frac{\langle k \rangle^{2}}{(\langle \sigma \rangle + |k_{2} + k_{3}| \langle k \rangle^{2})^{2}} d\tau \right)^{\frac{1}{2}}, \\ &= \left( \int\limits_{-\infty}^{\infty} \frac{\langle k \rangle^{2}}{(|\tau - k^{3}| + |k_{2} + k_{3}|\langle k \rangle^{2})^{2}} d\tau \right)^{\frac{1}{2}}, \\ &\leq \left( \int\limits_{-\infty}^{k^{3}} \frac{\langle k \rangle^{2}}{( k^{3}-\tau +|k_{2} + k_{3}| \langle k \rangle^{2})^{2}} d\tau \right)^{\frac{1}{2}} + \left( \int\limits_{k^{3}}^{\infty} \frac{\langle k \rangle^{2}}{(\tau - k^{3} + |k_{2} + k_{3}|\langle k \rangle^{2})^{2}} d\tau \right)^{\frac{1}{2}} \\ & \lesssim C|k_{2} + k_{3}|^{-1/2}.\end{aligned}$$ Hence, from Hölder’s inequality, Proposition \[st1\] and inequality (\[periodic 5\]), we get $$\begin{aligned} &\left\|\sum\limits_{k}^{} |k_{2} + k_{3}|^{-1/2} \langle k \rangle^{s} \hat{u}_{1} \hat{u}_{2} \hat{u}_{3} \right\|_{L^{2}_{(dk)_{\lambda}} L^{2}_{\tau}} \\ &\sim \left\| \sum\limits_{k}^{} (|k_{1}|^{s}\hat{u}_{1}) (|k_{2} + k_{3}|^{-1/2}\hat{u}_{2} \hat{u}_{3}) \right\|_{L^{2}_{(dk)_{\lambda}} L^{2}_{\tau}}, \\ &\lesssim \|D_{x}^{s}u_{1}\|_{L^{4}_{x,t}}
\|D_{x}^{-\frac{1}{2}}(u_{2}u_{3})\|_{L^{4}_{x,t}} \\ &\lesssim \lambda^{0+} \|u_{1}\|_{X^{s,\frac{1}{3}+}} \|u_{2}\|_{X^{\frac{1}{4}+,\frac{1}{2}}} \|u_{3}\|_{X^{0,\frac{1}{2}}}.\end{aligned}$$ The estimate for the resonant term follows in the same way as Case $1a$. Let $u = u_{L} + u_{H}$ where $supp \hspace{1mm}\hat{u}_{L}(k) \subset \{|k| \ll N\}$ and $supp \hspace{1mm}\hat{u}_{H}(k) \subset \{|k| \gtrsim N\}.$ We prove the following corollary: \[TL corollary\] Let $0 \leq \epsilon \ll 1.$ Let $u,v,w \in X^{s,\frac{1}{2} - \epsilon}.$ Then, the following three estimates hold: 1. If $u,v$ are low-frequency and $w$ is a high-frequency function, then we have $$\begin{aligned} &\left\|(u_{L}v_{L} - \sum\limits_{l = -\infty}^{\infty}\hat{u}_{L}(l)\hat{v}_{L}(-l))w_{H}\right\|_{X^{1-2\epsilon,-\frac{1}{2} + \epsilon}} \\ &\lesssim\lambda^{0+} C\min\{ \|u_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|v_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}, \|v_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|u_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}\}\|w_{H}\|_{X^{0,\frac{1}{2} - \frac{\epsilon}{2}}}.\end{aligned}$$ 2. If $v,w$ are high-frequency and $u$ is a low-frequency function, then $$\begin{aligned} &\left\|(u_{L}v_{H} - \sum\limits_{l = -\infty}^{\infty}\hat{u}_{L}(l)\hat{v}_{H}(-l))w_{H}\right\|_{X^{1-2\epsilon,-\frac{1}{2} + \epsilon}} \\ &\lesssim\lambda^{0+} C\min\{ \|u_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|v_{H}\|_{X^{0 ,\frac{1}{2} - \epsilon}}, \|v_{H}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|u_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}\}\|w_{H}\|_{X^{0,\frac{1}{2} - \frac{\epsilon}{2}}}.\end{aligned}$$ 3.
If $u,v\hspace{1.2mm}\text{and}\hspace{1.2mm} w$ are all high-frequency functions, then $$\begin{aligned} &\left\|(u_{H}v_{H} - \sum\limits_{l = -\infty}^{\infty}\hat{u}_{H}(l)\hat{v}_{H}(-l))w_{H}\right\|_{X^{-2\epsilon,-\frac{1}{2} + \epsilon}} \\ & \lesssim \lambda^{0+} \|u_{H}\|_{X^{0,\frac{7}{18} + \epsilon}} \|v_{H}\|_{X^{0,\frac{7}{18} + \epsilon}} \|w_{H}\|_{X^{0,\frac{7}{18} + \epsilon}}.\end{aligned}$$ **1.** We know that $$\mathcal{F}_{x}\left[ (u_{L}v_{L} - \sum\limits_{l = -\infty}^{\infty}\hat{u}_{L}(l)\hat{v}_{L}(-l))w_{H} \right] = \sum\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\ k_{1} + k_{2} \neq 0 \\ (k_{1} + k_{2})(k_{2} + k_{3})(k_{3} + k_{1}) \neq 0}} \hat{u}_{L}(k_{1})\hat{v}_{L}(k_{2})\hat{w}_{H}(k_{3}),$$ where $\mathcal{F}_{x}$ denotes the Fourier transform in the $x$ variable. Hence, we need to show that $$\begin{aligned} &\left\| \sum\limits_{k}^{} e^{ikx} \int\limits_{\substack{k_{1} +k_{2} + k_{3} = k \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }} \langle k \rangle^{1-2\epsilon} \hat{u}_{L}(k_{1}) \hat{v}_{L}(k_{2}) \hat{w}_{H}(k_{3}) \right\|_{X^{0,-\frac{1}{2} + \epsilon}} \\ &\lesssim C\min\{ \|u_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|v_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}, \|v_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|u_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}\}\|w_{H}\|_{X^{0 ,\frac{1}{2} - \frac{\epsilon}{2}}}.\end{aligned}$$ From duality, it is enough to show $$\begin{aligned} \label{TL6} &\left| \int\limits_{\substack{k_{1} +k_{2} + k_{3} + k_{4} = 0 \\(k_{1} + k_{2})(k_{2} +k_{3})(k_{3} + k_{1})\neq 0 }} \int\limits_{\sum\limits^{4}_{i=1}\tau_{i} = 0}^{} \langle k_{4} \rangle^{1 -2\epsilon} \tilde{u}_{1}(k_{1}) \tilde{u}_{2}(k_{2}) \tilde{u}_{3}(k_{3}) \tilde{u}_{4}(k_{4}) \right| \\ &\lesssim C\min\{ \|u_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}} \|v_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}, \|v_{L}\|_{X^{\frac{1}{2} + \epsilon,\frac{1}{2} - \epsilon}}
\|u_{L}\|_{X^{0 ,\frac{1}{2} - \epsilon}}\}\|w_{H}\|_{X^{0 ,\frac{1}{2} - \frac{\epsilon}{2}}} \nonumber.\end{aligned}$$ where $u_{1} = u_{L},\hspace{1mm} u_{2} = v_{L},\hspace{1mm} u_{3} = w_{H}$ and let $u_{4} = u_{L} + u_{H}.$ Let $\sigma_{i} = \tau_{i} - 4\pi^{2}k^{3}_{i} \hspace{1.5mm} \text{for} \hspace{1.5mm} 1 \leq i \leq 4.$ We divide the proof into the following four cases: 1. Let $|\sigma_{4}| = \max\{|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ 2. Let $|\sigma_{1}| = \max\{|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ 3. Let $|\sigma_{2}| = \max\{|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ 4. Let $|\sigma_{3}| = \max\{|\sigma_{i}| \hspace{1mm}\text{for} \hspace{1mm} 1 \leq i \leq 4 \}.$ It is enough to prove Case $1$, because the other cases can be treated in the same way. According to the given conditions, we have $|k_{1}|, |k_{2}| \ll N'$ and $|k_{3}| \sim |k_{4}| \gtrsim N'.$ So, from (\[TL4\]), $\langle \sigma_{4} \rangle \gtrsim \langle k_{4} \rangle ^{2}|k_{3} + k_{4}| \hspace{1.5mm} \text{and} \hspace{1.5mm} 1/\lambda \leq |k_{3} + k_{4}| \leq 1.$ Denote the region of the first integration by $``*"$ and the region of the second integration by $``**".$ By using Plancherel’s theorem and Hölder’s inequality, for the term (\[TL6\]) we get $$\begin{aligned} &\left| \int\limits_{* } \int\limits_{**}^{} \langle k_{4}\rangle^{1-2\epsilon} \tilde{u}_{1} \tilde{u}_{2} \tilde{u}_{3} \tilde{u}_{4} \right| \\ &\lesssim \left| \int\limits_{* } \int\limits_{**}^{} \langle k_{4} \rangle^{1-2\epsilon} \langle k_{4} \rangle^{-1+2\epsilon} (|k_{1} + k_{2}|^{-1/2}|\tilde{u}_{1}| |\tilde{u}_{2}|) |\tilde{u}_{3}| (|\tilde{u}_{4}|\langle \sigma_{4} \rangle^{\frac{1}{2} - 2\epsilon}) \right|, \\ &\lesssim \|D_{x}^{-1/2}(v_{1}v_{2})\|_{L^{4}_{x,t}} \|v_{3}\|_{L^{4}_{x,t}} \|\tilde{v}_{4}\langle \sigma_{4} \rangle^{\frac{1}{2} - 2\epsilon} \|_{L^{2}_{k,\tau}}.\end{aligned}$$ for
$\tilde{v}_{j} = |\tilde{u}_{j}|.$ From Sobolev embedding, inequality (\[periodic 5\]) and Proposition \[st1\], we get the desired inequality.\ **2.** We can prove this case along similar lines.\ **3.** From a duality argument and Proposition \[st1\], we get the desired estimate. \[Infinity Estimate\] $$\|u\|_{L^{\infty}_{x,t}} \lesssim \|u\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} + \epsilon}}.$$ $$\|u\|_{L^{\infty}_{t}L^{2}_{x}}^{2} = \sup\limits_{t \in \mathbb{R}}^{}\|U(-t)u(t)\|_{L^{2}_{x}}^{2},$$ where $U(t) = e^{-t\partial_{x}^{3}}.$ By Sobolev embedding in $t$, we have $$\begin{aligned} \sup\limits_{t \in \mathbb{R}}^{}\|U(-t)u(t)\|_{L^{2}_{x}}^{2} &\lesssim \int \sup\limits_{t \in \mathbb{R}}^{}|U(-t)u(t)|^{2}dx \\ &\lesssim \int \left\| \langle \partial_{t} \rangle^{\frac{1}{2} + \epsilon} U(-t)u(t) \right\|_{L^{2}_{t}}^{2}dx \\ &\sim \|u\|^{2}_{X^{0,\frac{1}{2} +\epsilon}}.\end{aligned}$$ Hence, we get $$\|u\|_{L^{\infty}_{t}L^{2}_{x}}^{2} \lesssim \|u\|^{2}_{X^{0,\frac{1}{2} +\epsilon}}.$$ A Priori Estimate ================= In this section, we show the a priori estimates of the solution to the mKdV equation which are needed for the proof of Theorem \[intro theorem\].
The energy for the mKdV equation is given by $$\label{Priori energy} E(u) = \int (\partial_{x}u)^{2}- (u)^{4} dx.$$ For the operator $I',$ we have $$E(I'v) = \int (\partial_{x}I'v)^{2}- (I'v)^{4} dx.$$ From equations (\[rescaled1\])-(\[rescaled2\]), we obtain $$\begin{aligned} \label{Energy} \dv{(E(I'v))}{t} =& \left[\int (-\partial_{x}^{2} I'v - (I'v)^{3})(-\partial_{x}^{3} I'v - \partial_{x} I'v^{3})\right] \nonumber \\ &+ \left[\int -\lambda^{-3} \partial_{x}^{2} I'v I'g - \lambda^{-3} (I'v)^{3} I'g + \dfrac{1}{2}(I'v)^{4} \gamma \lambda^{-3}\right].\end{aligned}$$ For a Banach space $X,$ we define the space $L^{\infty}_{T'}X$ via the norm: $$\|u\|_{L^{\infty}_{T'}X} = \sup_{t \in [0,T']} \|u(t)\|_{X}.$$ Multiplying equation (\[rescaled1\]) by $v$ and integrating in $x$, we obtain the following lemma: \[L2 bound\] $$\|v(t)\|_{L^{2}}^{2} \lesssim \|v_{0}\|^{2}_{L^{2}}\hspace{1mm} exp(-\gamma \lambda^{-3}t) + \frac{\lambda^{-3}}{\gamma}\|g\|_{L^{\infty}_{t}L^{2}}^{2}(1- exp(-\gamma \lambda^{-3} t)).$$ We establish the following lemma: \[Energy 2 estimate\] Let $v$ be the solution of the IVP (\[rescaled1\])-(\[rescaled2\]) for $t\in [0,T'].$ Then, we have $$\begin{aligned} \label{eq11} \|I'v(T')\|_{L^{2}}^{2} exp(\gamma \lambda^{-3}T') \leq C_{1}(\|v(0)\|_{L^{2}}^{2} + \frac{1}{\gamma}\|g\|_{L^{2}}^{2} exp(\gamma \lambda^{-3}T'))\end{aligned}$$ and $$\begin{aligned} \label{eq12} \|I'v(T')\|^{2}_{\dot H^{1}} exp(\gamma \lambda^{-3} T') \leq C_{1} \Big(\|I'v(0)\|^{2}_{\dot H^{1}} + \frac{1}{\gamma^{2}} \|I'g\|^{2}_{L^{\infty}_{T'}\dot H^{1}} exp(\gamma \lambda^{-3} T') \nonumber \\ + \|v(0)\|_{L^{2}}^{6} + \frac{1}{\gamma^{4}} \|g\|^{6}_{L^{2}}exp(\gamma \lambda^{-3}T')\Big) +\left| \int \limits_{0}^{T'} M(t)dt\right|,\end{aligned}$$ where $$M(t) =exp(\gamma \lambda^{-3} t) \int_{\lambda\mathbb{T}} \lbrace -\partial_{x}^{2} I'v - (I'v)^{3}\rbrace \lbrace-\partial_{x} I'v^{3}-\partial_{x}^{3} I'v \rbrace.$$ Similar to Lemma \[L2 bound\], we have $$\begin{aligned}
\frac{d}{dt}\left(\|v(t)\|_{L^{2}}^{2}\, exp(\gamma \lambda^{-3}t)\right) &= \left( -\gamma\lambda^{-3}\|v(t)\|^{2}_{L^{2}} + 2\lambda^{-3} \int_{\lambda\mathbb{T}}v(t)g(t)dx \right)exp(\gamma\lambda^{-3}t) \\ &\leq \frac{\lambda^{-3}}{\gamma}\|g\|^{2}_{L^{2}}exp(\gamma\lambda^{-3}t).\end{aligned}$$ Integrating over $[0,T']$ and using the definition of the operator $I$, we get (\[eq11\]). From equations (\[rescaled1\])-(\[rescaled2\]), we get $$\begin{aligned} &\dv{}{t'} \left( E(I'v(t'))exp(\gamma \lambda^{-3} t') \right) \hspace{4.5cm} \\ =& \dv{}{t'} E(I'v(t'))exp(\gamma \lambda^{-3} t') + \gamma \lambda^{-3} E(I'v(t'))exp(\gamma \lambda^{-3} t'), \\ =& \left[ \int \lbrace- \partial_{x}^{2} I'v - (I'v)^{3} \rbrace \lbrace \lambda^{-3}I'g - \gamma \lambda^{-3}I'v - \partial_{x}^{3} I'v - \partial_{x} I'v^{3} \rbrace \right]exp(\gamma \lambda^{-3} t') \\ &+ \gamma \lambda^{-3} exp(\gamma \lambda^{-3} t') \int \frac{1}{2} (\partial_{x} I'v)^{2} - \dfrac{1}{4}(I'v)^{4}, \\ =& \left[\int (-\partial_{x}^{2} I'v - (I'v)^{3})(-\partial_{x}^{3} I'v - \partial_{x} I'v^{3})\right]exp(\gamma \lambda^{-3} t') \\ &+ \left[\int (-\partial_{x}^{2} I'v - (I'v)^{3})({\lambda}^{-3} I'g - \gamma \lambda^{-3} I'v)\right]exp(\gamma \lambda^{-3} t') \\ &+ \gamma \lambda^{-3} exp(\gamma \lambda^{-3} t')\int \dfrac{1}{2} (\partial_{x} I'v)^{2} - \dfrac{1}{4}(I'v)^{4}, \\ =& M(t')+ \left[\int -\lambda^{-3} \partial_{x}^{2} I'v I'g - \lambda^{-3} (I'v)^{3} I'g -\frac{1}{2}\gamma\lambda^{-3}(\partial_{x}I'v)^{2}+ \dfrac{3}{4}(I'v)^{4} \gamma \lambda^{-3}\right] exp(\gamma \lambda^{-3} t').
\end{aligned}$$ Substituting the value of $E$, integrating over $[0,T']$, taking absolute values on both sides and using the Gagliardo–Nirenberg inequality, we get $$\begin{aligned} &\left(\|I'v(T')\|_{\dot H^{1}}^{2} - \|I'v(T')\|_{L^{4}}^{4} \right)exp(\gamma\lambda^{-3}T')\\ =& \|I'v(0)\|_{\dot H^{1}}^{2} - \|I'v(0)\|_{L^{4}}^{4} +\int\limits_{0}^{T'} M(t')dt' +\int\limits_{0}^{T'} \Big[\int -\lambda^{-3} \partial_{x}^{2} I'v I'g - \lambda^{-3} (I'v)^{3} I'g \\ &-\frac{1}{2}\gamma\lambda^{-3}(\partial_{x}I'v)^{2}+ \dfrac{3}{4}(I'v)^{4} \gamma \lambda^{-3}\Big] exp(\gamma \lambda^{-3} t')dt', \\ \lesssim &\|I'v(0)\|_{\dot H^{1}}^{2} - \|I'v(0)\|_{L^{4}}^{4}+ \left| \int\limits_{0}^{T'} M(t')dt'\right| + \lambda^{-3}\int\limits_{0}^{T'}\Big[\|I'g\|_{\dot H^{1}}\|I'v(t')\|_{\dot H^{1}} \\ &+ \|I'v(t')\|_{\dot H^{1}}\|I'v(t')\|^{2}_{L^{2}}\|I'g\|_{L^{2}} -\gamma\frac{1}{2}\|I'v(t')\|_{\dot H^{1}}^{2} \\ &+\gamma \frac{3}{4} \|I'v(t')\|_{\dot H^{1}} \|I'v(t')\|^{3}_{L^{2}} \Big] exp(\gamma \lambda^{-3} t')dt'. \end{aligned}$$ From Young’s inequality, we have $$\begin{aligned} &\|I'v(T')\|_{\dot H^{1}}^{2}exp(\gamma\lambda^{-3}T') \lesssim \|I'v(0)\|_{\dot H^{1}}^{2} + \frac{1}{\gamma^{2}}\|I'g\|_{L^{\infty}_{T'}\dot H^{1}}^{2}exp(\gamma\lambda^{-3}T') \\ +& \left| \int\limits_{0}^{T'} M(t')dt'\right| + C_{1}\|I'v(T')\|^{6}_{L^{2}}exp(\gamma\lambda^{-3}T') \\ &+C_{1} \int\limits_{0}^{T'} \left(\|I'v(t')\|_{L^{2}}^{6} + \frac{1}{\gamma^{2}}\|I'v(t')\|_{L^{2}}^{4}\|I'g\|_{L^{2}}^{2}\right)\gamma\lambda^{-3} exp(\gamma\lambda^{-3}t')dt'.\end{aligned}$$ From inequality (\[eq11\]), we get $$\begin{aligned} \left(\|I'v(t')\|_{L^{2}}^{6} + \frac{1}{\gamma^{2}}\|I'v(t')\|_{L^{2}}^{4}\|I'g\|_{L^{2}}^{2}\right) \lesssim \|I'v(0)\|_{L^{2}}^{6}exp(-3\gamma\lambda^{-3}t') + \frac{1}{\gamma^{3}}\|I'g\|^{6}_{L^{2}}, \end{aligned}$$ and hence we obtain inequality (\[eq12\]). For the mKdV equation, we only use half of the damping term in $exp(\gamma\lambda^{-3}T')$ compared with the KdV equation.
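As a structural remark (not part of the original argument): the term $M(t)$ vanishes when the smoothing operator $I'$ is replaced by the identity, since the integrand then becomes a perfect $x$-derivative on $\lambda\mathbb{T}$, so $M$ records only the commutator error of $I'$:

```latex
% With G := -\partial_x^2 w - w^3, the second factor is exactly \partial_x G:
\int_{\lambda\mathbb{T}} \bigl(-\partial_{x}^{2}w - w^{3}\bigr)
                         \bigl(-\partial_{x}^{3}w - \partial_{x}(w^{3})\bigr)\,dx
  = \int_{\lambda\mathbb{T}} G\,\partial_{x}G\,dx
  = \frac{1}{2}\int_{\lambda\mathbb{T}} \partial_{x}\bigl(G^{2}\bigr)\,dx
  = 0.
```

This is why the multilinear estimates of the last section only need to control differences of the form $(I'v)^{3} - I'v^{3}$.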
We need the following Leibniz-rule-type lemma: \[proposition 0\] $$\|f(t)g(x,t)\|_{X^{s,b}} \lesssim \|\hat{f}\|_{L^{1}}\|g\|_{X^{s,b}} + \|f\|_{H^{b}_{t}}\|\langle k \rangle^{s} \tilde{g}\|_{L^{2}_{(dk)_{\lambda}}L^{1}_{d\tau}}.$$ Assume that $\tau = \tau_{1} + \tau_{2}.$ Let $\sigma = \tau - k^{3}, \sigma_{1} = \tau_{1}$ and $\sigma_{2} = \tau_{2} - k^{3}.$ Then $$\begin{aligned} \langle \sigma\rangle^{b} = \langle \tau - k^{3} \rangle^{b} \lesssim \langle \tau_{1} \rangle^{b} + \langle \tau - \tau_{1} - k^{3} \rangle^{b}.\end{aligned}$$ Hence $$\begin{aligned} &\langle \sigma \rangle^{b} \langle k \rangle^{s} \mathcal{F}[f(t)g(x,t)] = \langle \sigma \rangle^{b} \langle k \rangle^{s} \int_{\tau_{1}} \hat{f}(\tau_{1})\tilde{g}(k, \tau - \tau_{1}) d\tau_{1} , \\ &\lesssim \langle k \rangle^{s} \int_{\tau_{1}} \langle \tau_{1} \rangle^{b} |\hat{f}(\tau_{1})\tilde{g}(k, \tau - \tau_{1})| + \langle \tau -\tau_{1} - k^{3} \rangle^{b} |\hat{f}(\tau_{1})\tilde{g}(k, \tau - \tau_{1})| d\tau_{1}.\end{aligned}$$ After summing over $k$ and taking the $L^{2}$ norm, we get $$\|\langle \sigma \rangle^{b} \langle k \rangle^{s} \mathcal{F}[f(t)g(x,t)]\|_{L^{2}_{k,\tau}} \leq \|\langle k \rangle^{s} \langle \tau_{1} \rangle^{b}\hat{f} * \tilde{g}\|_{L^{2}_{t,k}} + \|\langle k \rangle^{s} \langle \tau - \tau_{1} - k^{3} \rangle^{b}\hat{f} * \tilde{g}\|_{L^{2}_{t,k}}.$$ From Young’s inequality in $\tau$, we obtain $$\|\langle k \rangle^{s} \langle \tau_{1} \rangle^{b}\hat{f} * \tilde{g}\|_{L^{2}_{\tau}} + \|\langle k \rangle^{s} \langle \tau - \tau_{1} - k^{3} \rangle^{b}\hat{f} * \tilde{g}\|_{L^{2}_{\tau}} \lesssim \|\hat{f}\|_{L^{1}}\|g\|_{X^{s,b}} + \|f\|_{H^{b}_{t}}\|\langle k \rangle^{s} \tilde{g}\|_{L^{2}_{(dk)_{\lambda}}L^{1}_{d\tau}}.$$ Similar to [@T Proposition 3.1], we finally have the following proposition: \[proposition 1\] Let $\frac{1}{2} \leq s < 1$.
Let $T > 0$ be given, let $\epsilon > 0$ be sufficiently small and let $u$ be a solution of the IVP (\[intro1\])-(\[intro2\]) on $[0,T].$ Assume that $N^{\frac{1}{2}(1 - \epsilon)} \geq \gamma, N^{\epsilon -} \geq C_{6}T$ and $$\begin{aligned} (\|u(0)\|_{L^{2}}^{2} + \frac{1}{\gamma^{2}} \|f\|_{L^{2}}^{2}exp(\gamma T)) \leq N^{\frac{1}{6}(1 - \epsilon)} C_{3} \\ (\|Iu(0)\|_{\dot H^{1}}^{2} + \frac{1}{\gamma^{2}} \|If\|_{\dot H^{1}}^{2}exp(\gamma T)) \leq N^{\frac{1}{6}(1 - \epsilon)} C_{3}.\end{aligned}$$ Then, we have $$\begin{aligned} &\|Iu(T)\|_{L^{2}}^{2}exp(\gamma T) \leq C_{4}(\|u(0)\|_{L^{2}}^{2} + \frac{1}{\gamma^{2}} \|f\|_{L^{2}}^{2}exp(\gamma T)), \\ &\|Iu(T)\|_{\dot H^{1}}^{2}exp(\gamma T) \leq C_{4}(\|Iu(0)\|_{\dot H^{1}}^{2} + \|u(0)\|_{L^{2}}^{6}+ \frac{1}{\gamma^{4}}\|f\|_{L^{2}}^{6}exp(\gamma T)\\ & + \frac{1}{\gamma^{2}} \|If\|_{\dot H^{1}}^{2}exp(\gamma T)) + (\|Iu(0)\|_{H^{1}}^{2} + \frac{1}{\gamma^{2}} \|If\|_{H^{1}}^{2}exp(\gamma T)),\end{aligned}$$ where $C_{4}$ is independent of $N$ and $T.$ Without loss of generality, we can replace $f$ with $F$ as $F$ is just a translation of $f.$ We can rescale Proposition \[proposition 1\] by taking $\lambda = N^{\frac{1}{6}(1-\epsilon)}, N' = \frac{N}{\lambda}, T' = \lambda^{3} T.$ Also, we note that $\|I'v\|^{2}_{\dot H^{1}} = \lambda^{-3}\|Iu\|^{2}_{\dot H^{1}}, \|I'g \|^{2}_{L^{\infty}_{T'}\dot H^{1}} = \lambda^{-3}\|If\|^{2}_{\dot H^{1}}.$ We rewrite Proposition \[proposition 1\] as follows: \[proposition 2\] Let $\frac{1}{2} \leq s < 1$, let $T' > 0$ be given and let $v$ be a solution of the IVP (\[rescaled1\])-(\[rescaled2\]) on $[0,T'].$ Assume that $\lambda^{3} \geq \gamma$ and that for suitable $C_{6},C_{3} >0,$ $ N'^{-}\lambda^{0-} \geq C_{6}T'\lambda^{2}$ and $$\begin{aligned} (\|v(0)\|_{L^{2}}^{2} + \frac{1}{\gamma^{2}} \|g\|_{L^{2}}^{2}exp(\gamma \lambda^{-3} T')) \leq C_{3} \\ (\|I'v(0)\|_{\dot H^{1}}^{2} + \frac{1}{\gamma^{2}} \|I'g\|_{L^{\infty}_{T'}\dot H^{1}}^{2}exp(\gamma \lambda^{-3} T')) \leq
C_{3}.\end{aligned}$$ Then, we have $$\begin{aligned} &\|I'v(T')\|_{L^{2}}^{2}exp(\gamma \lambda^{-3} T') \leq C_{4} (\|v(0)\|_{L^{2}}^{2} + \frac{1}{\gamma^{2}} \|g\|_{L^{2}}^{2}exp(\gamma \lambda^{-3} T')) \\ &\|I'v(T')\|_{\dot H^{1}}^{2}exp(\gamma \lambda^{-3} T') \leq C_{4} (\|I'v(0)\|_{\dot H^{1}}^{2} + \frac{1}{\gamma^{2}} \|I'g\|_{L^{\infty}_{T'}\dot H^{1}}^{2}exp(\gamma \lambda^{-3} T') \\ & \hspace{40mm}+ \|v(0)\|_{L^{2}}^{6} + \frac{1}{\gamma^{4}} \|g\|_{L^{\infty}_{T'} L^{2}}^{6}exp(\gamma \lambda^{-3} T')) \\ &\hspace{40mm}+ \lambda^{-2}(\|I'v(0)\|_{H^{1}}^{2} + \frac{1}{\gamma^{2}}\|I'g\|_{H^{1}}^{2}exp(\gamma\lambda^{-3}T')),\end{aligned}$$ where $C_{4}$ is independent of $N',T'$ and $\lambda.$ \[sobolev remark\] Because of the inhomogeneity of the non-homogeneous Sobolev space, we cannot rescale Proposition \[proposition 1\] into Proposition \[proposition 2\] with a rescaling factor of order $\lambda^{-3}$ as for the KdV equation. Moreover, if we work in the homogeneous Sobolev space, the trilinear and multilinear estimates may fail; for a counterexample, see the appendix. Therefore, we work in the non-homogeneous Sobolev space with the rescaling estimate $\|I'v\|_{H^{1}}^{2} \lesssim \lambda^{-1}\|Iu\|_{H^{1}}^{2}.$ We estimate the $L^{2}$ and $\dot H^{1}$ parts separately to prove Proposition \[proposition 2\] in $H^{1}$. Although separate estimates are not strictly necessary for our problem, we carry them out for the sake of a more general proof.
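To make Remark \[sobolev remark\] concrete, here is the bookkeeping behind the stated rescaling estimates. This is a sketch assuming the standard mKdV rescaling $v(x,t) = \lambda^{-1}u(x/\lambda, t/\lambda^{3})$ on $[0,\lambda]$; the explicit form of (\[rescaled1\]) is not restated in this section, so this normalization is an assumption:

```latex
% The L^2 and homogeneous H^1 parts scale differently:
\|v(t)\|_{L^{2}([0,\lambda])}^{2}
   = \lambda^{-2}\int_{0}^{\lambda} |u(x/\lambda)|^{2}\,dx
   = \lambda^{-1}\|u\|_{L^{2}}^{2},
\qquad
\|v(t)\|_{\dot H^{1}([0,\lambda])}^{2}
   = \lambda^{-4}\int_{0}^{\lambda} |(\partial_{x}u)(x/\lambda)|^{2}\,dx
   = \lambda^{-3}\|u\|_{\dot H^{1}}^{2}.
```

Summing the two parts gives only $\|v\|_{H^{1}}^{2} \leq \lambda^{-1}\|u\|_{H^{1}}^{2}$: the $L^{2}$ part decays like $\lambda^{-1}$ rather than $\lambda^{-3}$, which is exactly why the inhomogeneous space forces the weaker factor in $\|I'v\|_{H^{1}}^{2} \lesssim \lambda^{-1}\|Iu\|_{H^{1}}^{2}$.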
Take $\delta > 0$ and $j \in \mathbb{N}$ such that $\delta j = T'$ where $\delta \sim (\|I'v(0)\|_{H^{1}} + \|I'g\|_{L^{\infty}_{T'}H^{1}} + \gamma \lambda^{-3})^{-\alpha}, \alpha >0.$ For $0 \leq m \leq j,\hspace{1mm} m \in \mathbb{Z},$ we prove $$\begin{aligned} &\|I'v(m\delta)\|^2_{\dot H^1}exp(\gamma \lambda^{-3}m\delta)\notag \\ \leq& 2C_1 (\|I'v(0)\|^2_{\dot H^1}+\|v(0)\|^6_{L^2}+\frac{1}{\gamma^2} \|I'g\|^2_{\dot H^1}exp(\gamma \lambda^{-3}m\delta)\notag \\ & +\frac{1}{\gamma^4}\|g\|^6_{L^2}exp(\gamma \lambda^{-3}m\delta))+ \lambda^{-2}(\|I'v(0)\|_{H^{1}}^{2} + \frac{1}{\gamma^{2}}\|I'g\|_{H^{1}}^{2}exp(\gamma\lambda^{-3}T'))\notag \\ \leq& 4C_1C_3 + \lambda^{-2}(\|I'v(0)\|_{H^{1}}^{2} + \frac{1}{\gamma^{2}}\|I'g\|_{H^{1}}^{2}exp(\gamma\lambda^{-3}T')) \label{prop prio 2} \end{aligned}$$ by induction. For $m=0$, (\[prop prio 2\]) holds trivially. We assume that it holds for $m=l$, where $0\leq l\leq j-1$. From Lemma \[Energy 2 estimate\], we have $$\begin{aligned} &\|I'v((l+1)\delta)\|^2_{\dot H^1}exp(\gamma \lambda^{-3}(l+1)\delta)\leq C_1 (\|I'v(0)\|^2_{\dot H^1}+\|v(0)\|^6_{L^2} \nonumber\\ & +\frac{1}{\gamma^2} \|I'g\|^2_{\dot H^1}exp(\gamma \lambda^{-3}(l+1)\delta) \ +\frac{1}{\gamma^4}\|g\|^6_{L^2}exp(\gamma \lambda^{-3}(l+1)\delta))+\left|\int^{(l+1)\delta}_0M(t)dt\right|. \nonumber\end{aligned}$$ Therefore, it suffices to prove $$\begin{aligned} \left| \int \limits_{0}^{(l+1)\delta} M(t)dt \right| \lesssim \lambda^{-2} (\|I'v(0)\|^{2}_{H^{1}} + \frac{1}{\gamma^{2}}\|I'g\|^{2}_{L^{\infty}_{(l+1)\delta}H^{1}} exp(\gamma \lambda^{-3}(l+1)\delta)).\end{aligned}$$ If $\gamma = 0$ and $f = 0$ in Equation (\[Energy\]), then we have the following estimate: \[Energy 1 estimate\] $$\left| \int\limits_{0}^{T'} M(t)dt \right| \lesssim \lambda^{0+} N'^{-1+} \|Iu\|^{4}_{X^{1,\frac{1}{2}}_{T'}} + \lambda^{0+} N'^{-2}\|Iu\|^{6}_{X^{1,\frac{1}{2}}_{T'}}.$$ We prove Lemma \[Energy 1 estimate\] in the last section.\ Lemma \[Energy 1 estimate\] implies that $$\begin{aligned} \left|
\int\limits_{0}^{(l+1)\delta} M(t)dt \right| \sim& \sum\limits_{k=0}^{l}\left| \int\limits_{k\delta}^{(k+1)\delta}M(t) dt\right|, \\ \lesssim& (N')^{-1+}\lambda^{0+} \sum\limits_{k=0}^{l}\|exp(\frac{1}{4}\gamma \lambda^{-3}t)I'v\|_{X^{1,\frac{1}{2}}_{([0,\lambda] \times [k\delta, (k+1)\delta])}}^{4}\\ &+ (N')^{-2}\lambda^{0+} \sum\limits_{k=0}^{l}\|exp(\frac{1}{6}\gamma \lambda^{-3}t)I'v\|_{X^{1,\frac{1}{2}}_{([0,\lambda] \times [k\delta, (k+1)\delta])}}^{6}.\end{aligned}$$ From Proposition \[proposition 0\], we obtain $$\begin{aligned} &\left|\int\limits_{0}^{(l+1)\delta} M(t)dt \right| \\ \lesssim & (N')^{-1+}\lambda^{0+} \sum\limits_{k=0}^{l} \|\widehat{exp(\gamma \lambda^{-3}t)}\|_{L^{1}_{[k\delta,(k+1)\delta]}} \|I'v\|^{4}_{X^{1,\frac{1}{2}}_{([0,\lambda] \times [k\delta, (k+1)\delta])}} \\ &+ (N')^{-1+}\lambda^{0+} \sum\limits_{k=0}^{l} \|exp(\gamma \lambda^{-3} t)\|_{H^{\frac{1}{2}}_{[k\delta, (k+1)\delta]}} \|\langle k \rangle^{s} \tilde{I'v}\|^{4}_{L^{2}_{[0,\lambda]}L^{1}_{[k\delta,(k+1)\delta]}} \\ &+ (N')^{-2}\lambda^{0+} \sum\limits_{k=0}^{l} \|\widehat{exp(\gamma \lambda^{-3}t)}\|_{L^{1}_{[k\delta,(k+1)\delta]}} \|I'v\|^{6}_{X^{1,\frac{1}{2}}_{([0,\lambda] \times [k\delta, (k+1)\delta])}} \\ &+ (N')^{-2}\lambda^{0+} \sum\limits_{k=0}^{l} \|exp(\gamma \lambda^{-3} t)\|_{H^{\frac{1}{2}}_{[k\delta, (k+1)\delta]}} \|\langle k \rangle^{s} \tilde{I'v}\|^{6}_{L^{2}_{[0,\lambda]}L^{1}_{[k\delta,(k+1)\delta]}}.\end{aligned}$$ From simple computations, we can verify the bounds $$\max\limits_{0 \leq k \leq l} \|\widehat{exp(\gamma \lambda^{-3}t)}\|_{L^{1}_{[k\delta,(k+1)\delta]}} \lesssim C\hspace{.5mm} exp(\gamma \lambda^{-3}(l+1)\delta)$$ and $$\max\limits_{0 \leq k \leq l} \|exp(\gamma \lambda^{-3} t)\|_{H^{\frac{1}{2}}_{[k\delta, (k+1)\delta]}} \lesssim C \hspace{.5mm} exp(\gamma \lambda^{-3}(l+1)\delta).$$
From the first inequality of Proposition \[local wellposdeness\], we have $$\begin{aligned} \|I'v\|^{4}_{X^{1,\frac{1}{2}}_{([0,\lambda] \times [k\delta, (k+1)\delta])}} + \|\langle \partial_{x} \rangle I'v\|^{4}_{L^{2}_{[0,\lambda]}L^{1}_{[k\delta,(k+1)\delta]}} \lesssim \|I'v(k\delta)\|_{H^{1}_{[0,\lambda]}}^{4} + (\lambda^{-3}\|I'g\|_{L^{\infty}_{(l+1)\delta}H^{1}_{[0,\lambda]}})^{4}. \label{Priori 5}\\ \|I'v\|^{6}_{X^{1,\frac{1}{2}}_{([0,\lambda] \times [k\delta, (k+1)\delta])}} + \|\langle \partial_{x} \rangle I'v\|^{6}_{L^{2}_{[0,\lambda]}L^{1}_{[k\delta,(k+1)\delta]}} \lesssim \|I'v(k\delta)\|_{H^{1}_{[0,\lambda]}}^{6} + (\lambda^{-3}\|I'g\|_{L^{\infty}_{(l+1)\delta}H^{1}_{[0,\lambda]}})^{6}. \label{Priori 6}\end{aligned}$$ Therefore, we have $$\begin{aligned} \label{Lambda 3and4 2} \left|\int\limits_{0}^{(l+1)\delta}M(t)dt \right| \lesssim (C_{6}\lambda^{2}T')^{-1} \sum\limits_{k=0}^{l}(\|I'v(k\delta)\|_{H^{1}_{[0,\lambda]}}^{4} + (\lambda^{-3}\|I'g\|_{L^{\infty}_{(l+1)\delta}H^{1}_{[0,\lambda]}})^{4}exp(\gamma\lambda^{-3}(l+1)\delta)).\end{aligned}$$ From inequalities (\[Priori 5\]),(\[Priori 6\]) and the assumption in Proposition \[proposition 2\], we get $$\begin{aligned} \left|\int\limits_{0}^{(l+1)\delta} M(t)dt\right| \lesssim 2(C_{6}\lambda^{2} T')^{-1} C_{3}(C_{1}^{2} + C_{1}^{3})(l+1)(\|I'v(0)\|^{2}_{H^{1}} \\ +\frac{1}{\gamma^{2}}\|I'g\|^{2}_{L^{\infty}_{T'}H^{1}_{[0,\lambda]}}exp(2\gamma \lambda^{-3}(l+1)\delta)).\end{aligned}$$ We choose $C_{6}$ sufficiently large such that $2(C_{6}T')^{-1} C_{3}(C_{1}^{2} + C_{1}^{3})(l+1) \leq 2(C_{6}\delta)^{-1}C_{3}(C_{1}^{2} + C_{1}^{3}) \ll 1,$ which leads to Proposition \[proposition 2\]. Proof of Theorem \[intro theorem\] ================================== In this section, we describe the proof of Theorem \[intro theorem\]. Let $0 < \epsilon \ll 12s-11$ be fixed.
We choose $T_{1} > 0$ so that $$\begin{aligned} exp( \gamma T_{1}) > &(\|u_{0}\|^{2}_{H^{s}}+\|u_0\|^6_{L^2})(\frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2})^{-1} \mathop{max} \bigg\lbrace \gamma^{\frac{4(1-s)}{1-\epsilon}}, (C_{6} T_{1})^{\frac{2(1-s)}{\epsilon-}},\notag \\ & \left(\frac{C_{3}}{2}\|u_{0}\|^{-2}_{H^{s}}\right)^{\frac{12(s-1)}{(1-\epsilon)+12(s-1)}} , \left( 2 C_{3}^{-1} \gamma^{-2} \|f\|^{2}_{H^{1}}exp(\gamma T_{1}) \right)^{\frac{6(-2s+2)}{1-\epsilon}} \bigg\rbrace , \label{main11}\end{aligned}$$ which is possible as $\frac{6(-2s+2)}{1-\epsilon} < 1$. $T_{1}$ depends only on $\|u_{0}\|_{H^{s}}$, $\|f\|_{H^{1}}$ and $\gamma$. Set $$\begin{aligned} \label{main12} N = \mathop{max} \bigg\lbrace \gamma^{\frac{2}{1-\epsilon}},(C_{6} T_{1})^{\frac{1}{\epsilon-}}, \left( \frac{C_{3}}{2}\|u_{0}\|^{-2}_{H^{s}}\right)^{\frac{-6}{12(1-s)+(1-\epsilon)}} , \left( 2 C_{2}^{-1} \gamma^{-2} \|f\|^{2}_{H^{1}} e^{2\gamma T_{1}} \right)^{\frac{6}{1-\epsilon}} \bigg\rbrace.\end{aligned}$$ From the choice of $T_1$ and $N$, we know $$N^{\frac{1-\epsilon}{2}} \geq \gamma, \ \ \ \ \ \ N^{\epsilon-} \geq C_{6}T_{1},$$ and $$\|Iu_{0}\|_{H^{1}}^{2} \leq N^{2-2s} \|u_{0}\|_{H^{s}}^{2} \leq \dfrac{C_{3}}{2} N^{\frac{1-\epsilon}{6}-},$$ $$\gamma^{-2} \|If\|^{2}_{H^{1}} e^{2\gamma T_{1}}\leq \dfrac{C_{3}}{2} N^{\frac{1-\epsilon}{6}-}.$$ Hence, from Proposition \[proposition 1\], we obtain $$\begin{aligned} \|u(T_{1})\|_{H^{s}}^{2} \leq& \|Iu(T_{1})\|_{H^{1}}^{2}\notag \\ \leq &C_3 (\|Iu_0\|^2_{ H^1}exp(-\gamma T_{1})+\|u_0\|^6_{L^2}exp(-\gamma T_{1})+\frac{1}{\gamma^2} \|If\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2})\notag \\ \leq &C_3 (N^{2(1-s)}(\|u_0\|^2_{ H^s}exp(-\gamma T_{1})+\|u_0\|^6_{L^2}exp(-\gamma T_{1}))+\frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2}).\nonumber \end{aligned}$$ From (\[main11\]) and (\[main12\]), we get $$N^{2(1-s)}exp(-\gamma T_1) (\|u_0\|^2_{ H^s}+\|u_0\|^6_{L^2})< \frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2}$$ which
helps us give the bound $$\|u(T_{1})\|_{H^{s}}^{2} \leq 2C_{3}( \frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2})< K_{1},$$ where $K_{1}$ depends only on $\|f\|_{ H^1}$ and $\gamma$. Next, one can fix $T_{2} > 0$ and solve the mKdV equation on the time interval $[T_{1},T_{1} + T_{2}]$ with the initial data replaced by $u(T_{1}).$ Let $K_{2} > 0$ be sufficiently large such that $$\begin{aligned} K_{2} exp( \gamma t) > &(\|u_{0}\|^{2}_{H^{s}}+\|u_0\|^6_{L^2})(\frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2})^{-1} \mathop{max} \bigg\lbrace \gamma^{\frac{4(1-s)}{1-\epsilon}}, (C_{6} t)^{\frac{2(1-s)}{\epsilon-}},\notag \\ & \left((C_{3})^{-1}2 K_{1}\right)^{\frac{12(s-1)}{(1-\epsilon)+12(s-1)}} , \left( 2 C_{3}^{-1} \gamma^{-2} \|f\|^{2}_{H^{1}}exp(\gamma T_{1}) \right)^{\frac{6(-2s+2)}{1-\epsilon}} \bigg\rbrace , \label{TH1.2 3}\end{aligned}$$ for any $t > 0$. Set $N^{2(1-s)} = K_{2}exp(\gamma T_{2})$; then inequality \[TH1.2 3\] verifies the assumptions in Proposition \[proposition 1\] and hence we obtain $$\begin{aligned} \|Iu(T_{1} + T_{2})\|_{H^{1}}^{2} \leq &C_4 (N^{2(1-s)}\|u(T_1)\|^2_{ H^s}exp(-\gamma T_2)+\|u(T_1)\|^6_{L^2}exp(-\gamma T_2)+\frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2}) \notag \\ \leq &C_4 (K_1K_2+K^2_1+\frac{1}{\gamma^2} \|f\|^2_{ H^1}+\frac{1}{\gamma^4}\|f\|^6_{L^2})<K_3.
\nonumber\end{aligned}$$ For $t > T_{1},$ we define the maps $L_{1}(t)$ and $L_{2}(t)$ as $$\widehat{L_{1}(t)u_{0}} = \widehat{S(t)u_{0}}|_{|\zeta| < N_t}, \ \ \ \widehat{L_{2}(t)u_{0}} = \widehat{S(t)u_{0}}|_{|\zeta| > N_t},$$ where $S(t)u_{0} = u(t)$ and $N_t = (K_{2}exp(\gamma(t-T_{1})))^{\frac{1}{2(1-s)}}.$ It is easy to see that for $t > T_{1},$ $$\begin{aligned} \|L_{1}(t)u_{0}\|_{H^{1}}^{2} \leq \|Iu(t)\|_{H^{1}}^{2} &< K_{3}, \notag \\ \|L_{2}(t)u_{0}\|_{H^{s}}^{2} \leq N^{2s-2}\|Iu(t)\|_{H^{1}}^{2} &< K_{2}^{-1} K_{3}exp(-\gamma(t - T_{1})).\nonumber\end{aligned}$$ Hence we obtain Theorem \[intro theorem\] by taking $K = \mathop{max}\lbrace K_{3}^{\frac{1}{2}}, K_{2}^{-\frac{1}{2}}K_{3}^{\frac{1}{2}}\rbrace$. Multilinear Estimates ===================== In this section, we prove the $4$-linear and $6$-linear estimates given in Lemma \[Energy 1 estimate\]. For $\gamma =0$ and $g=0$ in (\[Energy\]), we have $$\begin{aligned} \frac{d}{dt} E(I'v) =& \left[\int (-\partial_{x}^{2} I'v - (I'v)^{3})(-\partial_{x}^{3} I'v - \partial_{x} I'v^{3})\right], \\ E(I'v(T)) - E(I'v(0)) =& \int\limits_{0}^{T}\int\limits_{0}^{\lambda}\partial_{x}^{3}I'v[(I'v)^{3} - I'v^{3}]dx dt + \int\limits_{0}^{T}\int\limits_{0}^{\lambda}\partial_{x}(I'v)^{3}[(I'v)^{3} - I'v^{3}]dx dt, \\ =& I'_{1} + I'_{2},\end{aligned}$$ for an arbitrary $T > 0.$ For an $\epsilon >0$ let $w_{j} \in X^{s,\frac{1}{2}}$ such that $w_{j}|_{[0,\lambda]\times[0,T]} = v_{j}$ and $\|v_{j}\|_{X^{s,\frac{1}{2}}_{T}} \leq C\|w_{j}\|_{X^{s,\frac{1}{2}}} \leq C\|v_{j}\|_{X^{s,\frac{1}{2} + \epsilon}_{T}}$ for $1 \leq j \leq 4.$ Let $\eta_{T}(t) = \eta(t/T)$ and let $\tilde{\eta}$ denote the Fourier transform only in $t.$ From Plancherel’s theorem, it suffices to prove the following: $$\begin{aligned} I'_{1} = \int\limits_{\mathbb{R}}\int\limits_{0}^{\lambda}\eta(t)\partial_{x}^{3}I'w[(I'w)^{3} - I'w^{3}]dx dt, \lesssim \int\limits_{\substack{k_{1} + k_{2} + k_{3} + k_{4} = 0 \\ (k_{1} + k_{2})(k_{2} + k_{3})(k_{3} + 
k_{1}) \neq 0}} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4})\\ \Bigl|\langle k_{1} \rangle^{3}(\widetilde{I'w_{1}})\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) (\widetilde{I'w_{2}})(\widetilde{I'w_{3}})(\widetilde{I'w_{4}})\Bigr| (dk_{i})_{\lambda}d\tau_{i} \\ + \int\limits_{\Omega} \int \Biggl|\langle k_{1} \rangle^{3}(\widetilde{I'w_{1}})\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right)(\widetilde{I'w_{2}})(\widetilde{I'w_{3}})(\widetilde{I'w_{4}})\Biggr| (dk_{i})_{\lambda}d\tau_{i} = I_{11} + I_{12},\end{aligned}$$ where $\Omega = \lbrace k_{1} + k_{2} + k_{3} + k_{4} = 0 :\hspace{1.5mm} |k_{1} + k_{2}| \neq 0,\hspace{1.5mm} |k_{2} + k_{3}|\,|k_{3} + k_{1}| = 0 \rbrace$ and $w_{i} = w_{i}(k_{i},\tau_{i}).$ Let $w = w_{L} + w_{H}$ where $supp \hspace{1mm}\hat{w}_{L}(k) \subset \{|k| \ll N'\}$ and $supp \hspace{1mm}\hat{w}_{H}(k) \subset \{|k| \gtrsim N'\}.$ From a dyadic partition of $|k_{i}|,$ we let $|k_{i}| \sim N'_{i}.$ Let $\sigma_{i} = \tau_{i} - 4\pi^{2}k_{i}^{3}$ for $1 \leq i \leq 4.$ We can assume that $\langle \sigma_{4} \rangle = \max \{\langle\sigma_{i} \rangle,\hspace{1mm} 1 \leq i \leq 4 \rbrace$ as all other cases can be treated in the same way. Let $*$ be the region of integration for $I_{11}$.
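As an illustrative aside (not part of the proof), the splitting $w = w_{L} + w_{H}$ is a sharp Fourier cutoff at the threshold $N'$; for a periodic function sampled on a grid it can be sketched numerically as follows, where the grid size and the cutoff `N_cut` are arbitrary choices:

```python
import numpy as np

def split_low_high(w, N_cut):
    """Sharp Fourier cutoff: w = w_L + w_H with the low part supported on
    modes |k| < N_cut and the high part on |k| >= N_cut (sketch only)."""
    k = np.fft.fftfreq(len(w), d=1.0 / len(w))  # integer mode numbers
    w_hat = np.fft.fft(w)
    low = np.abs(k) < N_cut
    w_L = np.fft.ifft(np.where(low, w_hat, 0)).real
    w_H = np.fft.ifft(np.where(low, 0, w_hat)).real
    return w_L, w_H

# a low mode plus a high mode; the cutoff separates them exactly
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
w = np.sin(x) + 0.3 * np.sin(40 * x)
w_L, w_H = split_low_high(w, N_cut=10)
assert np.allclose(w_L + w_H, w)
```

By construction the two projections are complementary, so the pieces always sum back to the original samples, mirroring $w = w_{L} + w_{H}$ above.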
After substituting $w = w_{L} + w_{H},$ we can write $I_{11}$ as a sum of the following three integrals: - $$\begin{aligned} &\int\limits_{*}^{} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4}) \Bigg|\langle k_{1} \rangle^{3}(\widetilde{I'w_{H}}) \nonumber \\ &\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) (\widetilde{I'w_{L}})(\widetilde{I'w_{L}})(\widetilde{I'w_{H}})\Bigg| (dk_{i})_{\lambda}d\tau_{i} .\end{aligned}$$ - $$\begin{aligned} &\int\limits_{*}^{} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4}) \Bigg|\langle k_{1} \rangle^{3}(\widetilde{I'w_{H}})\nonumber \\ &\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) (\widetilde{I'w_{L}})(\widetilde{I'w_{H}})(\widetilde{I'w_{H}})\Bigg| (dk_{i})_{\lambda}d\tau_{i} .\end{aligned}$$ - $$\begin{aligned} &\int\limits_{*}^{} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4}) \Bigg|\langle k_{1} \rangle^{3}(\widetilde{I'w_{H}})\nonumber \\ &\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) (\widetilde{I'w_{H}})(\widetilde{I'w_{H}})(\widetilde{I'w_{H}})\Bigg| (dk_{i})_{\lambda}d\tau_{i}.\end{aligned}$$ We omit the other cases as they follow in a similar manner. For *Integral 1*, we have $|k_{1}| \sim |k_{4}| \gtrsim N'$ and $|k_{2}| \sim |k_{3}| \ll N'.$ Hence, by using the mean value theorem, we get $$\begin{aligned} \left| \left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) \right| \lesssim \frac{|k_{2}| + |k_{3}|}{|k_{4}|} .
\end{aligned}$$ For *Integral 1*, we get $$\begin{aligned} \textit{Integral 1} \lesssim N_{4}^{-1+ 2\epsilon} \int\limits_{*}^{} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4})(\langle k_{1} \rangle \widetilde{I'w_{H}} \langle \sigma \rangle^{\frac{1}{2} })\bigg[\langle k_{1} \rangle \lbrace (|k_{2}|\widetilde{I'w_{L}})(\widetilde{I'w_{L}}) + \\ (\widetilde{I'w_{L}})(|k_{3}|\widetilde{I'w_{L}}) \rbrace(\langle k_{1} \rangle \widetilde{I'w_{H}} \langle \sigma \rangle^{-\frac{1}{2} }) \bigg].\end{aligned}$$ Plancherel’s theorem, Schwarz’s inequality and Corollary \[TL corollary\](1) imply $$\begin{aligned} \textit{Integral 1} &\lesssim\lambda^{0+} N'^{-1+ 2\epsilon} \|I'w_{H}\|_{X^{1,\frac{1}{2}}} \|I'w_{L}\|_{X^{1,\frac{1}{2}}} (N_{3})^{-\frac{1}{2}}\|I'w_{L}\|_{X^{1,\frac{1}{2}}} \|I'w_{H}\|_{X^{1,\frac{1}{2}}}, \\ &\lesssim\lambda^{0+} N'^{-1+ 2\epsilon} \|I'w\|^{4}_{X^{1,\frac{1}{2}}}.\end{aligned}$$ Note that we neglect $(N_{3})^{-\frac{1}{2}}$ as it does not contribute to the decay.
From the given conditions, we have $|k_{1}| \sim |k_{4}|\gg |k_{3}| \gtrsim N'$ and $|k_{2}| \ll N'.$ Also, the definition of $m$ implies $m(k_{2}) \sim 1.$ Therefore, $$\begin{aligned} \left| \left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) \right| &\lesssim \frac{m(k_{1})}{m(k_{2})m({k_{3}})m({k_{4}})} \\ &\sim \frac{1}{m(k_{3})} \\ & \lesssim N'^{-1+s}|k_{3}|^{1-s}\\ & \lesssim N'^{-1}|k_{3}|.\end{aligned}$$ For *Integral 2*, we get $$\begin{aligned} &\textit{Integral 2} \\ &\lesssim N'^{-1+ 2\epsilon} \int\limits_{*}^{} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + \tau_{4})(\langle k_{1} \rangle \widetilde{I'w_{H}} \langle \sigma \rangle^{\frac{1}{2} })\bigg[ \langle k_{1} \rangle ( \widetilde{I'w_{L}})(|k_{3}| \widetilde{I'w_{H}})(\langle k_{1} \rangle \widetilde{I'w_{H}} \langle \sigma \rangle^{-\frac{1}{2} }) \bigg].\end{aligned}$$ From Plancherel’s theorem, Schwarz’s inequality and Corollary \[TL corollary\](2), we have $$\begin{aligned} \textit{Integral 2} &\lesssim N'^{-1+ 2\epsilon} N^{-\frac{1}{2}}_{2} \|I'w_{L}\|_{X^{1,\frac{1}{2}}} \|I'w_{H}\|_{X^{1,\frac{1}{2}}} \|I'w_{H}\|_{X^{1,\frac{1}{2}}} \|I'w_{L}\|_{X^{1,\frac{1}{2}}} \\ &\lesssim N'^{-1+2\epsilon} \|I'w\|^{4}_{X^{1,\frac{1}{2}}}.\end{aligned}$$ Clearly, we have $|k_{1}|\sim |k_{2}|\sim |k_{3}|\sim |k_{4}| \gtrsim N'.$ Hence, from the definition of $m,$ we have $$\begin{aligned} \left| \left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) \right| &\lesssim \frac{m(k_{1})}{m(k_{2})m({k_{3}})m({k_{4}})} \\ &\sim \frac{N'^{2s-2}|k_{1}|^{s-1}}{|k_{2}|^{s-1} |k_{3}|^{s-1} |k_{4}|^{s-1}} |k_{4}| |k_{4}|^{-1} \\ & \lesssim N'^{-2+2s} |k_{2}|^{1-s} |k_{3}|^{1-s} |k_{4}|^{1-s} |k_{1}|^{s-1} |k_{4}||k_{4}|^{-1}\\ & \lesssim N'^{-1}|k_{4}|,\end{aligned}$$ for $1/2 \leq s <1.$ Therefore, for *Integral 3*, we get $$\begin{aligned} &\textit{Integral 3} \\ &\lesssim N'^{-1+ 2\epsilon} \int\limits_{*}^{} \int\tilde{\eta}(\tau_{1} + \tau_{2} + \tau_{3} + 
\tau_{4})(\langle k_{1} \rangle \widetilde{I'w_{H}} \langle \sigma \rangle^{\frac{1}{2} })\bigg[ (\langle k_{1} \rangle \widetilde{I'w_{H}})(\langle k_{1} \rangle \widetilde{I'w_{H}})( |k_{4}| \widetilde{I'w_{H}} \langle \sigma \rangle^{-\frac{1}{2} }) \bigg].\end{aligned}$$ From Plancherel’s theorem, Schwarz’s inequality and Corollary \[TL corollary\](3), we have $$\begin{aligned} \textit{Integral 3} &\lesssim \lambda^{0+} N'^{-1+ 2\epsilon} \|I'w_{H}\|_{X^{1,\frac{7}{18}+}} \|I'w_{H}\|_{X^{1,\frac{7}{18}+}} \|I'w_{H}\|_{X^{1,\frac{7}{18}+}} \|I'w_{H}\|_{X^{1,\frac{7}{18}+}} \\ &\lesssim\lambda^{0+} N'^{-1+2\epsilon} \|I'w\|^{4}_{X^{1,\frac{1}{2}}}.\end{aligned}$$ \[Symmt\] Note that $$\begin{aligned} \left[ k^{3}_{1}\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) \right]_{sym} = \sum\limits_{j=1}^{4} k_{j}^{3} - \frac{1}{m_{1} m_{2} m_{3} m_{4}} \sum\limits_{j=1}^{4} k_{j}^{3}m_{j}^{2}\end{aligned}$$ (see [@CKSTT02, Section 4] for details). However, even after using symmetrization, we are not able to improve the decay for the above $4$-linear estimate for nonresonant frequencies. Nevertheless, this symmetrization leads to a cancellation in the resonant case. Hence, for the term $I_{11}$, the estimate holds.
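As a quick numerical sanity check (illustrative only; the multiplier parameters `N = 16` and `s = 0.5` are arbitrary choices), one can verify that the symmetrized symbol above vanishes on the resonant set $k_{1} = -k_{4}$, $k_{2} = -k_{3}$, since $m$ is even:

```python
import numpy as np

def m(k, N=16.0, s=0.5):
    """Even multiplier: m(k) = 1 for |k| <= N and (N/|k|)^{1-s} otherwise.
    N and s are arbitrary illustrative choices."""
    k = np.abs(np.asarray(k, dtype=float))
    return np.where(k <= N, 1.0, (N / np.maximum(k, 1e-12)) ** (1.0 - s))

def sym_symbol(k1, k2, k3, k4):
    """Symmetrized symbol: sum_j k_j^3 - (1/(m1 m2 m3 m4)) sum_j k_j^3 m_j^2."""
    ks = np.array([k1, k2, k3, k4], dtype=float)
    return np.sum(ks**3) - np.sum(ks**3 * m(ks) ** 2) / np.prod(m(ks))

# vanishes on the resonant set k1 = -k4, k2 = -k3 (m is even)
for k1, k2 in [(30.0, 5.0), (100.0, -70.0)]:
    assert abs(sym_symbol(k1, k2, -k2, -k1)) < 1e-6
```

For generic nonresonant frequencies the symbol is of course nonzero, which is consistent with the remark that symmetrization alone does not improve the decay there.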
For $I_{12},$ we use the symmetrization as follows, splitting into two cases: - $k_{2} + k_{3} = 0.$ - $k_{1} + k_{3} = 0.$ **Case 1.** Clearly, we have $k_{2} = - k_{3}$ and $k_{1} = - k_{4}.$ Therefore, from Remark \[Symmt\], we have $$\begin{aligned} \left[ k^{3}_{1}\left( 1- \frac{m(k_{2} + k_{3} +k_{4})}{m(k_{2})m({k_{3}})m({k_{4}})} \right) \right]_{sym} = \sum\limits_{j=1}^{4} k_{j}^{3} - \frac{1}{m_{1} m_{2} m_{3} m_{4}} \sum\limits_{j=1}^{4} k_{j}^{3}m_{j}^{2},\end{aligned}$$ which vanishes for $k_{1} = -k_{4}$ and $k_{2} = -k_{3}.$\ **Case 2.** This case is similar to **Case 1.** Now, we consider $I_{2}.$ From the Fourier transform, we get $$\begin{aligned} I_{2} =& \int\limits_{0}^{T}\int\limits_{0}^{\lambda}\partial_{x}(I'v)^{3}[(I'v)^{3} - I'v^{3}]dx dt, \\ \lesssim & \int\limits_{\substack{\sum\limits_{i=1}^{6}k_{i} =0}} \int\limits_{\sum\limits_{i=1}^{6} \tau_{i} = 0} \bigg|\langle k_{1} + k_{2} + k_{3} \rangle(\widetilde{I'v_{1}}) (\widetilde{I'v_{2}}) (\widetilde{I'v_{3}}) \\ &\left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right) (\widetilde{I'v_{4}})(\widetilde{I'v_{5}})(\widetilde{I'v_{6}})\bigg| (dk_{i})_{\lambda}d\tau_{i}. \end{aligned}$$ We may suppose $\langle k_{1} \rangle = \max\lbrace \langle k_{i} \rangle, 1 \leq i \leq 3 \rbrace.$ Putting $v = v_{L} + v_{H},$ we divide the integral $I_{2}$ into the following three integrals: - $$\begin{aligned} \int\limits_{\substack{\sum\limits_{i=1}^{6} k_{i} = 0}} \int\limits_{\sum\limits_{i=1}^{6} \tau_{i} = 0} (\langle k_{1} \rangle \widetilde{I'v_{H}}) (\widetilde{I'v_{L}} + \widetilde{I'v_{H}}) (\widetilde{I'v_{L}} + \widetilde{I'v_{H}}) \left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right) \\(\widetilde{I'v_{L}})(\widetilde{I'v_{L}})(\widetilde{I'v_{H}}) (dk_{i})_{\lambda}d\tau_{i}.\end{aligned}$$ - $$\begin{aligned} \int\limits_{\substack{\sum\limits_{i=1}^{6} k_{i} = 0}} \int\limits_{\sum\limits_{i=1}^{6} \tau_{i} = 0} (\langle k_{1} \rangle \widetilde{I'v_{H}}) 
(\widetilde{I'v_{L}} + \widetilde{I'v_{H}}) (\widetilde{I'v_{L}} + \widetilde{I'v_{H}}) \left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right) \\(\widetilde{I'v_{H}})(\widetilde{I'v_{H}})(\widetilde{I'v_{L}}) (dk_{i})_{\lambda}d\tau_{i}.\end{aligned}$$ - $$\begin{aligned} \int\limits_{\substack{\sum\limits_{i=1}^{6} k_{i} = 0}} \int\limits_{\sum\limits_{i=1}^{6} \tau_{i} = 0} (\langle k_{1} \rangle \widetilde{I'v_{H}}) (\widetilde{I'v_{L}} + \widetilde{I'v_{H}}) (\widetilde{I'v_{L}} + \widetilde{I'v_{H}}) \left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right) \\(\widetilde{I'v_{H}})(\widetilde{I'v_{H}})(\widetilde{I'v_{H}}) (dk_{i})_{\lambda}d\tau_{i}.\end{aligned}$$ Clearly, we have $|k_{4}|,|k_{5}| \ll N'$ and $|k_{6}| \gtrsim N'.$ Hence, the worst condition is $|k_{3}|,|k_{2}| \ll N'$ and $|k_{1}| \gtrsim N'.$ The proof is the same as in $I_{1}.$ From the mean value theorem, we get $$\begin{aligned} \label{second term 1} \left|\left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right)\right| \lesssim \frac{|k_{4}| + |k_{5}|}{|k_{6}|}.\end{aligned}$$ We may assume $\langle \sigma_{1} \rangle = \max \lbrace \langle \sigma_{i} \rangle \hspace{1mm}: \hspace{1mm} 1 \leq i \leq 6\rbrace$ as other cases can be treated in the same way. 
Therefore, $$\label{second term 2} \langle \sigma_{1} \rangle^{ 2\epsilon} = \langle \sigma_{1} \rangle^{3\epsilon} \langle \sigma_{1} \rangle^{-\epsilon} \lesssim \langle \sigma_{1} \rangle^{ 3\epsilon} \langle \sigma_{2} \rangle^{-\frac{\epsilon}{2}} \min\lbrace\langle \sigma_{3} \rangle^{-\frac{\epsilon}{2}}, \langle \sigma_{6} \rangle^{-\frac{\epsilon}{2}} \rbrace.$$ From Plancherel’s theorem, Hölder’s inequality, Proposition \[st1\], Lemma \[Infinity Estimate\] and inequalities (\[second term 1\]) and (\[second term 2\]), we get $$\begin{aligned} \textit{Integral 4} \lesssim N'^{-1}\|\mathcal{F}^{-1} (\langle \sigma \rangle^{3\epsilon}\langle k_{1} \rangle\widetilde{I'v_{H}})\|_{L^{4}_{x,t}} \|\mathcal{F}^{-1} (\langle \sigma_{2} \rangle^{-\frac{\epsilon}{2}} \widetilde{I'v_{L}})\|_{L^{\infty}_{x,t}} \|\mathcal{F}^{-1}( \langle \sigma_{3} \rangle^{-\frac{\epsilon}{2}} \widetilde{I'v_{L}})\|_{L^{\infty}_{x,t}} \\ \| \mathcal{F}^{-1}(\langle k_{4} \rangle \widetilde{I'v_{L}})\|_{L^{4}_{x,t}}\|I'v_{L}\|_{L^{4}_{x,t}}\|I'v_{H}\|_{L^{4}_{x,t}} \\ \lesssim N'^{-2}\|I'v_{H}\|_{X^{1,\frac{1}{3}+4\epsilon}}\|I'v_{L}\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} - \frac{\epsilon}{2}}} \|I'v_{L}\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} - \frac{\epsilon}{2}}} \|I'v_{L}\|_{X^{1, \frac{1}{3} + \epsilon}} \\ \|I'v_{L}\|_{X^{0,\frac{1}{3} +\epsilon}} \|I'v_{H}\|_{X^{1, \frac{1}{3} + \epsilon}}.\end{aligned}$$ We neglect extra derivatives corresponding to $N_{2},N_{3}$ and $N_{5}$ to get $$\textit{Integral 4} \lesssim N'^{-2} \|I'v\|^{6}_{X^{1,\frac{1}{2}}}.$$ Clearly, we have $|k_{4}|,|k_{5}| \gtrsim N'$ and $|k_{6}| \ll N'.$ Hence, the worst condition is $|k_{3}| \ll N'$ and $|k_{1}|,|k_{2}| \gtrsim N'$ as $|k_{1}|$ always has high frequency.
From the definition of $m$, we get $$\begin{aligned} \label{second term 3} \left|\left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right)\right| \lesssim \left| \frac{m(k_{1})}{m(k_{4})m({k_{5}})} \right| \lesssim N'^{-1}N_{5}.\end{aligned}$$ From Plancherel’s theorem, Hölder’s inequality, Proposition \[st1\], Lemma \[Infinity Estimate\] and inequalities (\[second term 2\]) and (\[second term 3\]), we get $$\begin{aligned} \textit{Integral 5} \lesssim N'^{-1}\| \mathcal{F}^{-1} (\langle \sigma \rangle^{3\epsilon}\langle k_{1} \rangle \widetilde{I'v_{H}})\|_{L^{4}_{x,t}} \|I'v_{H}\|_{L^{4}_{x,t}} \| \mathcal{F}^{-1} (\langle \sigma_{3} \rangle^{-\frac{\epsilon}{2}} \widetilde{I'v_{L}})\|_{L^{\infty}_{x,t}} \|I'v_{H}\|_{L^{4}_{x,t}} \\ \| \mathcal{F}^{-1} (\langle k_{5} \rangle \widetilde{I'v_{H}})\|_{L^{4}_{x,t}} \| \mathcal{F}^{-1} (\langle \sigma_{6} \rangle^{-\frac{\epsilon}{2}}\widetilde{I'v_{H}})\|_{L^{4}_{x,t}} \\ \lesssim N'^{-1}\|I'v_{H}\|_{X^{1,\frac{1}{3}+4\epsilon}}\|I'v_{H}\|_{X^{0, \frac{1}{3} + \epsilon}} \|I'v_{L}\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} - \frac{\epsilon}{2}}} \|I'v_{H}\|_{X^{0, \frac{1}{3} + \epsilon}} \\ \|I'v_{H}\|_{X^{1,\frac{1}{3} +\epsilon}}\|I'v_{L}\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} - \frac{\epsilon}{2}}}.\end{aligned}$$ We neglect extra derivatives corresponding to $N_{3}$ and $N_{6}$ to get $$\textit{Integral 5} \lesssim N'^{-3} \|I'v\|^{6}_{X^{1,\frac{1}{2}}}.$$ Clearly, we have $|k_{4}|,|k_{5}|, |k_{6}| \gtrsim N'.$ Hence, the worst condition is $|k_{3}|, |k_{2}| \ll N'$ and $|k_{1}| \gtrsim N'.$
From the definition of $m$, we get $$\begin{aligned} \label{second term 4} \left|\left( 1- \frac{m(k_{4} + k_{5} +k_{6})}{m(k_{4})m({k_{5}})m({k_{6}})} \right)\right| \lesssim \left| \frac{m(k_{1})}{m(k_{4})m(k_{5})m(k_{6})} \right| \lesssim N'^{-2}|k_{5}||k_{6}|.\end{aligned}$$ From Plancherel’s theorem, Hölder’s inequality, Proposition \[st1\], Lemma \[Infinity Estimate\] and inequalities (\[second term 2\]) and (\[second term 4\]), we get $$\begin{aligned} \textit{Integral 6} \lesssim N'^{-2}\| \mathcal{F}^{-1} (\langle \sigma \rangle^{3\epsilon}\langle k_{1} \rangle \widetilde{I'v_{H}})\|_{L^{4}_{x,t}} \| \mathcal{F}^{-1} (\langle \sigma_{2} \rangle^{-\frac{\epsilon}{2}} \widetilde{I'v_{L}})\|_{L^{\infty}_{x,t}} \| \mathcal{F}^{-1} (\langle \sigma_{3} \rangle^{-\frac{\epsilon}{2}} \widetilde{I'v_{L}})\|_{L^{\infty}_{x,t}} \\ \|\mathcal{F}^{-1} (\langle k_{4} \rangle \widetilde{I'v_{H}})\|_{L^{4}_{x,t}} \| \mathcal{F}^{-1} (\langle k_{5}\rangle \widetilde{I'v_{H}})\|_{L^{4}_{x,t}}\|I'v_{H}\|_{L^{4}_{x,t}} \\ \lesssim N'^{-2}\|I'v_{H}\|_{X^{1,\frac{1}{3}+4\epsilon}}\|I'v_{L}\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} - \frac{\epsilon}{2}}} \|I'v_{L}\|_{X^{\frac{1}{2} + \epsilon, \frac{1}{2} - \frac{\epsilon}{2}}} \|I'v_{H}\|_{X^{1, \frac{1}{3} + \epsilon}} \\ \|I'v_{H}\|_{X^{1,\frac{1}{3} +\epsilon}}\|I'v_{H}\|_{X^{0, \frac{1}{3} + \epsilon}}.\end{aligned}$$ We neglect extra derivatives corresponding to $N_{2}$ and $N_{3}$ to get $$\textit{Integral 6} \lesssim N'^{-3 } \|I'v\|^{6}_{X^{1,\frac{1}{2}}}.$$ Note that the $6$-linear term does not depend on the scaling parameter $\lambda$. Appendix {#appendix .unnumbered} ======== The following example was given by Prof. Nobu Kishimoto; it explains why we need to use the inhomogeneous Sobolev norm in place of the homogeneous norm. In fact, for the homogeneous norm, Proposition \[TL Main result\] does not hold.
Define the space $\dot X^{s,\frac{1}{2}}$ via the norm $$\|u\|_{\dot X^{s,\frac{1}{2}}} = \||k|^{s}\langle \tau - 4\pi^{2}k^{3} \rangle^{\frac{1}{2}}\tilde{u}(k,\tau)\|_{L^{2}((dk)_{\lambda},d\tau)}.$$ Assume $\lambda \geq 1$ and $\sqrt{\lambda} \in \mathbb{Z}/\lambda.$ Let $\lambda \mathbb{T} = \mathbb{R}/\lambda \mathbb{Z}.$ We define the functions $v_{1},v_{2},v_{3}$ on $\lambda\mathbb{T} \times \mathbb{R}$ by $$\begin{aligned} \tilde{v}_{1}(k,\tau) &= 1_{[-1,1]}(\tau - 4\pi^{2}k^{3})\cdot1_{\{1/\lambda\}}(k), \\ \tilde{v}_{2}(k,\tau) &= 1_{[-1,1]}(\tau - 4\pi^{2}k^{3})\cdot1_{\{-2/\lambda\}}(k), \\ \tilde{v}_{3}(k,\tau) &= 1_{[-1,1]}(\tau - 4\pi^{2}k^{3})\cdot1_{\{\sqrt{\lambda}\}}(k).\end{aligned}$$ We have $$\begin{aligned} \|v_{1}\|_{\dot X^{s,\frac{1}{2}}} \sim \|v_{2}\|_{\dot X^{s,\frac{1}{2}}} \sim \left(\frac{1}{\lambda}\right)^{s} \lambda^{-\frac{1}{2}} = \lambda^{-s-\frac{1}{2}}, \\ \|v_{3}\|_{\dot X^{s,\frac{1}{2}}} \sim (\sqrt{\lambda})^{s} \lambda^{-\frac{1}{2}} = \lambda^{\frac{s}{2}-\frac{1}{2}}.\end{aligned}$$ We see that $$\begin{aligned} &\left|\tilde{J}[v_{1},v_{2},v_{3}](\sqrt{\lambda}-\frac{1}{\lambda},\tau)\right| \\ &\sim \sqrt{\lambda}\left|\int_{\tau_{1} + \tau_{2} + \tau_{3} = \tau}\int_{\substack{k_{1} + k_{2} + k_{3} = \sqrt{\lambda} - \lambda^{-1} \\ (k_{1} + k_{2})(k_{2} + k_{3})(k_{3} + k_{1}) \neq 0}} \prod\limits_{j=1}^{3} \tilde{v}_{j}(k_{j},\tau_{j})(dk_{1})_{\lambda}(dk_{2})_{\lambda}d\tau_{1}d\tau_{2} \right| \\ &\gtrsim \lambda^{-3/2} 1_{[-1,1]}(\tau -4\pi^{2}(\sqrt{\lambda}-\lambda^{-1})^{3} + 4\pi^{2}M),\end{aligned}$$ where $$M = 3\left(\frac{1}{\lambda} + \frac{-2}{\lambda} \right)\left(\frac{-2}{\lambda} + \sqrt{\lambda} \right)\left(\sqrt{\lambda} + \frac{1}{\lambda} \right),$$ so that $|M| \sim 1.$ Hence, we have $$\|J[v_{1},v_{2},v_{3}]\|_{\dot X^{s,\frac{1}{2}}} \gtrsim \lambda^{-\frac{3}{2}}\cdot (\sqrt{\lambda})^{s}\lambda^{-\frac{1}{2}} =\lambda^{\frac{s}{2} - 2}.$$ Therefore, if the trilinear estimate
$$\|J[v_{1},v_{2},v_{3}]\|_{\dot X^{s,\frac{1}{2}}} \lesssim \lambda^{0+} \|v_{1}\|_{\dot X^{s,\frac{1}{2}}} \|v_{2}\|_{\dot X^{s,\frac{1}{2}}} \|v_{3}\|_{\dot X^{s,\frac{1}{2}}}$$ were true, it would imply that $$\lambda^{\frac{s}{2} - 2} \lesssim (\lambda^{-s-\frac{1}{2}})^{2}\lambda^{\frac{s}{2} - \frac{1}{2}} \hspace{4mm}\Leftrightarrow \hspace{2mm} \lambda^{2s} \lesssim \lambda^{\frac{1}{2}+} \hspace{2mm} (\lambda \geq 1).$$ For large $\lambda,$ this holds only if $s \leq \frac{1}{4}+.$ Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to express his deep gratitude to Professor Yoshio Tsutsumi for giving him valuable suggestions and constant encouragement. The author is also grateful to Professors Kotaro Tsugawa and Nobu Kishimoto and Mr. Minjie Shan for fruitful discussions. Finally, the author is indebted to the referee for his or her valuable remarks. [9]{} Boling Guo and Yongsheng Li, Attractor for Dissipative Klein–Gordon–Schrödinger Equations in $R^{3}$. Journal of Differential Equations 136, no. 2 (1997): 356-377. J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. I. Schrödinger equations, Geom. Funct. Anal. 3 (1993), no. 2, 107–156. MR 1209299, https://doi.org/10.1007/BF01896020 J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. II. The KdV-equation, Geom. Funct. Anal. 3 (1993), no. 3, 209–262. MR 1215780, https://doi.org/10.1007/BF01895688. Chen, Wenxia, Lixin Tian, and Xiaoyan Deng, The global attractor and numerical simulation of a forced weakly damped MKdV equation. Nonlinear Analysis: Real World Applications 10, no. 3 (2009): 1822-1837. Chen, Wen-xia, Li-xin Tian, and Xiao-yan Deng, Global attractor of dissipative MKdV equation \[J\]. Journal of Jiangsu University (Natural Science Edition) 1 (2007): 021. J. Colliander, M. Keel, G. Staffilani, H. 
Takaoka and T. Tao, Resonant decompositions and the I-method for the cubic nonlinear Schrödinger equation in $\mathbb{R}^2$, Discret. Cont. Dyn. Syst., 21 (2008), 665-686. Colliander, James, Markus Keel, Gigliola Staffilani, Hideo Takaoka, and Terence Tao, Sharp global well-posedness for KdV and modified KdV on $\mathbb{R}$ and $\mathbb{T}$. Journal of the American Mathematical Society 16, no. 3 (2003): 705-749. Dlotko, Tomasz, Maria B. Kania and Meihua Yang, Generalized Korteweg–de Vries equation in $H^{1}$. Nonlinear Analysis: Theory, Methods and Applications 71, no. 9 (2009): 3934-3947. J.-M. Ghidaglia, Finite-dimensional behavior for weakly damped driven Schrödinger equations. Ann. Inst. H. Poincaré Anal. Non Linéaire 5 (1988), no. 4, 365–405. J.-M. Ghidaglia, A note on the strong convergence towards attractors of damped forced KdV equations. J. Differential Equations 110 (1994), no. 2, 356–359. Jean-Michel Ghidaglia, Weakly damped forced Korteweg-de Vries equations behave as a finite-dimensional dynamical system in the long time, Journal of Differential Equations, 74 (1988), no. 2, 369-390. Goubet, Olivier, Regularity of the attractor for a weakly damped nonlinear Schrödinger equation. Applicable Analysis 60, no. 1-2 (1996): 99-119. Haraux, Alain, Two remarks on dissipative hyperbolic problems. Research Notes in Mathematics 122 (1985): 161-179. Kato, Tosio. Quasi-linear equations of evolution, with applications to partial differential equations. In Spectral theory and differential equations, pp. 25-70. Springer, Berlin, Heidelberg, 1975. Lu, Kening, and Bixiang Wang, Global attractors for the Klein–Gordon–Schrödinger equation in unbounded domains. Journal of Differential Equations 170, no. 2 (2001): 281-316. Olivier Goubet, Luc Molinet, Global attractor for weakly damped Nonlinear Schrödinger equations in $L^{2}$. Nonlinear Analysis Theory Methods and Applications, Elsevier, 2009, 71, pp. 317-320, hal-00421278. Robert M. 
Miura, Korteweg-de Vries equation and generalizations. I. A remarkable explicit nonlinear transformation, J. Mathematical Phys. 9 (1968), 1202–1204. MR 0252825, https://doi.org/10.1063/1.1664700 Robert M. Miura, Clifford S. Gardner, and Martin D. Kruskal, Korteweg-de Vries equation and generalizations. II. Existence of conservation laws and constants of motion, J. Mathematical Phys. 9 (1968), 1204–1209. MR 0252826, https://doi.org/10.1063/1.1664701. Robert M. Miura, The Korteweg-de Vries equation: a survey of results, SIAM Rev. 18 (1976), no. 3, 412–459. MR 0404890, https://doi.org/10.1137/1018076. Robert M. Miura, Errata, The Korteweg-de Vries equation: a survey of results (SIAM Rev. 18 (1976), no. 3, 412–459), SIAM Rev. 19 (1977), no. 4, vi. MR 0467039, https://doi.org/10.1137/1019101. Miyaji, Tomoyuki, and Yoshio Tsutsumi, Existence of global solutions and global attractor for the third order Lugiato–Lefever equation on T. Annales de l’Institut Henri Poincaré (C) Non Linear Analysis. Elsevier Masson, 2016. Nakanishi, Kenji, Hideo Takaoka, and Yoshio Tsutsumi, Local well-posedness in low regularity of the mKdV equation with periodic boundary condition. Disc. Cont. Dyn. Systems 28 (2010): 1635-1654. Takaoka, Hideo, and Yoshio Tsutsumi, Well-posedness of the Cauchy problem for the modified KdV equation with periodic boundary condition. International Mathematics Research Notices 2004, no. 56 (2004): 3009-3040. Temam, Roger, Infinite-dimensional dynamical systems in mechanics and physics. Vol. 68. Springer Science and Business Media, 2012. Tsugawa, Kotaro, Existence of the global attractor for weakly damped, forced KdV equation on Sobolev spaces of negative index. Commun. Pure Appl. Anal. 3 (2004), no. 2, 301–318. Wang, Ming, Dongfang Li, Chengjian Zhang, and Yanbin Tang, Long time behavior of solutions of gKdV equations. Journal of Mathematical Analysis and Applications 390, no. 1 (2012): 136-150. 
Yang, Xingyu, Global attractor for the weakly damped forced KdV equation in Sobolev spaces of low regularity. NoDEA Nonlinear Differential Equations Appl. 18 (2011), no. 3, 273–285.
--- abstract: 'A central concern in an interactive intelligent system is optimization of its actions, to be maximally helpful to its human user. In recommender systems for instance, the action is to choose what to recommend, and the optimization task is to recommend items the user prefers. The optimization is done based on the user’s earlier feedback (e.g. “likes” and “dislikes”), and the algorithms assume the feedback to be faithful. That is, when the user clicks “like,” they actually prefer the item. We argue that this fundamental assumption can be extensively violated by human users, who are not passive feedback sources. Instead, they are in control, actively steering the system towards their goal. To verify this hypothesis, that humans steer and are able to improve performance by steering, we designed a function optimization task where a human and an optimization algorithm collaborate to find the maximum of a 1-dimensional function. At each iteration, the optimization algorithm queries the user for the value of a hidden function $f$ at a point $x$, and the user, who sees the hidden function, provides an answer about $f(x)$. Our study on 21 participants shows that users who understand how the optimization works, strategically provide biased answers (answers not equal to $f(x)$), which results in the algorithm finding the optimum significantly faster. Our work highlights that next-generation intelligent systems will need user models capable of helping users who steer systems to pursue their goals.' author: - 'Fabio Colella$^{\dagger1}$' - 'Pedram Daee$^{\dagger1}$' - 'Jussi Jokinen$^{2}$' - 'Antti Oulasvirta$^{2}$' - 'Samuel Kaski$^{13}$' title: 'Human Strategic Steering Improves Performance of Interactive Optimization[^1]' --- Introduction ============ ![image](teaser_figure_paper_w_arrows_pedram.pdf){width="\textwidth"} Interactive intelligent systems with humans in the loop are becoming increasingly widespread. 
These can range from a personalized recommender system asking about the user’s preference for a recommendation [@elahi2016survey; @portugal2018use; @ruotsalo2015interactive], or a user guiding the results of a machine learning system [@steeringclassification; @sacha2017you; @amershi2014power; @daee2017knowledge; @afrabandpey2019human], to a precision medicine system asking an expert’s opinion about model characteristics or new data [@Holzinger2016; @Iiris_precision_medicine]. In all these cases, the intelligent system assumes that the user inputs are faithful responses to the requested query. In other words, users are considered as passive oracles. However, results show that even when instructed to be an oracle data provider, users still try to not only provide input on current and past actions, but also to provide guidance on future predictions [@amershi2014power]. Furthermore, studies suggest that users attribute mental models to the system they are interacting with [@Williams_chi] and are able to predict the behaviour of intelligent systems [@tango]. For these reasons we argue that intelligent systems should consider users as active planners. A real-life example can be found in interaction with a movie recommendation system. Users can provide a liking or disliking feedback for each movie, which the system then uses to recommend new content. Now, clever users can try to answer in a steering way (e.g., expressing “like” for a movie they are not interested in) to reach their personal goal of receiving some specific recommendations. For example, a user may not appreciate “The Hobbit: An Unexpected Journey” but may express liking with the intent of receiving more recommendations of fantasy movies similar to Tolkien’s. We hence use *steering* to refer to user feedback which is different from the factually true value (i.e., in this case the real grade of appreciation of the movie), and analyse how steering behaviour affects the performance of an intelligent system. 
We designed a study to investigate the behaviour of users when interacting with an interactive intelligent system. In particular, we consider the fundamental task of interactive optimization in the form of finding the maximum of a 1-dimensional function. Similar settings have been considered in previous works studying how humans perform optimization and function learning [@borji2013bayesian; @griffiths2009modeling]. However, we analyse the task from a different angle. In particular, [@borji2013bayesian] studied the strategies people use to find the maximum of a hidden function. The users had to sequentially decide on the next point $x$ to be queried about the hidden function (observing the corresponding $f(x)$) with the goal of finding the maximum with as few queries as possible. The results indicated that users’ search strategy is similar to a Bayesian optimization algorithm. Our work is fundamentally different from these, in that in our study a Bayesian optimization algorithm queries the $x$ values, and the user, who sees the hidden function, provides $f(x)$. In other words, the idea is that an AI running an optimization algorithm collaborates with the user. We hypothesize that users who learn a model of the optimization algorithm are able to steer it towards their goal. To this end, the next section discusses the optimization problem. The user study setting[^2] is introduced in Section \[study\]. The paper concludes with a discussion about the results and implications. Bayesian Optimization ===================== Consider the problem of finding the argument that maximizes an objective function in a design space $\mathcal{X}$, i.e., $x^* = \operatorname*{arg\,max}_{x \in \mathcal{X}} f(x)$. 
Now consider that the function of interest $f(x)$ is unknown and expensive to evaluate, and the only way to gain information about it is to query it sequentially, i.e., ask for the function value at a point of interest $x_q \in \mathcal{X}$ and observe the corresponding function value $f(x_q)$ (or, in general, a noisy version of it). The natural goal in this black-box optimization problem is to find $x^*$ with the minimum number of queries. This problem has been extensively studied in the Bayesian optimization (BO) literature [@BO_review; @frazier2018tutorial] and has been addressed in many applications such as automatic machine learning (searching a space of machine learning models) [@hoffman2014correlation], design of new materials [@frazier2016bayesian; @BOSS2019], reinforcement learning (finding the best action of an agent) [@brochu2010tutorial], and personalized search systems [@ruotsalo2015interactive; @Daee2016]. As the target function is hidden, BO builds a surrogate model from the observations of the function. The surrogate is usually a Gaussian process (GP) regression model [@williams2006gaussian], enabling direct quantification of the uncertainty about the target function. Using the surrogate, the optimizer needs to select the next point to query on the target function. As the query budget is limited, the query algorithm needs to balance asking queries that would provide new information about the hidden function (for example, in areas that have not been explored) against exploiting the current best guess about the position of the maximum. This is known as the exploration-exploitation trade-off. Upper confidence bound (UCB) [@UCBoriginal] is a well-established query algorithm that in each iteration queries the point maximizing the sum of the mean and variance of the GP surrogate. Previous works have indicated similarity between human search behaviour and the UCB algorithm [@borji2013bayesian; @wu2018generalization]. 
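The surrogate-plus-UCB loop just described can be sketched in pure Python. This is a minimal illustration, not the study's implementation: the squared-exponential kernel, its length-scale, the noise level, and the exploration weight `beta` are all assumed values, and the acquisition is written here as mean plus a multiple of the standard deviation, a common variant of the mean-plus-variance rule quoted above.

```python
import math

def rbf(a, b, ell=0.25):
    # Squared-exponential kernel; the length-scale ell is an illustrative choice.
    return math.exp(-(a - b) ** 2 / (2 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small system K a = y.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for c in range(n - 1, -1, -1):
        x[c] = (M[c][n] - sum(M[c][k] * x[k] for k in range(c + 1, n))) / M[c][c]
    return x

def gp_posterior(xs, ys, x_star, noise=1e-6):
    # Posterior mean and variance of a zero-mean GP at x_star.
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    k_star = [rbf(x, x_star) for x in xs]
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)
    var = rbf(x_star, x_star) - sum(k * w for k, w in zip(k_star, v))
    return mean, max(var, 0.0)

def ucb_query(xs, ys, grid, beta=2.0):
    # Next query: maximize mean + beta * std over a discrete candidate grid.
    def score(x):
        m, v = gp_posterior(xs, ys, x)
        return m + beta * math.sqrt(v)
    return max(grid, key=score)
```

On two observations, the posterior mean reproduces the data, the variance collapses at observed points and returns to the prior far away, and the UCB query moves to an unexplored location.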
In our study, the optimization algorithm is a GP-based Bayesian optimization model using UCB for querying new points. We allow the users, who can see the target function, to provide the answers to the queries. User Study {#study} ========== ![image](final_UI_interface.png){width="\textwidth"} Method ------ #### Participants We recruited $N = 21$ participants for a user study comparing user performance to standard Bayesian optimization in interactive optimization. The participants were aged 25–35, and 12 of them were women. Everyone was awarded one movie ticket upon completion of the study. Eighteen participants had a background in computer science, technology, or engineering, and 16 had a master’s degree or higher. The participants self-reported their familiarity with GPs using a rating scale from 1 to 5. The mean rating was 3.29 ($SD = 1.42$), with 9 participants reporting good knowledge and 12 poor knowledge. #### Materials and procedure The experiment consisted of two sessions, with 10 trials of 10 iterations in the first one and 20 trials of 5 iterations in the second (in addition, both sessions had five practice trials in the beginning). The session with 10 trials was always conducted first, as it had more iterations and was hence easier, encouraging learning. Each trial was an independent optimization task, in which the participant was presented with successive query points on a randomly generated function. The goal of the participant was to collaborate with the system by providing information about the generated function $f$. This was accomplished by the system selecting a query point $x$ and the user selecting a value on the $y$-axis related to the value of $f(x)$. The mean and confidence bounds of the surrogate function were shown to the user to help the user build a mental model of the system. Figure \[fig:user\_interface\] shows a screenshot of the user interface. 
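The design above contrasts a user who answers as a faithful oracle with one who steers. As a toy illustration only, not the study's user model, a steered response could be written as the true value plus a bump near a personal goal location; the Gaussian bump shape, its width, the `goal_x` parameter, and the exaggeration strength are all hypothetical.

```python
import math

def user_response(x, f, goal_x=None, exaggeration=0.0):
    """Value the user reports for a query at x.

    With exaggeration == 0 the user is a faithful oracle and reports f(x).
    A steering user inflates values near a personal goal location goal_x,
    nudging the optimizer toward that region (illustrative assumption).
    """
    y = f(x)
    if goal_x is not None and exaggeration > 0.0:
        y += exaggeration * math.exp(-(x - goal_x) ** 2 / 0.02)
    return y
```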
A questionnaire about background information, including an item about knowledge of Bayesian optimization, was filled in after the optimization tasks, thus avoiding biasing the participants. ![ Humans performed significantly better than the Bayesian Optimization (BO) baseline. Average score over iterations for all participants (red) and for the baseline (blue) for the sessions with 10 and 5 iterations. The lighter-colored bands around the average lines represent the standard error of the mean. []{data-label="fig:average_scores"}](final_performance_users_baseline.pdf){width="0.6\columnwidth"} #### Data and analysis Each response by the participant to the system’s query was considered one iteration. The response was converted into a score reflecting how close the optimization process was to the optimum. The score was the highest function value found so far, normalized between 0 and 100, and it was reset between trials. As a baseline, we used standard Bayesian optimization on the same functions that the users optimized, using the same randomized initial query point. In order to avoid fluctuations due to random effects in the optimization, we averaged the score over 25 runs of the same optimization for each trial. We tested the following hypotheses, investigating the impact of a human collaborator on the score. **H1.** Human participants achieve higher scores doing optimization than the baseline Bayesian optimizer. **H2.** Human participants achieve higher scores faster than the baseline. **H3.** Participants with knowledge of Bayesian optimization perform better than participants without this knowledge. We tested these hypotheses using mixed regression models with *score* as the dependent variable and *agent* (i.e., human or baseline) as an independent variable. H1 was tested on overall performance, aggregated over iterations in each trial. H2 was tested by adding to the model an interaction effect between *agent* and *iteration*. 
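The score computation can be sketched as follows. Normalizing the best-so-far value by the trial function's own minimum and maximum is an assumption about the exact normalization, which the text does not spell out.

```python
def trial_scores(responses, f_min, f_max):
    """Score after each iteration: best function value seen so far,
    rescaled to 0-100 within the trial's own range; reset per trial."""
    best, scores = float("-inf"), []
    for fx in responses:
        best = max(best, fx)
        scores.append(100.0 * (best - f_min) / (f_max - f_min))
    return scores
```

By construction the score is non-decreasing within a trial, which matches the monotone curves in Figure \[fig:average\_scores\].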
For H3, we used only the subset of the data that contained human trials, and compared the performance of knowledgeable participants (responses to the relevant questionnaire item of 4–5) and naive participants (responses of 3 and below). Finally, for all tests, we added the participant (*user ID*) as a random intercept to the model. We report the results using the `lme4`[@lme4] package in R, with Satterthwaite approximations for degrees of freedom. Results ------- Compared to the baseline Bayesian optimization, human participants generally performed better, as seen in Figure \[fig:average\_scores\]. As a trial progresses and iterations increase, humans achieve higher scores than the baseline. The overall performance was statistically significantly higher for humans (H1), $t(599) = 4.1, p < 0.001$. Further, human scores increased faster than those of the baseline (H2), $t(596) = 2.2, p = 0.031$. Figure \[fig:improvement\_per\_user\] illustrates individual score improvement compared to the baseline, by iteration, aggregated over all trials. Finally, with the human-only subset of the data, we tested whether users with prior knowledge about Bayesian optimization obtain higher scores than naive users (H3). Here, the main effect was not statistically significant, $t(51) = -0.4, ns.$, but we did observe a statistically significant interaction effect between iteration and experience, $t(289) = 2.0, p = 0.042$. This could mean that although both experienced and inexperienced users achieve similar scores in the end, the experienced users can do so with fewer iterations. This result is shown in Figure \[fig:gp\_steer\], on the left. We defined the *steering amplitude* as a value from 0 (no steering at all) to 100 (greatest steering, limited by the user interface height). The average steering amplitude was 19.7 ($SD = 26.13$, $\text{skewness}=1.75$), indicating that most responses deviated only moderately from the actual function values, with a smaller fraction of strongly steered answers. 
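A per-response steering amplitude on the 0 to 100 scale defined above could be computed as follows; the exact formula used in the study is not given in the text, so scaling the deviation by the interface height is an assumption.

```python
def steering_amplitude(reported_y, true_y, ui_height):
    """Deviation of the reported value from the true f(x), scaled to 0-100
    and capped at 100 (the cap reflects the text's note that steering is
    limited by the user-interface height)."""
    return min(100.0, 100.0 * abs(reported_y - true_y) / ui_height)
```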
The performance of the users, grouped by different levels of steering, is shown in Figure \[fig:gp\_steer\], on the right. ![ Most users performed better than the baseline. Each line shows the average difference in score between one user and the baseline. The difference is computed on the same trial. A value of 0 means no difference, while values greater than 0 mean that the user performed better than the baseline. We looked at the post-questionnaire responses of the two users who performed worse than the baseline (blue and green lines): one reported not understanding the task, while the other mentioned providing random responses in some experimental sessions to explore the system behaviour. []{data-label="fig:improvement_per_user"}](final_user_difference_from_baseline_split_5_10.pdf){width="0.6\columnwidth"} Discussion and Conclusion ========================= We designed a 1-dimensional function optimization setting to study how humans interact with an interactive intelligent system, here a Bayesian optimization algorithm. Our hypothesis was that while interacting with an intelligent system, humans do not passively provide inputs to the requested query but rather design their inputs to strategically steer the system toward their own goals. Our results indicate that when the goals of the human and the algorithm are the same (here, finding the maximum of the function faster), human steering behaviour can significantly improve the results. This underlines the importance of developing systems that can understand the mental model of their users [@peltola2019teaching; @mert2019interactive]. In fact, this strategic behaviour can also leak information about the user’s goal, which the system could capture to further improve the optimization [@brooks2019building]. Our study was designed with the aim of making the intelligent system’s behaviour transparent to the user. 
For this purpose, we visualized the history of interactions and the algorithm’s state (the GP mean and its confidence bounds) to the user. In a small pilot we observed that without these elements the users’ steering ability was much lower. This suggests that intelligent systems need to be transparent and learnable for users to take advantage of such steering behaviour. Systems could also infer the user’s mental model to render this understanding easier [@explanation_soliloquy]. In conclusion, users strategically steer intelligent systems to control them, and can achieve performance improvements by doing so. This steering behaviour could be exploited by the next generation of intelligent systems to further improve performance, but this requires user models capable of anticipating the steering behaviour. Meanwhile, this paper’s work could be extended to other application cases such as personalized recommender systems, where the underlying function to be maximized is the user’s preference over items. ![ Left: users with different understanding of Gaussian Processes have different levels of performance. Right: the amplitude of steering influences the average performance. The figure suggests that moderate steering results in improved performance. Similar results were achieved in the session with 5 iterations. []{data-label="fig:gp_steer"}](final_gp_and_steering.pdf){width="0.6\columnwidth"} ### ACKNOWLEDGMENTS {#acknowledgments .unnumbered} We thank Mustafa Mert Çelikok, Tomi Peltola, Antti Keurulainen, Petrus Mikkola, and Kashyap Todi for helpful discussions. This work was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence, FCAI; grants 310947, 319264, 292334, 313195; project BAD: grant 318559). AO was additionally supported by HumaneAI (761758) and the European Research Council StG project COMPUTED. We acknowledge the computational resources provided by the Aalto Science-IT Project. 
[afrabandpey2019human]{} <https://doi.org/10.24963/ijcai.2019/271>
[amershi2014power]{}
[lme4]{} <https://doi.org/10.18637/jss.v067.i01>
[borji2013bayesian]{} <http://papers.nips.cc/paper/4952-bayesian-optimization-explains-human-active-search.pdf>
[brochu2010tutorial]{}
[brooks2019building]{}
[mert2019interactive]{}
[explanation\_soliloquy]{}
[tango]{}
[daee2017knowledge]{}
[Daee2016]{} <https://doi.org/10.1145/2856767.2856803>
[elahi2016survey]{}
[frazier2018tutorial]{}
[frazier2016bayesian]{} <https://doi.org/10.1007/978-3-319-23871-5_3>
[griffiths2009modeling]{} <http://papers.nips.cc/paper/3529-modeling-human-function-learning-with-gaussian-processes.pdf>
[hoffman2014correlation]{} <http://proceedings.mlr.press/v33/hoffman14.html>
[Holzinger2016]{} <https://doi.org/10.1007/s40708-016-0042-6>
[steeringclassification]{} <https://doi.org/10.1145/1753326.1753529>
[peltola2019teaching]{} <http://papers.nips.cc/paper/9299-machine-teaching-of-active-sequential-learners.pdf>
[portugal2018use]{}
[williams2006gaussian]{}
[ruotsalo2015interactive]{}
[sacha2017you]{}
[BO\_review]{}
[UCBoriginal]{}
[Iiris\_precision\_medicine]{} <https://doi.org/10.1093/bioinformatics/bty257>
[BOSS2019]{}
[Williams\_chi]{} <https://doi.org/10.1145/3290605.3300677>
[wu2018generalization]{}

[^1]: This is the pre-print version. The paper is published in the proceedings of *UMAP 2020* conference. Definitive version DOI: <https://doi.org/10.1145/3340631.3394883>. 
[^2]: Source code is available at <https://github.com/fcole90/interactive_bayesian_optimization>.
--- abstract: 'We present a higher codimension generalization of the DGP scenario which, unlike previous attempts, is free of ghost instabilities. The 4D propagator is made regular by embedding our visible 3-brane within a 4-brane, each with their own induced gravity terms, in a flat 6D bulk. The model is ghost-free if the tension on the 3-brane is larger than a certain critical value, while the induced metric remains flat. The gravitational force law “cascades” from a 6D behavior at the largest distances followed by a 5D and finally a 4D regime at the shortest scales.' address: - '$^a$Perimeter Institute for Theoretical Physics, 31 Caroline St. N., Waterloo, ON, N2L 2Y5, Canada' - '$^b$Dept. of Physics & Astronomy, McMaster University, Hamilton ON, L8S 4M1, Canada' - '$^c$CERN, Theory Division, CH-1211 Geneva 23, Switzerland' - '$^d$Center for Cosmology and Particle Physics, New York University, New York, NY 10003 USA' - '$^e$NORDITA, Roslagstullsbacken 23, 106 91 Stockholm, Sweden' - '$^f$Institut de Théorie des Phénomènes Physiques, EPFL, CH-1015, Lausanne, Switzerland' author: - 'Claudia de Rham$^{a,b}$, Gia Dvali$^{c,d}$, Stefan Hofmann$^{a,e}$, Justin Khoury$^a$, Oriol Pujolàs$^d$, Michele Redi$^{d,f}$ and Andrew J. Tolley$^a$' title: Cascading DGP --- The DGP model [@DGP] provides a simple mechanism to modify gravity at large distances by adding a localized graviton kinetic term on a codimension 1 brane in a flat 5D bulk. The natural generalization to higher codimension, however, is not so straightforward. On one hand these models require some regularization due to the divergent behavior of the Green’s functions in higher codimension. More seriously, most constructions seem to be plagued by ghost instabilities [@dubov; @gregshif] (see [@Kaloper] for related work). The purpose of this letter is to show that both pathologies can be resolved by embedding a succession of higher-codimension DGP branes into each other. 
Scalar ====== We shall focus on the codimension 2 case. As a warm-up exercise, we consider a real scalar field with action, $$S=\frac 1 2 \int \phi \left[M_6^4 \square_6 + M_5^3 \square_5 \delta(y)+ M_4^2 \square_4 \delta(y)\delta(z)\right]\phi \nonumber$$ describing a codimension 2 kinetic term embedded into a codimension 1 one in 6D. We will impose throughout the paper a $Z_2 \times Z_2$ orbifold projection identifying $y \to -y$ and $z\to -z$. The model possesses two mass scales, $$m_5=\frac {M_5^3} {M_4^2} \qquad {\rm and } \qquad m_6=\frac { M_6^4} {M_5^3}~.$$ In the absence of the 4D kinetic term, the propagator on the codimension 1 brane (4-brane) is the DGP propagator [@DGP], $$G^0(y-y')=\frac 1 {M_5^3} \int \frac {dq} {2\pi} \frac {e^{i q (y-y')}}{p^2+q^2+2 m_6 \sqrt{p^2+q^2}}~,$$ where $p$ is the 4D momentum and $y$ the coordinate orthogonal to the codimension 2 brane (3-brane). To find the exact 5D propagator we can treat the 4D kinetic term (located at $y=0$) as a perturbation and then sum the series. One finds, $$\begin{aligned} &G^{exact}= G^0(y-y') - M_4^2 G^0(y) p^2 G^0(-y') \cr &~~~~~~~~~+ M_4^4 G^0(y) p^4 G^0(0) G^0(-y')+\dots \cr &=G^0(y-y')- \frac {M_4^2 p^2} {1 + M_4^2 p^2 G^0(0)} G^0(y) G^0(-y')~.\end{aligned}$$ In particular, the 4D brane-to-brane propagator is determined in terms of the integral of the higher dimensional Green’s function, $$G_4^{exact}= \frac {G^0(0)} {M_4^2 G^0(0) p^2+1}~.$$ For the case at hand, $$\begin{aligned} &&G^0(0)=\frac 1 {M_5^3}\int_{-\infty}^{\infty} \frac{d q}{2 \pi} \frac{1}{p^2+q^2+2 m_6 \sqrt{p^2+q^2}}\nonumber\\ &=& \frac{2}{\pi M_5^3}\frac{1}{\sqrt{4 m_6^2-p^2}} \tanh^{-1} \,\left(\sqrt{\frac{2 m_6-p}{2 m_6+p}}\,\right)~.\end{aligned}$$ For $p>2m_6$, the analytic continuation of this expression is understood. Remarkably, the 5D kinetic term makes the 4D propagator finite, thereby regularizing the logarithmic divergence characteristic of pure codimension 2 branes. 
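The resummation leading to $G_4^{exact}=G^0(0)/\big(M_4^2 G^0(0) p^2+1\big)$ is a geometric series in the 4D-kinetic-term insertions, and both it and the closed form for $G^0(0)$ can be checked numerically. The parameter values below are arbitrary illustrative choices in units where $M_5=1$, and $G^0(0)$ is evaluated by brute-force quadrature.

```python
import math

# Illustrative parameter choices (arbitrary units), with p < 2*m6.
M5, m6, M4, p = 1.0, 1.0, 0.5, 0.8

def integrand(q):
    s = math.sqrt(p * p + q * q)
    return 1.0 / (p * p + q * q + 2.0 * m6 * s)

def g0_zero(cutoff=2000.0, n=200000):
    # Trapezoidal quadrature of G0(0) = (1/M5^3) * Int dq/(2 pi) integrand(q),
    # using symmetry in q; the neglected tail beyond the cutoff is O(1/cutoff).
    h = cutoff / n
    total = 0.5 * (integrand(0.0) + integrand(cutoff))
    total += sum(integrand(i * h) for i in range(1, n))
    return 2.0 * h * total / (2.0 * math.pi * M5 ** 3)

G0 = g0_zero()

# Closed form quoted in the text (valid for p < 2*m6).
closed = (2.0 / (math.pi * M5 ** 3)) / math.sqrt(4 * m6 ** 2 - p ** 2) \
         * math.atanh(math.sqrt((2 * m6 - p) / (2 * m6 + p)))

# Partial sums of the insertion series vs the resummed propagator.
x = M4 ** 2 * p ** 2 * G0            # expansion parameter, |x| < 1 here
series = sum(G0 * (-x) ** k for k in range(60))
resummed = G0 / (1.0 + x)
```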
In particular, when $M_5$ goes to zero one has $G_4^{exact} \to M_6^{-4}\log(p/m_6)$, reproducing the codimension 2 Green’s function with physical cutoff given by $m_6$. The corresponding 4D Newtonian potential scales as $1/r^3$ at the largest distances, showing that the theory becomes six dimensional, and reduces to the usual $1/r$ on the shortest scales. Its behavior at intermediate distances, however, depends on $m_{5,6}$. If $m_5>m_6$ there is an intermediate 5D regime; otherwise the potential directly turns 6D at a distance of order $(m_5 m_6)^{-1/2}\log(m_5/m_6)$.\ Gravity ======= Let us now turn to gravity. In analogy with the scalar we consider the action, $$S={M_6^4\over2}\int \sqrt{-g_6} R_6+{M_5^3\over2}\int \sqrt{-g_5} R_5 + {M_4^2\over2}\int \sqrt{-g_4} R_4\nonumber$$ where each term represents the intrinsic curvature. This guarantees that the model is fully 6D general covariant. To find the propagator it is convenient to follow the same procedure as for the scalar and sum the diagrams with insertion of the lower dimensional kinetic term, i.e. the Einstein’s tensor ${\cal E}$. For our purpose we only compute the propagator on the 3-brane. Given the higher dimensional propagator, the brane-to-brane propagator due to the insertion of a codimension 1 term is in compact form, $$\begin{aligned} G_{\mu\nu\alpha\beta}^{exact}&=& G^0 \sum_{n=0}^{\infty} (M_4^2 {\cal E} G^0)^n= G^0 [1-M_4^2 {\cal E} G^0]^{-1} \nonumber \\ &=& G^0_{\mu\nu\gamma\delta} H^{\gamma\delta}_{\alpha\beta}~, \label{fullgravity}\end{aligned}$$ where $G^0_{\mu\nu\gamma\delta}$ is the 4D part of the higher dimensional Green’s function evaluated at zero. The tensor $H^{\mu\nu}_{\alpha\beta}$ satisfies by definition, $$[1-M_4^2 {\cal E} G^0]^{\mu\nu}_{\gamma\delta} H^{\gamma\delta}_{\alpha\beta}=\frac 1 2 \left( \delta_\mu^\alpha\delta_\nu^\beta+\delta_\mu^\beta\delta_\nu^\alpha\right). 
\label{inverse}$$ To find $H$ one can write the most general Lorentz covariant structure compatible with the symmetries, $$\begin{aligned} H^{\gamma\delta}_{\alpha\beta} &=& a (\delta^\gamma_\alpha \delta^\delta_\beta+ \delta^\delta_\alpha \delta^\gamma_\beta)+ b \eta^{\gamma\delta} \eta_{\alpha\beta} \nonumber \\ &+& c ( p^\gamma p_\alpha \delta^\delta_\beta+p^\delta p_\alpha \delta^\gamma_\beta+ p^\gamma p_\beta \delta^\delta_\alpha+p^\delta p_\beta \delta^\gamma_\alpha)\cr &+&d\, p^\gamma p^\delta \eta_{\alpha\beta}+e\, \eta^{\gamma\delta} p_\alpha p_\beta \nonumber + f\, p^\gamma p^\delta p_\alpha p_\beta~.\end{aligned}$$ Requiring that this satisfies Eq. (\[inverse\]) leads to a system of linear equations whose solution determines the coefficients $a, b, c, d, e, f$. Using this information one then reconstructs the exact propagator from Eq. (\[fullgravity\]). It is straightforward to apply this technique to the cascading DGP. Starting from 6D, the propagator on the 4-brane is [@gregorygia], $$\begin{aligned} G_{MNPQ}&=&\frac 1 {M_5^3}\frac 1 {p_5^2+2 m_6 p_5}\times \nonumber \\ &\times& \left(\frac 1 2 \tilde{\eta}_{MP}\tilde{\eta}_{NQ}+\frac 1 2 \tilde{\eta}_{MQ}\tilde\eta_{NP}-\frac 1 4 \tilde{\eta}_{MN}\tilde{\eta}_{PQ}\right) \,,\nonumber \\ \tilde{\eta}_{MN}&=&\eta_{MN}+ \frac {p_M p_N} {2 m_6 p_5} ~, \label{dgppropagator}\end{aligned}$$ where $M, N \dots$ are 5D indices and $p_5^2=p_M p^M$. $G_{\mu\nu\alpha\beta}^0$ is obtained by integrating the 5D propagator with respect to the extra-momentum. To compute the propagator on the 3-brane, then, we determine the coefficients $a, b, c, d, e, f$ through the system of linear equations (\[inverse\]). 
One finds, $$\begin{aligned} a &=&- \frac 1 {2 (I_1 p^2+ 1) } \nonumber \\ b&=& \frac {I_1 p^2}{(I_1 p^2+1)(I_1 p^2-2)}\nonumber \\ c&=& -\frac {I_1}{2 (I_1 p^2+ 1)}\nonumber \\ d&=& -\frac {I_1}{(I_1 p^2+1)(I_1 p^2-2)}\nonumber \\ e &=& \frac {1}{3 (I_1 p^2+1)}- \frac {4 I_1 + 3 I_2 p^2}{3 (I_1 p^2-2)}\nonumber \\ f &=& \frac {I_2+2 I_1^2+ I_1 I_2 p^2}{(I_1 p^2+1)(I_1 p^2-2)}\end{aligned}$$ where $$\begin{aligned} I_1&=&\frac 1 {m_5} \int \frac {dq} {2\pi} \frac 1 {p^2+q^2+2 m_6 \sqrt{p^2+q^2}}\nonumber \\ I_2&=&\frac 1 {2 m_6m_5} \int \frac {dq} {2\pi} \frac 1 {\sqrt{p^2+q^2}(p^2+q^2+2 m_6 \sqrt{p^2+q^2})} \nonumber\end{aligned}$$ All these coefficients are finite showing that the regularization is also effective for the spin 2 case. Having determined the coefficients of the tensor $H$ the full propagator is given by Eq. (\[fullgravity\]). To linear order the amplitude between two conserved sources on the brane is rather simple, $$-\frac {1}{M_4^2} \frac {I_1}{I_1p^2+1}\left(T_{\mu\nu}T'^{\mu\nu}-\frac {I_1 p^2-1}{2 I_1 p^2-4} T T'\right)~, \label{amplitude2}$$ and only depends on the first integral $I_1$. The coefficient in front of the amplitude is exactly as for the scalar however there is a non-trivial tensor structure. One worrisome feature of this amplitude is that the relative coefficient of $T_{\mu\nu}T'^{\mu\nu}$ and $T T'$ interpolates between $-1/4$ in the IR and $-1/2$ in the UV. The $-1/4$ in the IR gives the correct tensor structure of gravity in 6D and is unavoidable because at large distances the physics is dominated by 6D Einstein term. From the 4D point of view this can be understood as the exchange of massive gravitons and an extra-scalar. The $-1/2$ in the UV on the other hand signals the presence of a ghost. This agrees with previous results [@gregshif; @dubov] which used a different regularization. From the 4D point of view the theory decomposes into massive spin 2 fields and scalars. 
Since the massive spin 2 gives an amplitude with relative coefficient $-1/3$, the extra repulsion must be provided by a scalar with a wrong-sign kinetic term. Separating from Eq. (\[amplitude2\]) the massive spin two contribution, we identify the scalar propagator as $$G_{ghost}= \frac{1}{6M_4^2} \frac {I_1} {I_1 p^2-2 }~.$$ This propagator has a pole with negative residue; therefore, it contains a localized (tachyonic) ghost mode in addition to a continuum of healthy modes.\ Ghost free theory ================= To clarify the origin of the ghost it is illuminating to consider the decoupling limit studied in [@lpr]. This will allow us to show how a healthy theory can be obtained by simply introducing tension on the lower dimensional brane while retaining the intrinsic geometry flat. In the 6D case the decoupling limit [@lpr] corresponds to taking $M_5,M_6 \to \infty$ with $\Lambda_s\equiv (m_6^2 M_5^{3/2})^{2/7}=(M_6^{16}/M_5^9 )^{1/7}$ finite. In this limit, the physics on the 4-brane admits a local 5D description, where only the non-linearities in the helicity 0 part of the metric are kept, and are suppressed by the scale $\Lambda_s$. The effective 5D Lagrangian is given by $$\begin{aligned} \label{5Daction} L_5 &=& {M_5^3\over4} \, h^{MN} ({{\cal E}}h)_{MN} - 3 M_5^3 (\partial \pi)^2 {\left}( 1 + {9\over 32m_6^2} \,\Box_5\pi {\right}) \cr &+&\delta(z){\left}( {M_4^2\over4} \, {\widetilde}h^{\mu\nu} ({{\cal E}}{\widetilde}h)_{\mu\nu} + {\widetilde}h^{\mu\nu} T_{\mu\nu} {\right}) ~,\end{aligned}$$ where $({{\cal E}}h)_{MN}=\Box h_{MN}+\dots$ is the linearized Einstein tensor, $M, N \dots$ are 5D indices and $\mu,\nu\dots$ are four dimensional. We have rescaled $\pi$ and $h_{\mu\nu}$ so that they are dimensionless, and the physical 5D metric is $$\label{physical} \widetilde h_{MN}=h_{MN}+\pi\; \eta_{MN}~.$$ The first line of is the 5D version of the ‘$\pi$ Lagrangian’ introduced in [@lpr] for the DGP model. 
In addition to this, we have the localized curvature term on the 3-brane, which depends on 4D physical metric ${\widetilde}h_{\mu\nu}$. This introduces a kinetic mixing between $\pi$ and the 5D metric. We now take a further step and compute the boundary effective action valid on the 3-brane. At the quadratic order by integrating out the fifth dimension the 5D kinetic term of $\pi$ produces a 4D “mass term” $\sim M_5^3 \sqrt{-\Box_4}$ while the Einstein tensor gives rise to Pauli-Fierz (PF) structure for $h_{\mu\nu}$ on the boundary[^1], $$\begin{aligned} \label{boundaryEffAct} L_4&=& - {M_5^3\over2}\; h^{\mu\nu}\sqrt{-\Box_4}{\left}( h_{\mu\nu} - h \eta_{\mu\nu} {\right}) - 6 M_5^3\, \pi \sqrt{-\Box_4} \,\pi \cr &+& {M_4^2 \over4}\; {\widetilde}h^{\mu\nu}({{\cal E}}\widetilde h)_{\mu\nu} + {\widetilde}h^{\mu\nu} \, T_{\mu\nu} ~,\end{aligned}$$ where $h_{\mu\nu}$ and $\pi$ now denote the 5D fields evaluated at the 3-brane location. In terms of the physical metric, takes the form, $$\begin{aligned} \label{boundaryEffAct2} L_4&= & - {M_5^3\over2}\; \widetilde h^{\mu\nu}\sqrt{-\Box_4}{\left}( \widetilde h_{\mu\nu} - \widetilde h \eta_{\mu\nu} {\right}) - 3 M_5^3\, \pi \sqrt{-\Box_4} \, \widetilde h \cr & +&{M_4^2\over4} \; \widetilde h^{\mu\nu}({{\cal E}}\widetilde h)_{\mu\nu} + \widetilde h^{\mu\nu} \, T_{\mu\nu} ~.\end{aligned}$$ Note that the kinetic term for $\pi$ is completely absorbed by that of ${\widetilde}h_{\mu\nu}$ and only a cross term between $\pi$ and ${\widetilde}h_\mu^\mu$ remains. From this it is straightforward to show the presence of a ghost. The scalar longitudinal component of $h_{\mu\nu}$ acquires a positive kinetic term by mixing with the graviton [@nima; @lpr]. 
By taking $$\widetilde{h}_{\mu\nu}=\widehat{h}_{\mu\nu}+\phi \, \eta_{\mu\nu}+ \frac {\partial_{\mu}\partial_{\nu} } {m_5 \sqrt{-\Box_4}}\,\phi~,$$ one finds that there are in fact two 4D scalar modes whose kinetic matrix in the UV is $${3\over2} M_4^2 \; \Box_4 \, \left(\begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) \label{matrix}$$ which has obviously a negative eigenvalue corresponding to a ghost. Having understood the origin of the ghost we are now ready to show how to cure it. To achieve this we clearly need to introduce a positive localized kinetic term for $\pi$. This can arise from extrinsic curvature contributions. The simplest and most natural choice is to put a tension $\Lambda$ on the 3-brane. This produces extrinsic curvature while leaving the metric on the brane flat since the tension only creates a deficit angle. The solution to the 5D equations following from for a 3-brane with tension $\Lambda$ is [@dws] $$\label{background} \pi^{(0)} = {\Lambda\over6M_5^3} \, |z|\;,\quad\quad h^{(0)}_{\mu\nu}= -{\Lambda\over6M_5^3}\, |z| \, \eta_{\mu\nu}~.$$ This is an exact solution including the non-linear terms for $\pi$ – they vanish identically for this profile. The background corresponds to a locally flat 6D bulk with deficit angle $\Lambda/M_6^4$ and with flat 4D sections. The crucial point is that on this background the $\pi$ lagrangian acquires contributions from the non-linear terms. 
These can be found considering the perturbations, $$\begin{aligned} \label{fluct} \pi&=\pi^{(0)}(z)+\delta\pi(z,x^\mu)\,, \cr h_{\mu\nu}&=h^{(0)}_{\mu\nu}(z)+ \delta h_{\mu\nu}(z,x^\mu)\,, \cr T_{\mu\nu}&=-\Lambda \eta_{\mu\nu} + \delta T_{\mu\nu}~.\end{aligned}$$ Plugging in and dropping $\delta$, one obtains at quadratic order (up to a total derivative), $$\begin{aligned} \delta L_5 &=& {27\over 4}{M_5^3\over m_6^2}{\left}(\partial_M\pi\partial_N\pi\;\partial^M\partial^N\pi^{(0)}- (\partial_M\pi)^2\,\Box_5\pi^{(0)}{\right}) \cr &=& -{9\over4}{\Lambda\over m_6^2}\;\delta(z) \;(\partial_\mu \pi)^2~. \label{addition}\end{aligned}$$ This is a localized kinetic term for $\pi$ that contributes to the 4D effective action with a healthy sign when $\Lambda>0$. Therefore for large enough $\Lambda$ the kinetic matrix for the 2 scalars (\[matrix\]) becomes positive and the ghost is absent. This can also be seen by computing the one particle exchange amplitude. With the addition of (\[addition\]), the effective 4D equations are $$\begin{aligned} M_4^2\,({{{\cal E}}{\widetilde}h})_{\mu\nu} &- 2 M_5^3 \sqrt{-\Box_4} {\left}( {\widetilde}h_{\mu\nu} - {\widetilde}h \, \eta_{\mu\nu} {\right})\nonumber \\ & = - 2 \, T_{\mu\nu} + 6 M_5^3 \sqrt{-\Box_4} \,\pi \,\eta_{\mu\nu} ~, \label{einst5} \\ {3 \Lambda \over 2 m_6^2 } \; \Box_4 \pi&= M_5^3\, \sqrt{-\Box_4} \, \widetilde h ~. \label{pi}\end{aligned}$$ Using the Bianchi identities and the conservation of $T_{\mu\nu}$, the double divergence of leads to $$\label{dd} M_5^3 \sqrt{-\Box_4} {\left}( ({{\cal E}}{\widetilde}h)_\mu^\mu + 6 \Box_4 \pi {\right})=0~,$$ where we have used that $({{\cal E}}{\widetilde}h)_\mu^\mu=2(\partial^\mu\partial^\nu {\widetilde}h_{\mu\nu}-\Box_4 {\widetilde}h)$. 
On the other hand, the trace of in conjunction with and , leads to $$\label{pi4} M_4^2 \, {{\cal O}}_{(\pi)} \; \pi= - 2 T_\mu^\mu ~, $$ where ${{\cal O}}_{(\pi)}\equiv {\left}[9{\left}(\Lambda/ m_6^2 M_4^2{\right})-6{\right}] \Box_4 - 24 \, m_5 \sqrt{-\Box_4} $.\ Combining , and , one derives that the physical metric is, up to pure gauge terms, $$\label{resultTension} \widetilde h_{\mu\nu}= {-2\over M_4^2} \Biggr\{ {1\over{{\cal O}}} {\left}(T_{\mu\nu}-{T\over3} \eta_{\mu\nu}{\right}) +{1\over {{\cal O}}_{(\pi)}} \, T \;\eta_{\mu\nu} \Biggr\} ~, $$ where ${{\cal O}}=\Box_4-2\, m_5 \sqrt{-\Box_4}$. The tensor structure of the amplitude interpolates between $-1/4$ in the IR and $$-{1\over3}+ {1\over6}{\left}({3\over 2}{\Lambda\over M_4^2 m_6^2} - 1{\right})^{-1}$$ in the UV. The amplitude above corresponds to the exchange of massive spin 2 fields and a scalar obeying Eq. (\[pi4\]). The (DGP-like) kinetic term for the scalar is positive as long as, $$\label{lambdaMin} \Lambda > {2\over 3} M_4^2 m_6^2 ~.$$ In this regime we see that the localized ghost disappears and the scalar sector is composed of a healthy resonance. In the limit we are considering the tension required is consistent with having six non compact dimensions. Indeed, requiring that the deficit angle in the bulk is less than $2\pi$ leads to $\Lambda< 2\pi M_6^4$. It follows that, $$\label{window} 3\pi M_5^6 > {M_4^2 \,M_6^4 }~,$$ which is always satisfied in the $5D$ decoupling limit and displays the necessity of the induced term on the codimension 1 brane. Moreover the condition above is equivalent to having $m_6 < m_5$, which suggests that in order to avoid the ghost one should cascade from the highest dimension down to 4D ’step by step’.\ From a phenomenological point of view, observations require that $m_5\lesssim H_0$, the present Hubble scale. The most interesting possibility is when the $6D$ crossover scale is larger but of similar order. 
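A quick numerical sanity check of these scales and of the ghost-free window $\frac23 M_4^2 m_6^2 < \Lambda < 2\pi M_6^4$ derived above. The values used here are illustrative assumptions consistent with the text's order-of-magnitude estimates: the reduced Planck mass for $M_4$, an approximate present Hubble rate $H_0$, $M_5$ at $10$ MeV, and $M_6$ at a meV, all in GeV.

```python
import math

# Approximate reduced Planck mass and present Hubble rate, in GeV.
M4 = 2.4e18
H0 = 1.4e-42

# Scales quoted in the text (illustrative): M5 ~ 10 MeV, M6 ~ 1 meV.
M5 = 1.0e-2
M6 = 1.0e-12

m5 = M5 ** 3 / M4 ** 2    # 5D crossover scale
m6 = M6 ** 4 / M5 ** 3    # 6D crossover scale

# Ghost-free window for the tension: (2/3) M4^2 m6^2 < Lambda < 2 pi M6^4.
lam_min = (2.0 / 3.0) * M4 ** 2 * m6 ** 2
lam_max = 2.0 * math.pi * M6 ** 4
```

With these numbers the window is non-empty, which is the statement $3\pi M_5^6 > M_4^2 M_6^4$ of Eq. (\[window\]), and both crossover scales indeed land at or below the Hubble scale.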
Assuming that the formulas above can be extrapolated in this regime for a Planckian $M_4$, this implies that $M_5$ is of order $10\,\mathrm{MeV}$ and $M_6 \sim \mathrm{meV}$. The latter also sets the scale of $\Lambda$.\ Discussion ========== In this note we have presented a six dimensional DGP model with cascading localized kinetic terms. The model interpolates between a 6D behavior at large distances and a 4D one at short distances with an intermediate 5D regime. The kinetic terms regularize the divergent codimension 2 behavior. The model is ghost-free, at least for a certain range of parameters, if there is a large enough tension on the codimension 2 brane.\ We have left several questions for future study. At the linear level the tensor structure of the graviton propagator is inconsistent with observations. In the context of DGP this was shown not to be a problem because the non-linearities restore the correct tensor structure [@vainshtein]. A hint of a similar phenomenon in the present model is given by the longitudinal terms of the graviton propagator. These are singular when the mass parameters $m_{5,6}$ vanish and give large contributions to nonlinear diagrams. In fact, we expect a ‘double’ Vainshtein effect. For dense enough sources, the non-linearities should first decouple the extra 5D scalar mode, restoring 5D behavior, followed by another step to 4D. Another important direction to study is cosmology. The model has the intriguing codimension 2 feature that tension does not curve the space. This is obviously of interest for the cosmological constant problem.\ *Acknowledgements* We thank Gregory Gabadadze for useful discussions. This work is supported in part by NSF grant PHY-0245068 and by the David and Lucile Packard Foundation Fellowship for Science and Engineering (GD and MR), by DURSI under grant 2005 BP-A 10131 (OP), and by NSERC and MRI (JK, AJT, CdR and SH). G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B [**484**]{}, 112 (2000) \[arXiv:hep-th/0002190\]. S. L. 
Dubovsky and V. A. Rubakov, Phys. Rev.  D [**67**]{}, 104014 (2003) \[arXiv:hep-th/0212222\]. G. Gabadadze and M. Shifman, Phys. Rev.  D [**69**]{}, 124032 (2004) \[arXiv:hep-th/0312289\]. N. Kaloper and D. Kiley, JHEP [**0705**]{}, 045 (2007) \[arXiv:hep-th/0703190\]. G. R. Dvali and G. Gabadadze, Phys. Rev.  D [**63**]{}, 065007 (2001) \[arXiv:hep-th/0008054\]. M. A. Luty, M. Porrati and R. Rattazzi, JHEP [**0309**]{} (2003) 029 \[arXiv:hep-th/0303116\]; A. Nicolis and R. Rattazzi, JHEP [**0406**]{} (2004) 059. N. Arkani-Hamed, H. Georgi and M. D. Schwartz, Annals Phys.  [**305**]{}, 96 (2003) \[arXiv:hep-th/0210184\]. G. Dvali, G. Gabadadze, O. Pujolas and R. Rahman, Phys. Rev.  D [**75**]{}, 124013 (2007) \[arXiv:hep-th/0612016\]. C. Deffayet, G.  Dvali, G. Gabadadze and A. Vainshtein, Phys. Rev.  D [**65**]{}, 044026 (2002) \[arXiv:hep-th/0106001\]. [^1]: The massless 5D graviton decomposes in a continuum of massive 4D gravitons. Therefore in the unitary gauge the boundary effective action will have PF structure.
--- abstract: 'In this paper, we investigate the relation between Sobolev-type embeddings of Hajłasz-Besov spaces (and also Hajłasz-Triebel-Lizorkin spaces) defined on a metric measure space $(X,d,\mu)$ and a lower bound for the measure $\mu.$ We prove that if the measure $\mu$ satisfies $\mu(B(x,r))\geq cr^Q$ for some $Q>0$ and for any ball $B(x,r)\subset X,$ then the Sobolev-type embeddings hold on balls for both these spaces. On the other hand, if the Sobolev-type embeddings hold in a domain $\Omega\subset X,$ then we prove that the domain $\Omega$ satisfies the so-called measure density condition, i.e., $\mu(B(x,r)\cap\Omega)\geq cr^Q$ holds for any ball $B(x,r)\subset X,$ where $X=(X,d,\mu)$ is an Ahlfors $Q$-regular and geodesic metric measure space.' address: 'Discipline of Mathematics, Indian Institute of Technology Indore, Simrol, Indore-453552, India' author: - Nijjwal Karak bibliography: - 'embedding.bib' title: 'Measure density and Embeddings of Hajłasz-Besov and Hajłasz-Triebel-Lizorkin spaces' --- [^1] Keywords: Metric measure space, Hajłasz-Besov space, Hajłasz-Triebel-Lizorkin space, measure density.\ 2010 Mathematics Subject Classification: 46E35, 42B35. Introduction ============ The most important result of the classical theory of Sobolev spaces is the Sobolev embedding theorem. Embeddings of fractional Sobolev spaces $W^{s,p}(\Omega),$ where $\Omega$ is a domain in $\mathbb{R}^n$ and $0<s<1,$ have been established in [@EGE12] when $p\geq 1$ and in [@Zho15] when $p<1.$ In the metric space setting, especially for the Hajłasz-Sobolev space $M^{1,p}(X),$ Hajłasz has been able to find similar embeddings on balls provided that the measure of the balls has a lower bound, see Theorem 8.7 of [@Haj03]. We assume here and throughout the paper that $X=(X,d,\mu)$ is a *metric measure space* equipped with a metric $d$ and a Borel regular measure $\mu$ on $X$ such that all balls defined by $d$ have finite and positive measures.
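For orientation, we recall the classical Euclidean statements that the results below generalize (standard facts, stated here for context only):

```latex
% Classical Sobolev embedding on \mathbb{R}^n (1\le p<n):
W^{1,p}(\mathbb{R}^n)\hookrightarrow L^{p^*}(\mathbb{R}^n),
\qquad p^*=\frac{np}{n-p},
% and its fractional analogue for 0<s<1 with sp<n:
W^{s,p}(\mathbb{R}^n)\hookrightarrow L^{np/(n-sp)}(\mathbb{R}^n).
```

The metric-space results below produce the same critical exponent with the Euclidean dimension $n$ replaced by the exponent $Q$ from the lower bound on the measure.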
In this paper we have proved similar embeddings on balls for homogeneous Hajłasz-Besov spaces $\dot{N}^s_{p,q}(X)$ and also for homogeneous Hajłasz-Triebel-Lizorkin spaces $\dot{M}^s_{p,q}(X),$ see Sections 3 and 4. For the definitions of $M^{s,p}(X),$ $\dot{M}^{s,p}(X),$ $M^s_{p,q}(X),$ $\dot{M}^s_{p,q}(X),$ $N^s_{p,q}(X)$ and $\dot{N}^s_{p,q}(X)$ see Section 2. Among several possible definitions of Besov and Triebel-Lizorkin spaces in the metric setting, the pointwise definition introduced in [@KYZ11] appears to be very useful. This approach is based on the definition of Hajłasz-Sobolev space; it leads to the classical Besov and Triebel-Lizorkin spaces in the setting of Euclidean space, [@KYZ11 Theorem 1.2] and it gives a simple way to define these spaces on a measurable subset of $\mathbb{R}^n.$ Let $(X,d)$ be a metric space equipped with a measure $\mu.$ A measurable set $S\subset X$ is said to satisfy a *measure density condition*, if there exists a constant $c_m>0$ such that $$\label{measuredensity} \mu(B(x,r)\cap S)\geq c_m\mu(B(x,r))$$ for all balls $B(x,r)$ with $x\in S$ and $0<r\leq 1.$ Note that sets satisfying such a condition are sometimes called regular sets in the literature. If the measure $\mu$ is doubling, then the upper bound 1 for the radius $r$ can be omitted. If a set $S$ satisfies the measure density condition, then we have $\mu(\overline{S}\setminus S)=0,$ [@Shv07 Lemma 2.1]. Some examples of sets satisfying the measure density condition are Cantor-like sets such as Sierpiński carpets of positive measure.\ In [@HKT08b Theorem 1], the authors have proved that if the Sobolev embedding holds in a domain $\Omega\subset\mathbb{R}^n,$ in any of the possible cases, then $\Omega$ satisfies the measure density condition. The same result for fractional Sobolev spaces was obtained by Zhou [@Zho15]. In this paper, we have obtained similar results for Hajłasz-Besov spaces $N^s_{p,q}$ and Hajłasz-Triebel-Lizorkin spaces $M^s_{p,q},$ see Section 5.
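As a simple illustration of the measure density condition (our example, not taken from the references above), consider a closed half-space with the Lebesgue measure:

```latex
% Half-space S=\{x\in\mathbb{R}^n:\,x_1\ge 0\} with \mu = Lebesgue measure.
% For x\in S and r>0, the half-ball \{y\in B(x,r):\,y_1\ge x_1\} lies in S, so
\mu(B(x,r)\cap S)\;\ge\;\tfrac{1}{2}\,\mu(B(x,r)),
% i.e. the measure density condition holds with c_m=\tfrac{1}{2}.
```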
The idea of the proof is borrowed from [@HIT16 Theorem 6.1] where the authors showed that an $M^s_{p,q}$-extension domain (or an $N^s_{p,q}$-extension domain) satisfies the measure density condition.\ See [@HHL] for geometric characterizations of embedding theorems for these spaces.\ Notation used in this paper is standard. The symbol $c$ or $C$ will be used to designate a general constant which is independent of the main parameters and whose value may change even within a single string of estimates. The symbol $A\lesssim B$ or $B \gtrsim A$ means that $A\leq CB$ for some constant $C.$ If $A\lesssim B$ and $B\lesssim A,$ then we write $A\approx B.$ For any locally integrable function $u$ and $\mu$-measurable set $A,$ we denote by $\dashint_{A}u$ the integral average of $u$ on $A,$ namely, $\dashint_{A}u:=\frac{1}{\mu(A)}\int_Au.$ Definitions and Preliminaries ============================= Besov and Triebel-Lizorkin spaces are certain generalizations of fractional Sobolev spaces. There are several ways to define these spaces in the Euclidean setting and also in the metric setting. For various definitions of these spaces in the metric setting, see [@GKS10], [@GKZ13], [@KYZ11] and the references therein. In this paper, we use the approach based on pointwise inequalities, introduced in [@KYZ11]. Let $S\subset X$ be a measurable set and let $0<s<\infty.$ A sequence of nonnegative measurable functions $(g_k)_{k\in\mathbb{Z}}$ is a fractional $s$-gradient of a measurable function $u:S\rightarrow [-\infty,\infty]$ in $S,$ if there exists a set $E\subset S$ with $\mu(E)=0$ such that $$\label{Hajlasz} \vert u(x)-u(y)\vert\leq d(x,y)^s\left(g_k(x)+g_k(y)\right)$$ for all $k\in\mathbb{Z}$ and for all $x,y\in S\setminus E$ satisfying $2^{k-1}\leq d(x,y)<2^{k}.$ The collection of all fractional $s$-gradients of $u$ is denoted by $\mathbb{D}^s(u).$ Let $S\subset X$ be a measurable set.
For $0<p,q\leq \infty$ and a sequence $\vec{f}=(f_k)_{k\in\mathbb{Z}}$ of measurable functions, we define $$\Vert (f_k)_{k\in\mathbb{Z}}\Vert_{L^p(S,l^q)}=\big\Vert \Vert (f_k)_{k\in\mathbb{Z}}\Vert_{l^q} \big\Vert_{L^p(S)}$$ and $$\Vert (f_k)_{k\in\mathbb{Z}}\Vert_{l^q(L^p(S))}=\big\Vert (\Vert (f_k)\Vert_{L^p(S)})_{k\in\mathbb{Z}} \big\Vert_{l^q},$$ where $$\Vert (f_k)_{k\in\mathbb{Z}}\Vert_{l^q}= \begin{cases} (\sum_{k\in\mathbb{Z}}\vert f_k\vert^q)^{1/q},& ~\text{when}~0<q<\infty,\\ \sup_{k\in\mathbb{Z}}\vert f_k\vert,& ~\text{when}~q=\infty. \end{cases}$$ Let $S\subset X$ be a measurable set. Let $0<s<\infty$ and $0<p,q\leq\infty.$ The *homogeneous Hajłasz-Triebel-Lizorkin space* $\dot{M}^s_{p,q}(S)$ consists of measurable functions $u:S\rightarrow [-\infty,\infty],$ for which the (semi)norm $$\Vert u\Vert_{\dot{M}^s_{p,q}(S)}=\inf_{\vec{g}\in\mathbb{D}^s(u)}\Vert\vec{g}\Vert_{L^p(S,l^q)}$$ is finite. The (non-homogeneous) *Hajłasz-Triebel-Lizorkin space* $M^s_{p,q}(S)$ is $\dot{M}^s_{p,q}(S)\cap L^p(S)$ equipped with the norm $$\Vert u\Vert_{M^s_{p,q}(S)}=\Vert u\Vert_{L^p(S)}+\Vert u\Vert_{\dot{M}^s_{p,q}(S)}.$$ Similarly, the *homogeneous Hajłasz-Besov space* $\dot{N}^s_{p,q}(S)$ consists of measurable functions $u:S\rightarrow [-\infty,\infty],$ for which the (semi)norm $$\Vert u\Vert_{\dot{N}^s_{p,q}(S)}=\inf_{\vec{g}\in\mathbb{D}^s(u)}\Vert\vec{g}\Vert_{l^q(L^p(S))}$$ is finite and the (non-homogeneous) *Hajłasz-Besov space* $N^s_{p,q}(S)$ is $\dot{N}^s_{p,q}(S)\cap L^p(S)$ equipped with the norm $$\Vert u\Vert_{N^s_{p,q}(S)}=\Vert u\Vert_{L^p(S)}+\Vert u\Vert_{\dot{N}^s_{p,q}(S)}.$$ The space $M^s_{p,q}(\mathbb{R}^n)$ given by the metric definition coincides with the Triebel-Lizorkin space $F^s_{p,q}(\mathbb{R}^n),$ defined via the Fourier analytic approach, when $0<s<1,$ $n/(n+s)<p<\infty$ and $0<q\leq \infty,$ see [@KYZ11]. 
Similarly, $N^s_{p,q}(\mathbb{R}^n)$ coincides with the Besov space $B^s_{p,q}(\mathbb{R}^n)$ for $0<s<1,$ $n/(n+s)<p<\infty$ and $0<q\leq \infty,$ see [@KYZ11]. For the definitions of $F^s_{p,q}(\mathbb{R}^n)$ and $B^s_{p,q}(\mathbb{R}^n),$ we refer to [@Tri83] and [@Tri92]. Let $S\subset X$ be a measurable set. Let $0<s<\infty$ and $0<p\leq\infty.$ A nonnegative measurable function $g$ is an $s$-gradient of a measurable function $u$ in $S$ if there exists a set $E\subset S$ with $\mu(E)=0$ such that for all $x,y\in S\setminus E,$ $$\vert u(x)-u(y)\vert\leq d(x,y)^s(g(x)+g(y)).$$ The collection of all $s$-gradients of $u$ is denoted by $\mathcal{D}^s(u).$ The *homogeneous Hajłasz-Sobolev space* $\dot{M}^{s,p}(S)$ consists of measurable functions $u$ for which $$\Vert u\Vert_{\dot{M}^{s,p}(S)}=\inf_{g\in\mathcal{D}^s(u)}\Vert g\Vert_{L^p(S)}$$ is finite. The *Hajłasz-Sobolev space* $M^{s,p}(S)$ is $\dot{M}^{s,p}(S)\cap L^p(S)$ equipped with the norm $$\Vert u\Vert_{M^{s,p}(S)}=\Vert u\Vert_{L^p(S)}+\Vert u\Vert_{\dot{M}^{s,p}(S)}.$$ Note that if $0<s<\infty$ and $0<p\leq\infty,$ then $\dot{M}^s_{p,\infty}(X)=\dot{M}^{s,p}(X),$ [@KYZ11 Proposition 2.1].\ Let $(X,d,\mu)$ be a metric measure space. The measure $\mu$ is called *doubling* if there exists a constant $C_{\mu}\geq 1$ such that $$\mu(B(x,2r))\leq C_{\mu}\,\mu(B(x,r))$$ for each $x\in X$ and $r>0.$ We call a triple $(X,d,\mu)$ a *doubling metric measure space* if $\mu$ is a doubling measure on $X.$\ As a special case of doubling spaces we consider $Q$-regular spaces.
The space $X$ is said to be $Q$-regular, $Q>1,$ if there is a constant $c_Q\geq 1$ such that $$c_Q^{-1}r^Q\leq \mu(B(x,r))\leq c_Qr^Q$$ for each $x\in X$ and for all $0<r\leq\operatorname{diam}X.$\ A metric space $X$ is said to be *geodesic* if every pair of points in the space can be joined by a curve whose length is equal to the distance between the points.\ We will often use the following elementary inequality, which holds whenever $a_i\geq 0$ for all $i$ and $0<\beta\leq 1,$ $$\label{inequality} \sum_{i\in\mathbb{Z}}a_i\leq\Big(\sum_{i\in\mathbb{Z}}a_i^{\beta}\Big)^{1/\beta}.$$ Hajłasz-Triebel-Lizorkin spaces =============================== We use the idea of Hajłasz from [@Haj03] to prove the following theorem. We will skip the case $q=\infty$ as it is proved in [@Haj03 Theorem 8.7] when $s=1$ and the other cases can be derived by modifying its proof. \[embedding\] Let $(X,d,\mu)$ be a metric measure space and $B_0$ be a fixed ball of radius $r_0.$ Let us assume that the measure $\mu$ has a lower bound, that is, there exist constants $b, Q>0$ such that $\mu(B(x,r))\geq br^Q$ whenever $B(x,r)\subset 2B_0.$ Let $u\in \dot{M}^s_{p,q}(2B_0)$ and $\vec{g}=(g_j)\in \mathbb{D}^s(u)$ where $0<p,q,s<\infty.$ Then there exist constants $C,\,C_1,\,C_2$ and $C_3$ such that\ $1.$ If $0<sp<Q,$ then $u\in L^{p^*}(B_0),$ $p^*=\frac{Qp}{Q-sp}$ and $$\label{embed} \inf_{c\in\mathbb{R}}\left(\dashint_{B_0}\vert u-c\vert^{p^*}\,d\mu\right)^{\frac{1}{p^*}}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ $2.$ If $sp=Q,$ then $$\label{embedb} \dashint_{B_0}\exp\left(C_1b^{1/Q}\frac{\vert u-u_{B_0}\vert}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\,d\mu\leq C_2.$$ $3.$ If $sp>Q,$ then $$\label{embedc} \Vert u-u_{B_0}\Vert_{L^{\infty}(B_0)}\leq
C_3\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ In particular, for $x,y\in B_0,$ we have $$\label{embedc'} \vert u(x)-u(y)\vert\leq cb^{-1/p}d(x,y)^{s-Q/p}\left(\int_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ We may assume by selecting an appropriate constant that $\operatorname*{ess\,inf}_E u=0,$ where $E\subset 2B_0$ is any subset of positive measure, since subtracting a constant from $u$ will not affect the inequality . The set $E$ will be chosen later. With a correct choice of $E$ we will prove with $(\dashint_{B_0}\vert u\vert ^{p^*}\,d\mu)^{1/p^*}$ on the left hand side.\ If $\sum_{j=-\infty}^{\infty}g_j^q=0$ a.e., then $g_j=0$ a.e. for all $j,$ which implies that $u$ is constant and hence the theorem follows. Thus we may assume that $\int_{2B_0}(\sum_j g_j^q)^{\frac{p}{q}}\,d\mu>0$. We may also assume that $$\label{lowerbound} \bigg(\sum_{j=-\infty}^{\infty}g_j(x)^q\bigg)^{\frac{1}{q}}\geq 2^{-(1+\frac{1}{p})}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j(x)^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}>0$$ for all $x\in 2B_0$ as otherwise we can replace $\left(\sum g_j(x)^q\right)^{1/q}$ by $$\left(\sum \widetilde{g}_j(x)^q\right)^{1/q}=\left(\sum g_j(x)^q\right)^{1/q}+\left(\dashint_{2B_0}\bigg(\sum g_j(x)^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ Let us define auxiliary sets $$E_k=\bigg\{x\in 2B_0:\Big(\sum_{j=-\infty}^{\infty}g_j(x)^q\Big)^{\frac{1}{q}}\leq 2^k\bigg\},\quad k\in\mathbb{Z}.$$ Clearly $E_k\subset E_{k+1}$ for all $k.$ Observe that $$\label{rhs} \int_{2B_0}\Big(\sum_j g_j^q\Big)^{\frac{p}{q}}\,d\mu\approx\sum_{k=-\infty}^{\infty}2^{kp}\mu(E_k\setminus E_{k-1}).$$ Let $a_k=\sup_{B_0\cap E_k}\vert u\vert.$ Obviously, $a_k\leq a_{k+1}$ and $$\label{lhs} \int_{B_0}\vert u\vert^{p^*}\,d\mu\leq\sum_{k=-\infty}^{\infty}a_k^{p^*}\mu(B_0\cap (E_k\setminus
E_{k-1})).$$ By Chebyschev’s inequality, we get an upper bound for the measure of the complement of $E_k$ $$\begin{aligned} \label{Chebyschev} \mu(2B_0\setminus E_k) &=& \mu\bigg(\Big\{x\in 2B_0:\Big(\sum_{j=-\infty}^{\infty}g_j(x)^q\Big)^{\frac{1}{q}}>2^k\Big\}\bigg) \nonumber \\ &\leq & 2^{-kp}\int_{2B_0}\Big(\sum_j g_j^q\Big)^{\frac{p}{q}}\,d\mu .\end{aligned}$$ Lower bound implies that $E_k=\emptyset$ for sufficiently small $k.$ On the other hand $\mu(E_k)\rightarrow \mu(2B_0)$ as $k\rightarrow\infty.$ Hence there is $\widetilde{k}_0\in\mathbb{Z}$ such that $$\label{conv} \mu(E_{\widetilde{k}_0-1})<\frac{\mu(2B_0)}{2}\leq\mu(E_{\widetilde{k}_0}).$$ The inequality on the right hand side gives $E_{\widetilde{k}_0}\neq\emptyset$ and hence according to $$\label{first} 2^{-(1+\frac{1}{p})}\left(\dashint_{2B_0}\Big(\sum_{j=-\infty}^{\infty}g_j(x)^q\Big)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}\leq \Big(\sum_{j=-\infty}^{\infty}g_j(x)^q\Big)^{\frac{1}{q}}\leq 2^{\widetilde{k}_0}$$ for $x\in E_{\widetilde{k}_0}.$ At the same time the inequality on the left hand side of together with imply that $$\label{second} \frac{\mu(2B_0)}{2}<\mu(2B_0\setminus E_{\widetilde{k}_0-1})\leq 2^{-(\widetilde{k}_0-1)p}\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu.$$ Combining the inequalities and we obtain $$\label{combine} 2^{-(1+\frac{1}{p})}\left(\dashint_{2B_0}\Big(\sum_{j=-\infty}^{\infty}g_j(x)^q\Big)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}\leq 2^{\widetilde{k}_0}\leq 2^{(1+\frac{1}{p})}\left(\dashint_{2B_0}\left(\sum_{j=-\infty}^{\infty}g_j(x)^q\right)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ Choose the least integer $\ell\in\mathbb{Z}$ such that $$\label{ell} 2^{\ell}>\max\bigg\{2^{1+1/p}\Big(\frac{2}{1-2^{-p/Q}}\Big)^{Q/p}, 1\bigg\}\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}$$ and set $k_0=\widetilde{k}_0+\ell.$ The reason behind such a choice of $\ell$ and $k_0$ will be understood later.
Note that $\ell>0,$ by the lower bound of the measure $\mu,$ and hence yields $\mu(E_{k_0})>0.$ The inequalities in become $$\label{final} 2^{k_0}\approx (br_0^Q)^{-1/p}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/p}.$$ Suppose that $\mu(B_0\setminus E_{k_0})>0$ (we will handle the other case at the end of the proof). For $k\geq k_0+1,$ set $$\label{radii} t_k:=2b^{-1/Q}\mu(2B_0\setminus E_{k-1})^{1/Q}.$$ Suppose now that $k\geq k_0+1$ is such that $\mu((E_k\setminus E_{k-1})\cap B_0)>0$ (if such a $k$ does not exist, then $\mu(B_0\setminus E_{k_0})=0,$ contradicting our assumption). Then in particular $t_k>0.$ Pick a point $x_k\in (E_k\setminus E_{k-1})\cap B_0$ and assume that $B(x_k,t_k)\subset 2B_0.$ Then $$\mu(B(x_k,t_k))\geq bt_k^Q>\mu(2B_0\setminus E_{k-1})$$ and hence $B(x_k,t_k)\cap E_{k-1}\neq\emptyset.$ Thus there is $x_{k-1}\in E_{k-1}$ such that $$d(x_k,x_{k-1})<t_k\leq 2b^{-1/Q}2^{-(k-1)p/Q}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/Q},$$ by and .
Repeating this construction in a similar fashion we obtain for $k\geq k_0+1,$ a sequence of points $$\begin{gathered} x_k \in (E_k\setminus E_{k-1})\cap B_0,\\ x_{k-1} \in E_{k-1}\cap B(x_k,t_k),\\ \vdots \\ x_{k_0}\in E_{k_0}\cap B(x_{k_0+1},t_{k_0+1}), \end{gathered}$$ such that $$\label{distance} d(x_{k-i},x_{k-(i+1)})<t_{k-i}\leq 2b^{-1/Q}2^{-(k-(i+1))p/Q}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/Q},$$ for every $i=0,1\ldots,k-k_0-1.$ Hence $$\begin{aligned} \label{totaldistance} d(x_k,x_{k_0})&<&t_k+t_{k-1}+\cdots +t_{k_0+1}\\ \nonumber &\leq & 2b^{-1/Q} \bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/Q}\sum_{n=k_0}^{k-1}2^{-np/Q}\\ \nonumber &=& 2^{-k_0p/Q}\frac{2b^{-1/Q}}{1-2^{-p/Q}}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/Q}.\end{aligned}$$ This is all true provided $B(x_{k-i},t_{k-i})\subset 2B_0$ for $i=0,1,2,\ldots,k-k_0-1.$ That means we require that the right hand side of is $\leq r_0\leq{{\operatorname{dist}}}(B_0,X\setminus 2B_0).$ Our choice of $k_0,$ and guarantee us this requirement. 
Indeed, $$\begin{aligned} 2^{k_0}=2^{\widetilde{k}_0+\ell}&\geq & 2^{\ell}2^{-(1+1/p)}\bigg(\dashint_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/p}\\ &\geq & \left(\frac{2}{1-2^{-p/Q}}\right)^{Q/p}(br_0^Q)^{-1/p}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/p}.\end{aligned}$$ Then $t_k+t_{k-1}+\cdots +t_{k_0+1}\leq r_0\leq{{\operatorname{dist}}}(B_0,X\setminus 2B_0),$ which implies that $B(x_{k-i},t_{k-i})\subset 2B_0$ for all $i=0,1,2,\ldots,k-k_0-1.$\ Now we would like to get some upper bound for $\vert u(x_k)\vert$ for $k\geq k_0+1.$ Towards this end, we write $$\label{difference} \vert u(x_k)\vert \leq\bigg(\sum_{i=0}^{k-k_0-1}\vert u(x_{k-i})-u(x_{k-(i+1)})\vert\bigg)+\vert u(x_{k_0})\vert.$$ Let us first consider the difference $\vert u(x_{k_0+1})-u(x_{k_0})\vert.$ The inequality with $i=k-k_0-1$ gives $$\label{k_0_m_0} d(x_{k_0+1},x_{k_0})< 2^{m_0-k_0p/Q},$$ where $m_0\in\mathbb{Z}$ is such that $$\label{m_0} 2^{m_0-1}\leq 2b^{-1/Q}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/Q}<2^{m_0}.$$ Now using and , we get the following bound for the difference $$\vert u(x_{k_0+1})-u(x_{k_0})\vert\leq \sum_{j=-\infty}^{m_0-k_0p/Q}2^{js}\Big[g_j(x_{k_0+1})+g_j(x_{k_0})\Big].$$ Similarly, we use the fact that $d(x_{k-i},x_{k-(i+1)})<2^{m_0-(k-(i+1))p/Q},$ and obtain $$\vert u(x_{k-i})-u(x_{k-(i+1)})\vert\leq \sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js}\Big[g_j(x_{k-i})+g_j(x_{k-(i+1)})\Big],$$ for all $i=0,1,2,\ldots,k-k_0-1.$ So, the inequality becomes $$\vert u(x_k)\vert \leq\bigg(\sum_{i=0}^{k-k_0-1}\sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js}\Big[g_j(x_{k-i})+g_j(x_{k-i-1})\Big]\bigg)+\vert u(x_{k_0})\vert.$$ Use the Hölder inequality when $q>1$ and the inequality when $q\leq 1,$ and also use the facts that $x_{k-i-1}\in E_{k-i-1}\subset E_{k-i},$ $x_{k-i}\in E_{k-i}$ to obtain $$\begin{aligned} \vert u(x_k)\vert &\leq
&\sum_{i=0}^{k-k_0-1}2^{m_0s-\frac{(k-i-1)ps}{Q}}\bigg(\sum_{j=-\infty}^{m_0-(k-i-1)p/Q}\Big[g_j(x_{k-i})+g_j(x_{k-i-1})\Big]^q\bigg)^{1/q}+\vert u(x_{k_0})\vert\\ &\leq &C2^{m_0s}\sum_{i=0}^{k-k_0-1}2^{-\frac{(k-i-1)ps}{Q}}2^{k-i}+\vert u(x_{k_0})\vert.\end{aligned}$$ Hence with , upon taking supremum over $x_k\in E_k\cap B_0,$ yields $$a_k\leq Cb^{-\frac{s}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{\frac{s}{Q}}\sum_{n=k_0}^{k-1}2^{n(1-\frac{sp}{Q})}+\sup_{E_{k_0}\cap 2B_0}\vert u\vert.$$ To estimate the last term $\sup_{E_{k_0}\cap 2B_0}\vert u\vert,$ we can assume that $\operatorname*{ess\,inf}_{E_{k_0}\cap 2B_0}\vert u\vert=0,$ by the discussion in the beginning of the proof and the fact that $\mu(E_{k_0})>0.$ That means there is a sequence $y_i\in E_{k_0}$ such that $u(y_i)\rightarrow 0$ as $i\rightarrow\infty.$ Therefore, for $x\in E_{k_0}\cap 2B_0$ we have $$\label{lastterm} \vert u(x)\vert=\lim_{i\rightarrow\infty}\vert u(x)-u(y_i)\vert\leq C'r_0^{s}2^{k_0}.$$ So, for $k>k_0$ we conclude that $$\label{conclusion} a_k\leq Cb^{-\frac{s}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{\frac{s}{Q}}\sum_{n=k_0}^{k-1}2^{n(1-\frac{sp}{Q})}+C'r_0^{s}2^{k_0}.$$ For $k\leq k_0,$ we will use the estimate $a_k\leq a_{k_0}\leq C'r_0^s2^{k_0}.$\ **Case I:** $0<sp<Q.$ For every $k\in\mathbb{Z},$ we have $$\begin{aligned} a_k &\leq & Cb^{-\frac{s}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{\frac{s}{Q}}\sum_{n=-\infty}^k2^{n(1-\frac{sp}{Q})}+C'r_0^{s}2^{k_0}\\ &=& Cb^{-\frac{s}{Q}}2^{k(1-\frac{sp}{Q})}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{\frac{s}{Q}}+C'r_0^{s}2^{k_0}.\end{aligned}$$ Applying , and we get $$\begin{aligned} \int_{B_0}\vert u\vert^{p^*}\,d\mu &\leq \sum_{k=-\infty}^{\infty}a_k^{p^*}\mu(B_0\cap (E_k\setminus E_{k-1}))\\ &\leq 
Cb^{-\frac{sp^*}{Q}}\left(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\right)^{\frac{sp^*}{Q}}\sum_{k=-\infty}^{\infty}2^{kp}\mu(E_k\setminus E_{k-1})\\ &\qquad+C'r_0^{sp^*}2^{k_0p^*}\mu(B_0)\\ &\leq C\left(1+\frac{\mu(B_0)}{br_0^Q}\right)b^{-\frac{sp^*}{Q}}\left(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\right)^{p^*/p}.\end{aligned}$$ Using the fact that $1+\mu(B_0)/br_0^Q\leq 2\mu(B_0)/br_0^Q,$ we get inequality .\ Suppose now that $\mu(B_0\setminus E_{k_0})=0.$ In this case, we use the fact that $\int_{B_0}\vert u\vert^{p^*}\,d\mu=\int_{E_{k_0}}\vert u\vert^{p^*}\,d\mu$ and use inequality to obtain inequality .\ **Case II:** $sp=Q.$ It follows from Jensen’s inequality that $$\label{simplification} \left(\dashint_{B_0}\exp\left(C_1b^{1/Q}\frac{\vert u-u_{B_0}\vert}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\,d\mu\right)^{\frac{1}{2}}\leq \dashint_{B_0}\exp\left(C_1b^{1/Q}\frac{\vert u\vert}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\,d\mu$$ and hence it is enough to estimate the integral on the right hand side of . It follows from and that $$\label{zero} a_{k_0}\leq C'r_0^s2^{k_0}\leq C''b^{-1/p}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/p}.$$ Hence from we obtain, for $k>k_0,$ $$\label{greaterzero} a_k\leq \tilde{C}b^{-1/p}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{1/p}(k-k_0).$$ We split the integral on the right hand side of into two parts: we estimate the integrals over $B_0\cap E_{k_0}$ and $B_0\setminus E_{k_0}$ separately. For the first part, we have $$\begin{aligned} \frac{1}{\mu(B_0)}\int_{B_0\cap E_{k_0}}\exp\left(C_1b^{1/Q}\frac{\vert u\vert}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\,d\mu &\leq & \frac{\mu(B_0\cap E_{k_0})}{\mu(B_0)}\exp\left(C_1b^{1/Q}\frac{a_{k_0}}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\\ &\leq &\exp(C_1C''),\end{aligned}$$ where the last inequality follows from . 
The second part is estimated using inequality as follows $$\begin{aligned} & &\frac{1}{\mu(B_0)}\int_{B_0\setminus E_{k_0}}\exp\left(C_1b^{1/Q}\frac{\vert u\vert}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\,d\mu \\ &\leq & \frac{1}{\mu(B_0)}\sum_{k=k_0+1}^{\infty}\exp\left(C_1b^{1/Q}\frac{a_{k}}{\Vert \vec{g}\Vert_{L^p(2B_0,l^q)}}\right)\mu(B_0\cap(E_k\setminus E_{k-1}))\\ &\leq & \frac{1}{\mu(B_0)}\sum_{k=k_0+1}^{\infty}\exp\left(C_1\tilde{C}(k-k_0)\right)\mu(E_k\setminus E_{k-1})\\ &\leq & \frac{2^{-k_0Q}}{\mu(B_0)}\sum_{k=-\infty}^{\infty}2^{kQ}\mu(E_k\setminus E_{k-1})\leq C_3,\end{aligned}$$ where we have chosen $C_1$ so that $\exp(C_1\tilde{C})=2^Q$ and also we have made use of the inequalities and the measure density condition.\ **Case III:** $sp>Q.$ It follows from and , for $k>k_0,$ that $$\begin{aligned} a_k &\leq & Cb^{-\frac{s}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j}g_j^q\Big)^{\frac{p}{q}}\,d\mu\bigg)^{\frac{s}{Q}}\sum_{n=k_0}^{\infty}2^{n(1-\frac{sp}{Q})}+C'r_0^{s}2^{k_0}\\ &\leq & C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.\end{aligned}$$ For $k\leq k_0,$ we have $$a_k\leq a_{k_0}\leq C'r_0^s2^{k_0}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ Therefore $$\Vert u-u_{B_0}\Vert_{L^{\infty}(B_0)}\leq 2\Vert u\Vert_{L^{\infty}(B_0)}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ To prove , let $x,y\in B_0$ such that $d(x,y)\leq r_0/4.$ Let us take another ball $B_1=B(x,2d(x,y)).$ Then $2B_1\subset 2B_0$ and hence yields $$\vert u(x)-u(y)\vert\leq 2\Vert u-u_{B_1}\Vert_{L^{\infty}(B_1)}\leq
Cb^{-1/p}d(x,y)^{s-Q/p}\left(\int_{2B_0}\bigg(\sum_{j=-\infty}^{\infty}g_j^q\bigg)^{\frac{p}{q}}\,d\mu\right)^{\frac{1}{p}}.$$ If $d(x,y)>r_0/4,$ then the upper bound for $\vert u(x)-u(y)\vert$ follows directly from applied on $B_0.$ The proof is complete. Hajłasz-Besov spaces ==================== From [@Tri06 Theorem 1.73], we know that, if $p>n/(n+s)$ and $q\leq p^*,$ then $\Vert u\Vert_{L^{p^*}(\mathbb{R}^n)}\leq C\Vert u\Vert_{B^s_{p,q}(\mathbb{R}^n)}.$ In the following theorem we have proved embeddings for $N^s_{p,q}(B_0)$ when $0<p<\infty,$ $q\leq p$ and $B_0$ is a fixed ball in a metric space $X.$ Let $(X,d,\mu)$ be a metric measure space and $B_0$ be a fixed ball of radius $r_0.$ Let us assume that the measure $\mu$ has a lower bound, that is, there exist constants $b, Q>0$ such that $\mu(B(x,r))\geq br^Q$ whenever $B(x,r)\subset 2B_0.$ Let $u\in \dot{N}^s_{p,q}(2B_0)$ and $(g_j)\in \mathbb{D}^s(u)$ where $0<p,q,s<\infty,$ $q\leq p.$ Then there exist constants $C,\,C_1,\,C_2,\,C_3$ such that\ $1.$ If $0<sp<Q,$ then $u\in L^{p^*}(B_0),$ $p^*=\frac{Qp}{Q-sp}$ and $$\label{embed2} \inf_{c\in\mathbb{R}}\left(\dashint_{B_0}\vert u-c\vert^{p^*}\,d\mu\right)^{\frac{1}{p^*}}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{\infty}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{q}}.$$ $2.$ If $sp=Q,$ then $$\label{embedb2} \dashint_{B_0}\exp\left(C_1b^{1/Q}\frac{\vert u-u_{B_0}\vert}{\Vert \vec{g}\Vert_{l^q(L^p(2B_0))}}\right)\,d\mu\leq C_2.$$ $3.$ If $sp>Q,$ then $$\label{embedc2} \Vert u-u_{B_0}\Vert_{L^{\infty}(B_0)}\leq C_3\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{\infty}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{q}}.$$ We would first like to prove the inequality $$\label{embed2'} \inf_{c\in\mathbb{R}}\left(\dashint_{B_0}\vert u-c\vert^{p^*}\,d\mu\right)^{\frac{1}{p^*}}\leq
C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{\infty}\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}.$$ Once this is proved, the inequality will immediately follow from the inequality , since $q\leq p.$\ We may assume by selecting an appropriate constant that $\operatorname*{ess\,inf}_E u=0,$ where $E\subset 2B_0$ is any subset of positive measure, since subtracting a constant from $u$ will not affect the inequality . The set $E$ will be chosen later. With a correct choice of $E$ we will prove with $(\dashint_{B_0}\vert u\vert ^{p^*}\,d\mu)^{1/p^*}$ on the left hand side.\ If $g_j=0$ a.e. for all $j,$ then $u$ is constant and hence the theorem follows. Thus we may assume that $\int_{2B_0}g_j^p\,d\mu>0$ for all $j.$ We may also assume that $$\label{lowerbound2} g_j(x)\geq 2^{-(1+\frac{1}{p})}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}>0$$ for all $x\in 2B_0$ and all $j\in\mathbb{Z},$ as otherwise we can replace $g_j$ by $$\widetilde{g}_j(x)=g_j(x)+\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}.$$ Let us define auxiliary sets $$E_k=\bigg\{x\in 2B_0:\Big(\sum_{j=-\infty}^{\infty}g_j(x)^p\Big)^{\frac{1}{p}}\leq 2^k\bigg\},\quad k\in\mathbb{Z}.$$ Clearly $E_k\subset E_{k+1}$ for all $k.$ Observe that $$\label{rhs2} \int_{2B_0}\Big(\sum_j g_j^p\Big)\,d\mu\approx\sum_{k=-\infty}^{\infty}2^{kp}\mu(E_k\setminus E_{k-1}).$$ Let $a_k=\sup_{B_0\cap E_k}\vert u\vert.$ Obviously, $a_k\leq a_{k+1}$ and $$\label{lhs2} \int_{B_0}\vert u\vert^{p^*}\,d\mu\leq\sum_{k=-\infty}^{\infty}a_k^{p^*}\mu(B_0\cap (E_k\setminus E_{k-1})).$$ Using Chebyschev’s inequality, we get an upper bound for the measure of the complement of $E_{k}$ $$\begin{aligned} \label{Chebyschev2} \mu(2B_0\setminus E_k) &=& \mu\bigg(\Big\{x\in 2B_0:\Big(\sum_{j=-\infty}^{\infty}g_j(x)^p\Big)^{\frac{1}{p}}>2^k\Big\}\bigg) \nonumber \\ &\leq & 2^{-kp}\int_{2B_0}\Big(\sum_j g_j^p\Big)\,d\mu \nonumber\\ &=& 2^{-kp}\sum_j\Big(\int_{2B_0}g_j^p\,d\mu\Big) \nonumber . 
\end{aligned}$$ Lower bound implies that $E_k=\emptyset$ for sufficiently small $k,$ since $$\label{lowerbound2'} \Big(\sum_{j=-\infty}^{\infty}g_j(x)^p\Big)^{\frac{1}{p}}\geq 2^{-(1+\frac{1}{p})}\bigg(\sum_j\dashint_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{p}}>0.$$ On the other hand $\mu(E_k)\rightarrow \mu(2B_0)$ as $k\rightarrow\infty.$ Hence there is $\widetilde{k}_0\in\mathbb{Z}$ such that $$\label{conv2} \mu(E_{\widetilde{k}_0-1})<\frac{\mu(2B_0)}{2}\leq\mu(E_{\widetilde{k}_0}).$$ The inequality on the right hand side gives $E_{\widetilde{k}_0}\neq\emptyset$ and hence according to $$\label{first2} 2^{-(1+\frac{1}{p})}\bigg(\sum_j\dashint_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{p}}\leq \Big(\sum_{j=-\infty}^{\infty}g_j(x)^p\Big)^{\frac{1}{p}}\leq 2^{\widetilde{k}_0}$$ for $x\in E_{\widetilde{k}_0}.$ At the same time the inequality on the left hand side of together with imply that $$\label{second2} \frac{\mu(2B_0)}{2}<\mu(2B_0\setminus E_{\widetilde{k}_0-1})\leq 2^{-(\widetilde{k}_0-1)p}\sum_j\int_{2B_0}g_j^p\,d\mu.$$ Combining the inequalities and , we obtain $$\label{combine2} 2^{-(1+\frac{1}{p})}\bigg(\sum_j\dashint_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{p}}\leq 2^{\widetilde{k}_0}\leq 2^{(1+\frac{1}{p})}\bigg(\sum_j\dashint_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{p}}.$$ Choose the least integer $\ell\in\mathbb{Z}$ such that $$\label{ell2} 2^{\ell}>\max\bigg\{2^{1+1/p}\Big(\frac{2}{1-2^{-p/Q}}\Big)^{Q/p}, 1\bigg\}\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}$$ and set $k_0=\widetilde{k}_0+\ell.$ The reason behind such a choice of $\ell$ and $k_0$ will be understood later. Note that $\ell>0,$ by the lower bound of the measure $\mu,$ and hence yields $\mu(E_{k_0})>0.$ The inequalities in become $$\label{final2} 2^{k_0}\approx (br_0^Q)^{-1/p}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{p}}.$$ Suppose that $\mu(B_0\setminus E_{k_0})>0$ (we will handle the other case at the end of the proof).
For $k\geq k_0+1,$ set $$\label{radii2} t_k:=2b^{-1/Q}\mu(2B_0\setminus E_{k-1})^{1/Q}.$$ Suppose now that $k\geq k_0+1$ is such that $\mu((E_k\setminus E_{k-1})\cap B_0)>0$ (if such a $k$ does not exist, then $\mu(B_0\setminus E_{k_0})=0,$ contradicting our assumption). Then in particular $t_k>0.$ Pick a point $x_k\in (E_k\setminus E_{k-1})\cap B_0$ and assume that $B(x_k,t_k)\subset 2B_0.$ Then $$\mu(B(x_k,t_k))\geq bt_k^Q>\mu(2B_0\setminus E_{k-1})$$ and hence $B(x_k,t_k)\cap E_{k-1}\neq\emptyset.$ Thus there is $x_{k-1}\in E_{k-1}$ such that $$d(x_k,x_{k-1})<t_k\leq 2b^{-1/Q}2^{-(k-1)\frac{p}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{Q}},$$ by and . Repeating this construction in a similar fashion we obtain for $k\geq k_0+1,$ a sequence of points $$\begin{gathered} x_k \in (E_k\setminus E_{k-1})\cap B_0,\\ x_{k-1} \in E_{k-1}\cap B(x_k,t_k),\\ \vdots \\ x_{k_0}\in E_{k_0}\cap B(x_{k_0+1},t_{k_0+1}), \end{gathered}$$ such that $$\label{distance2} d(x_{k-i},x_{k-(i+1)})<t_{k-i}\leq 2b^{-1/Q}2^{-(k-(i+1))\frac{p}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{Q}},$$ for every $i=0,1,\ldots,k-k_0-1.$ Hence $$\begin{aligned} \label{totaldistance2} d(x_k,x_{k_0})&<&t_k+t_{k-1}+\cdots +t_{k_0+1}\\ \nonumber &\leq & 2b^{-1/Q} \bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{Q}}\sum_{n=k_0}^{k-1}2^{-np/Q}\\ \nonumber &=& 2^{-k_0p/Q}\frac{2b^{-1/Q}}{1-2^{-p/Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{Q}}.\end{aligned}$$ This is all true provided $B(x_{k-i},t_{k-i})\subset 2B_0$ for $i=0,1,2,\ldots,k-k_0-1.$ That means we require that the right hand side of is $\leq r_0\leq{{\operatorname{dist}}}(B_0,X\setminus 2B_0).$ Our choice of $k_0,$ and guarantee us this requirement.\ Now we would like to get some upper bound for $\vert u(x_k)\vert$ for $k\geq k_0+1.$ Towards this end, we write $$\label{difference2} \vert u(x_k)\vert \leq\bigg(\sum_{i=0}^{k-k_0-1}\vert u(x_{k-i})-u(x_{k-(i+1)})\vert\bigg)+\vert u(x_{k_0})\vert.$$ Let us first consider the difference $\vert
u(x_{k_0+1})-u(x_{k_0})\vert.$ The inequality with $i=k-k_0-1$ gives $$\label{k_0_m_02} d(x_{k_0+1},x_{k_0})< 2^{m_0-k_0p/Q},$$ where $m_0\in\mathbb{Z}$ is such that $$\label{m_02} 2^{m_0-1}\leq 2b^{-1/Q}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{1}{Q}}<2^{m_0}.$$ Now using and , we get the following bound for the difference $$\vert u(x_{k_0+1})-u(x_{k_0})\vert\leq \sum_{j=-\infty}^{m_0-k_0p/Q}2^{js}\Big[g_j(x_{k_0+1})+g_j(x_{k_0})\Big].$$ Similarly, we use the fact that $d(x_{k-i},x_{k-(i+1)})<2^{m_0-(k-(i+1))p/Q},$ and obtain $$\vert u(x_{k-i})-u(x_{k-(i+1)})\vert\leq \sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js}\Big[g_j(x_{k-i})+g_j(x_{k-(i+1)})\Big],$$ for all $i=0,1,2,\ldots,k-k_0-1.$ So, the inequality becomes $$\vert u(x_k)\vert \leq\bigg(\sum_{i=0}^{k-k_0-1}\sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js}\Big[g_j(x_{k-i})+g_j(x_{k-i-1})\Big]\bigg)+\vert u(x_{k_0})\vert.$$ Use the Hölder inequality when $p>1$ and the inequality when $p\leq 1$ and also use the facts that $x_{k-i-1}\in E_{k-i-1}\subset E_{k-i},$ $x_{k-i}\in E_{k-i}$ to obtain $$\begin{aligned} \vert u(x_k)\vert &\leq &\sum_{i=0}^{k-k_0-1}2^{m_0s-\frac{(k-i-1)ps}{Q}}\bigg(\sum_{j=-\infty}^{m_0-(k-i-1)p/Q}\Big[g_j(x_{k-i})+g_j(x_{k-i-1})\Big]^p\bigg)^{1/p}+\vert u(x_{k_0})\vert\\ &\leq &C2^{m_0s}\sum_{i=0}^{k-k_0-1}2^{-\frac{(k-i-1)ps}{Q}}2^{k-i}+\vert u(x_{k_0})\vert.\end{aligned}$$ Hence, upon taking the supremum over $x_k\in E_k\cap B_0,$ this yields $$a_k\leq Cb^{-\frac{s}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{s}{Q}}\sum_{n=k_0}^{k-1}2^{n(1-\frac{sp}{Q})}+\sup_{E_{k_0}\cap 2B_0}\vert u\vert.$$ To estimate the last term $\sup_{E_{k_0}\cap 2B_0}\vert u\vert,$ we can assume that $\operatorname*{ess\,inf}_{E_{k_0}\cap 2B_0}\vert u\vert=0,$ by the discussion in the beginning of the proof and the fact that $\mu(E_{k_0})>0.$ That means there is a sequence $y_i\in E_{k_0}$ such that $u(y_i)\rightarrow 0$ as $i\rightarrow\infty.$ Therefore, for $x\in E_{k_0}\cap 2B_0$ we have $$\label{lastterm2} \vert 
u(x)\vert=\lim_{i\rightarrow\infty}\vert u(x)-u(y_i)\vert\leq C'r_0^{s}2^{k_0}.$$ So, for $k>k_0$ we conclude that $$\label{conclusion2} a_k\leq Cb^{-\frac{s}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{s}{Q}}\sum_{n=k_0}^{k-1}2^{n(1-\frac{sp}{Q})}+C'r_0^{s}2^{k_0}.$$ For $k\leq k_0,$ we will use the estimate $a_k\leq a_{k_0}\leq C'r_0^s2^{k_0}.$\ **Case I:** $0<sp<Q.$ For every $k\in\mathbb{Z},$ we have $$a_k \leq Cb^{-\frac{s}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{s}{Q}}2^{k(1-\frac{sp}{Q})}+C'r_0^{s}2^{k_0}.$$ Applying the above estimates and the measure density condition, we get $$\begin{aligned} \int_{B_0}\vert u\vert^{p^*}\,d\mu &\leq \sum_{k=-\infty}^{\infty}a_k^{p^*}\mu(B_0\cap (E_k\setminus E_{k-1}))\\ &\leq Cb^{-\frac{sp^*}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{sp^*}{Q}}\sum_{k=-\infty}^{\infty}2^{kp}\mu(E_k\setminus E_{k-1})\\ &\qquad+C'r_0^{sp^*}2^{k_0p^*}\mu(B_0)\\ &\leq C\left(1+\frac{\mu(B_0)}{br_0^Q}\right)b^{-\frac{sp^*}{Q}}\bigg(\sum_j\int_{2B_0}g_j^p\,d\mu\bigg)^{\frac{p^*}{p}}.\\\end{aligned}$$ Using the fact that $1+\mu(B_0)/br_0^Q\leq 2\mu(B_0)/br_0^Q,$ we get $$\left(\dashint_{B_0}\vert u\vert^{p^*}\,d\mu\right)^{\frac{1}{p^*}}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{\infty}\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}.$$ Suppose now that $\mu(B_0\setminus E_{k_0})=0.$ In this case, we use the fact that $\int_{B_0}\vert u\vert^{p^*}\,d\mu=\int_{E_{k_0}}\vert u\vert^{p^*}\,d\mu$ and use inequality to obtain inequality . 
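As a small numerical sanity check (illustrative only; the parameter values below are arbitrary choices, not from the paper), the geometric sum $\sum_{n=k_0}^{k-1}2^{n(1-sp/Q)}$ appearing in the estimates for $a_k$ is indeed dominated, when $sp<Q$, by a constant multiple of its top term $2^{k(1-sp/Q)}$:

```python
# Check that, for sp < Q (so the exponent alpha = 1 - s*p/Q is positive),
#   sum_{n=k0}^{k-1} 2^(n*alpha)  <=  C * 2^(k*alpha),
# with C = 1/(1 - 2^(-alpha)) depending only on s, p, Q.
# s, p, Q and the ranges of k0, k are arbitrary illustrative values.
s, p, Q = 0.5, 2.0, 3.0          # s*p = 1 < Q = 3
alpha = 1.0 - s * p / Q          # alpha = 2/3 > 0
C = 1.0 / (1.0 - 2.0 ** (-alpha))

for k0 in (-3, 0, 2):
    for k in (k0 + 1, k0 + 5, k0 + 20):
        partial = sum(2.0 ** (n * alpha) for n in range(k0, k))
        assert partial <= C * 2.0 ** (k * alpha)
print("geometric sum dominated by its top term")
```

This is exactly why, in Case I, the sum over $n$ can be replaced by the single term $2^{k(1-\frac{sp}{Q})}$ at the cost of a constant.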
This finishes the proof in this case.\ **Case II:** $sp=Q.$ The proof in this case follows exactly in the same way as the proof of Theorem \[embedding\] with replacing by $$\label{zero2} a_{k_0}\leq C'r_0^s2^{k_0}\leq C''b^{-1/p}\left(\sum_{j=-\infty}^{\infty}\int_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}$$ and replacing by $$\label{greaterzero2} a_k\leq \tilde{C}b^{-1/p}\left(\sum_{j=-\infty}^{\infty}\int_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}(k-k_0)$$ and also using inequality as we have $q\leq p.$\ The case when $sp>Q$ also follows in a similar fashion. This completes the proof. We do not know if one can get the same result as above for $q\leq p^*,$ at least for the non-homogeneous space and when $p>Q/(Q+s).$ In the next theorem, we have relaxed the assumption on $q$ and still have been able to find the same result but with an exponent $p'$ slightly smaller than $p^*,$ and this result seems to be new even in $\mathbb{R}^n.$ Let $(X,d,\mu)$ be a metric measure space and $B_0$ be a fixed ball of radius $r_0$ with $2^{l-1}\leq r_0<2^l$ for some integer $l.$ Let us assume that the measure $\mu$ has a lower bound, that is, there exist constants $b, Q>0$ such that $\mu(B(x,r))\geq br^Q$ whenever $B(x,r)\subset 2B_0.$ Let $u\in \dot{N}^s_{p,q}(2B_0)$ and $(g_j)\in \mathbb{D}^s(u)$ where $0<p,q,s<\infty.$ Then there exist constants $C,\,C_1,\,C_2$ and $C_3$ such that\ If $0<sp<Q,$ then $u\in L^{p'}(B_0),$ $p'=\frac{Qp}{Q-(s-s')p}$ for any $0<s'<s.$ Moreover, we have the following inequality: $$\label{embed3} \inf_{c\in\mathbb{R}}\left(\dashint_{B_0}\vert u-c\vert^{p'}\,d\mu\right)^{\frac{1}{p'}}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s-s'}M,$$ where $M:=\left(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{p}}.$\ If $sp=Q,$ then $$\label{embed3b} \dashint_{B_0}\exp\bigg(C_1b^{1/Q}\frac{\vert u-u_{B_0}\vert}{M}\bigg)\,d\mu\leq C_2.$$ If $sp>Q,$ then $$\label{embed3c} \Vert u-u_{B_0}\Vert_{L^{\infty}(B_0)}\leq 
C_3\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s-s'}M.$$ First, we would like to establish the following inequality: $$\label{embed3'} \inf_{c\in\mathbb{R}}\left(\dashint_{B_0}\vert u-c\vert^{p'}\,d\mu\right)^{\frac{1}{p'}}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/q}r_0^{s-s'}\left(\dashint_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{p}}.$$ Once this is proved, one can interchange the summation and integration and use the Hölder inequality or inequality to prove .\ Note that it is enough to prove with $(\dashint_{B_0}\vert u\vert ^{p'}\,d\mu)^{1/p'}$ on the left hand side. If we have $\sum_{j=-\infty}^{l-2}2^{s'jp}g_j^p=0$ a.e., then $g_j=0$ a.e. for all $j\leq l-2,$ and hence the theorem follows trivially. Thus we may assume that $\int_{2B_0}\sum_{j\leq l-2}2^{s'jp}g_j^p\,d\mu>0$. We may also assume that $$\label{lowerbound3} \sum_{j=-\infty}^{l-2}2^{s'jp}g_j(x)^p\geq \frac{1}{2}\left(\dashint_{2B_0}\bigg(\sum_{j=-\infty}^{l-2}2^{s'jp}g_j^p\bigg)\,d\mu\right)>0$$ for all $x\in 2B_0.$ Let us define auxiliary sets $$E_k=\bigg\{x\in 2B_0:\Big(\sum_{j=-\infty}^{l-2}2^{s'jp}g_j(x)^p\Big)^{\frac{1}{p}}\leq 2^k\bigg\},\quad k\in\mathbb{Z}.$$ Clearly $E_k\subset E_{k+1}$ for all $k.$ Observe that $$\label{rhs3} \int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\approx\sum_{k=-\infty}^{\infty}2^{kp}\mu(E_k\setminus E_{k-1}).$$ Let $a_k=\sup_{B_0\cap E_k}\vert u\vert.$ Obviously, $a_k\leq a_{k+1}$ and $$\label{lhs3} \int_{B_0}\vert u\vert^{p'}\,d\mu\leq\sum_{k=-\infty}^{\infty}a_k^{p'}\mu(B_0\cap (E_k\setminus E_{k-1})).$$ By Chebyshev’s inequality, we get an upper bound for the measure of the complement of $E_k$: $$\begin{aligned} \label{Chebyschev3} \mu(2B_0\setminus E_k) \leq 2^{-kp}\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu.\end{aligned}$$ Lower bound implies that $E_k=\emptyset$ for sufficiently small $k.$ On the other hand, $\mu(E_k)\rightarrow \mu(2B_0)$ as $k\rightarrow\infty.$ Hence there is 
$\widetilde{k}_0\in\mathbb{Z}$ such that $$\label{conv3} \mu(E_{\widetilde{k}_0-1})<\frac{\mu(2B_0)}{2}\leq\mu(E_{\widetilde{k}_0}).$$ The inequality on the right hand side gives $E_{\widetilde{k}_0}\neq\emptyset$ and hence according to $$\label{first3} 2^{-\frac{1}{p}}\left(\dashint_{2B_0}\sum_{j=-\infty}^{l-2}2^{s'jp}g_j^p\,d\mu\right)^{\frac{1}{p}}\leq \Big(\sum_{j=-\infty}^{l-2}2^{s'jp}g_j(x)^p\Big)^{\frac{1}{p}}\leq 2^{\widetilde{k}_0}$$ for $x\in E_{\widetilde{k}_0}.$ At the same time the inequality on the left hand side of together with imply that $$\label{second3} \frac{\mu(2B_0)}{2}<\mu(2B_0\setminus E_{\widetilde{k}_0-1})\leq 2^{-(\widetilde{k}_0-1)p}\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu.$$ Combining the inequalities and we obtain $$\label{combine3} 2^{-(1+\frac{1}{p})}\left(\dashint_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{p}}\leq 2^{\widetilde{k}_0}\leq 2^{(1+\frac{1}{p})}\left(\dashint_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{p}}.$$ Choose the least integer $\ell\in\mathbb{Z}$ such that $$\label{ell3} 2^{\ell}>\max\bigg\{2^{1+1/p}\Big(\frac{2}{1-2^{-p/Q}}\Big)^{Q/p}, 1\bigg\}\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}$$ and set $k_0=\widetilde{k}_0+\ell.$ The reason behind such a choice of $\ell$ and $k_0$ will be understood later. Note that $\ell>0,$ by the lower bound of the measure $\mu,$ and hence yields $\mu(E_{k_0})>0.$ The inequalities above then become $$\begin{aligned} \label{final3} 2^{k_0} &\approx & (br_0^Q)^{-1/p}\left(\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{p}}\\ &=& (br_0^Q)^{-1/p}\left(\sum_{j\leq l-2} 2^{s'jp}\int_{2B_0}g_j^p\,d\mu\right)^{\frac{1}{p}}.\label{final32}\end{aligned}$$ Suppose that $\mu(B_0\setminus E_{k_0})>0$ (we will handle the other case at the end of the proof). 
For $k\geq k_0+1,$ set $$\label{radii3} t_k:=2b^{-1/Q}\mu(2B_0\setminus E_{k-1})^{1/Q}.$$ Suppose now that $k\geq k_0+1$ is such that $\mu((E_k\setminus E_{k-1})\cap B_0)>0$ (if such a $k$ does not exist, then $\mu(B_0\setminus E_{k_0})=0,$ contradicting our assumption). Then in particular $t_k>0.$ Pick a point $x_k\in (E_k\setminus E_{k-1})\cap B_0$ and assume that $B(x_k,t_k)\subset 2B_0.$ Then $$\mu(B(x_k,t_k))\geq bt_k^Q>\mu(2B_0\setminus E_{k-1})$$ and hence $B(x_k,t_k)\cap E_{k-1}\neq\emptyset.$ Thus there is $x_{k-1}\in E_{k-1}$ such that $$d(x_k,x_{k-1})<t_k\leq 2b^{-1/Q}2^{-(k-1)p/Q}\left(\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{Q}},$$ by and . Repeating this construction in a similar fashion we obtain for $k\geq k_0+1,$ a sequence of points $$\begin{gathered} x_k \in (E_k\setminus E_{k-1})\cap B_0,\\ x_{k-1} \in E_{k-1}\cap B(x_k,t_k),\\ \vdots \\ x_{k_0}\in E_{k_0}\cap B(x_{k_0+1},t_{k_0+1}), \end{gathered}$$ such that $$\label{distance3} d(x_{k-i},x_{k-(i+1)})<t_{k-i}\leq 2b^{-1/Q}2^{-(k-(i+1))p/Q}\left(\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{Q}},$$ for every $i=0,1,\ldots,k-k_0-1.$ Hence $$\begin{aligned} \label{totaldistance3} d(x_k,x_{k_0})&<&t_k+t_{k-1}+\cdots +t_{k_0+1}\\ \nonumber &\leq & 2b^{-1/Q} \left(\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{Q}}\sum_{n=k_0}^{k-1}2^{-np/Q}\\ \nonumber &\leq& 2^{-k_0p/Q}\frac{2b^{-1/Q}}{1-2^{-p/Q}}\left(\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{Q}}.\end{aligned}$$ This is all true provided $B(x_{k-i},t_{k-i})\subset 2B_0$ for $i=0,1,2,\ldots,k-k_0-1.$ That means we require that the right hand side of is $\leq r_0\leq{{\operatorname{dist}}}(B_0,X\setminus 2B_0).$ Our choice of $k_0$ guarantees this requirement. 
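The passage to the last line of the chain of radii uses only the tail of a geometric series; explicitly (a routine step made explicit here, not in the original):

```latex
% Summing the geometric tail that bounds t_k + t_{k-1} + ... + t_{k_0+1}:
\sum_{n=k_0}^{k-1}2^{-np/Q}\;\leq\;\sum_{n=k_0}^{\infty}2^{-np/Q}
\;=\;\frac{2^{-k_0p/Q}}{1-2^{-p/Q}}.
```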
Indeed, $$\begin{aligned} 2^{k_0}=2^{\widetilde{k}_0+\ell}&\geq & 2^{\ell}2^{-(1+1/p)}\bigg(\dashint_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{1/p}\\ &\geq & \left(\frac{2}{1-2^{-p/Q}}\right)^{Q/p}(br_0^Q)^{-1/p}\bigg(\int_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{1/p}.\\\end{aligned}$$ Then $t_k+t_{k-1}+\cdots +t_{k_0+1}\leq r_0\leq{{\operatorname{dist}}}(B_0,X\setminus 2B_0),$ which implies that $B(x_{k-i},t_{k-i})\subset 2B_0$ for all $i=0,1,2,\ldots,k-k_0-1.$\ Now we would like to get some upper bound for $\vert u(x_k)\vert$ for $k\geq k_0+1.$ Towards this end, we write $$\label{difference3} \vert u(x_k)\vert \leq\bigg(\sum_{i=0}^{k-k_0-1}\vert u(x_{k-i})-u(x_{k-(i+1)})\vert\bigg)+\vert u(x_{k_0})\vert$$ Let us first consider the difference $\vert u(x_{k_0+1})-u(x_{k_0})\vert.$ The inequality with $i=k-k_0-1$ gives $$\label{k_0_m_03} d(x_{k_0+1},x_{k_0})< 2^{m_0-k_0p/Q},$$ where $m_0\in\mathbb{Z}$ is such that $$\label{m_03} 2^{m_0-1}\leq 2b^{-1/Q}\left(\int_{2B_0}\Big(\sum_{j\leq l-2} 2^{s'jp}g_j^p\Big)\,d\mu\right)^{\frac{1}{Q}}<2^{m_0}.$$ Now using and , we get the following bound for the difference $$\vert u(x_{k_0+1})-u(x_{k_0})\vert\leq \sum_{j=-\infty}^{m_0-k_0p/Q}2^{js}\Big[g_j(x_{k_0+1})+g_j(x_{k_0})\Big].$$ Similarly, we use the fact that $d(x_{k-i},x_{k-(i+1)})<2^{m_0-(k-(i+1))p/Q},$ and obtain $$\vert u(x_{k-i})-u(x_{k-(i+1)})\vert\leq \sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js}\Big[g_j(x_{k-i})+g_j(x_{k-(i+1)})\Big],$$ for all $i=0,1,2,\ldots,k-k_0-1.$ So, the inequality becomes $$\vert u(x_k)\vert \leq\bigg(\sum_{i=0}^{k-k_0-1}\sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js}\Big[g_j(x_{k-i})+g_j(x_{k-i-1})\Big]\bigg)+\vert u(x_{k_0})\vert.$$ Use Hölder inequality when $p>1$ and the inequality when $p\leq 1$ and also use the facts that $x_{k-i-1}\in E_{k-i-1}\subset E_{k-i},$ $x_{k-i}\in E_{k-i}$ to obtain $$\begin{aligned} \begin{split} \vert u(x_k)\vert &\leq \sum_{i=0}^{k-k_0-1} 
2^{m_0(s-s')-\frac{(k-i-1)p(s-s')}{Q}}\bigg(\sum_{j=-\infty}^{m_0-(k-i-1)p/Q}2^{js'p}\Big[g_j(x_{k-i})+g_j(x_{k-i-1})\Big]^p\bigg)^{1/p}\\ &\qquad+\vert u(x_{k_0})\vert\\ &\leq C2^{m_0(s-s')}\sum_{i=0}^{k-k_0-1}2^{-\frac{(k-i-1)p(s-s')}{Q}}2^{k-i}+\vert u(x_{k_0})\vert. \end{split}\end{aligned}$$ Hence, upon taking the supremum over $x_k\in E_k\cap B_0,$ this yields $$a_k\leq Cb^{-\frac{s-s'}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{\frac{s-s'}{Q}}\sum_{n=k_0}^{k-1}2^{n\left(1-(s-s')\frac{p}{Q}\right)}+\sup_{E_{k_0}\cap 2B_0}\vert u\vert.$$ To estimate the last term $\sup_{E_{k_0}\cap 2B_0}\vert u\vert,$ we can assume that $\operatorname*{ess\,inf}_{E_{k_0}\cap 2B_0}\vert u\vert=0,$ by the discussion in the beginning of the proof and the fact that $\mu(E_{k_0})>0.$ That means there is a sequence $y_i\in E_{k_0}$ such that $u(y_i)\rightarrow 0$ as $i\rightarrow\infty.$ Therefore, for $x\in E_{k_0}\cap 2B_0$ we have $$\label{lastterm3} \vert u(x)\vert=\lim_{i\rightarrow\infty}\vert u(x)-u(y_i)\vert\leq C'r_0^{s-s'}2^{k_0}.$$ So, for $k>k_0$ we conclude that $$\label{conclusion3} a_k\leq Cb^{-\frac{s-s'}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{\frac{s-s'}{Q}}\sum_{n=k_0}^{k-1}2^{n\left(1-(s-s')\frac{p}{Q}\right)}+C'r_0^{s-s'}2^{k_0}.$$ For $k\leq k_0,$ we will use the estimate $a_k\leq a_{k_0}\leq C'r_0^{s-s'}2^{k_0}.$\ **Case I:** $0<sp<Q.$ Therefore, for every $k\in\mathbb{Z},$ we have $$a_k\leq Cb^{-\frac{s-s'}{Q}}2^{k\left(1-(s-s')\frac{p}{Q}\right)}\bigg(\int_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{\frac{s-s'}{Q}}+C'r_0^{s-s'}2^{k_0}.$$ Applying the preceding estimates, we get $$\begin{aligned} \int_{B_0}\vert u\vert^{p'}\,d\mu &\leq \sum_{k=-\infty}^{\infty}a_k^{p'}\mu(B_0\cap (E_k\setminus E_{k-1}))\\ &\leq Cb^{-\frac{(s-s')p'}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{\frac{(s-s')p'}{Q}}\sum_{k=-\infty}^{\infty}2^{kp}\mu(E_k\setminus E_{k-1})\\ 
&\qquad+C'r_0^{(s-s')p'}2^{k_0p'}\mu(B_0)\\ &\leq C\left(1+\frac{\mu(B_0)}{br_0^Q}\right)b^{-\frac{(s-s')p'}{Q}}\bigg(\int_{2B_0}\Big(\sum_{j\leq l-2}2^{s'jp}g_j^p\Big)\,d\mu\bigg)^{\frac{p'}{p}}.\end{aligned}$$ Using the fact that $1+\mu(B_0)/br_0^Q\leq 2\mu(B_0)/br_0^Q,$ we get inequality .\ Suppose now that $\mu(B_0\setminus E_{k_0})=0.$ In this case, we use the fact that $\int_{B_0}\vert u\vert^{p'}\,d\mu=\int_{E_{k_0}}\vert u\vert^{p'}\,d\mu$ and use inequality to obtain inequality .\ **Case II:** $sp=Q.$ Similar to the proof of Theorem \[embedding\], it is enough to prove, after using Jensen’s inequality, the inequality with $\vert u-u_{B_0}\vert$ replaced by $\vert u\vert$ in the left hand side of it. It follows from , and Hölder inequality (or the inequality ) that $$\label{zero3} a_{k_0}\leq C'r_0^s2^{k_0}\leq C''b^{-1/p}\left(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\int_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{p}}.$$ Hence from and Hölder inequality (or the inequality ) we obtain, for $k>k_0,$ $$\label{greaterzero3} a_k\leq \tilde{C}b^{-1/p}\left(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\int_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{p}}(k-k_0).$$ Again we split the integral into two parts: we estimate the integral over $B_0\cap E_{k_0}$ and $B_0\setminus E_{k_0}$ separately. For the first part, we have $$\begin{aligned} \frac{1}{\mu(B_0)}\int_{B_0\cap E_{k_0}}\exp\left(\frac{C_1b^{1/Q}\vert u\vert}{M}\right)\,d\mu &\leq & \frac{\mu(B_0\cap E_{k_0})}{\mu(B_0)}\exp\left(\frac{C_1b^{1/Q}a_{k_0}}{M}\right)\\ &\leq &\exp(C_1C''),\end{aligned}$$ where the last inequality follows from . 
The second part is estimated using inequality as follows $$\begin{aligned} & & \frac{1}{\mu(B_0)}\int_{B_0\setminus E_{k_0}}\exp\left(\frac{C_1b^{1/Q}\vert u\vert}{M}\right)\,d\mu \\ &\leq & \frac{1}{\mu(B_0)}\sum_{k=k_0+1}^{\infty}\exp\left(\frac{C_1b^{1/Q}a_{k}}{M}\right)\mu(B_0\cap(E_k\setminus E_{k-1}))\\ &\leq & \frac{1}{\mu(B_0)}\sum_{k=k_0+1}^{\infty}\exp\left(C_1\tilde{C}(k-k_0)\right)\mu(E_k\setminus E_{k-1})\\ &\leq & \frac{2^{-k_0Q}}{\mu(B_0)}\sum_{k=-\infty}^{\infty}2^{kQ}\mu(E_k\setminus E_{k-1})\leq C_3,\end{aligned}$$ where we have chosen $C_1$ so that $\exp(C_1\tilde{C})=2^Q$ and also we have made use of the inequalities , and the measure density condition .\ **Case III:** $sp>Q.$ It follows from and , for $k>k_0,$ that $$\begin{aligned} a_k &\leq & Cb^{-\frac{s}{Q}}\bigg(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\int_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\bigg)^{\frac{s}{Q}}\sum_{n=k_0}^{\infty}2^{n(1-\frac{sp}{Q})}+C'r_0^{s}2^{k_0}\\ &\leq & C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{p}}.\end{aligned}$$ For $k\leq k_0,$ we have $$a_k\leq a_{k_0}\leq C'r_0^s2^{k_0}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{p}}.$$ Therefore $$\Vert u-u_{B_0}\Vert_{L^{\infty}(B_0)}\leq 2\Vert u\Vert_{L^{\infty}(B_0)}\leq C\left(\frac{\mu(2B_0)}{br_0^Q}\right)^{1/p}r_0^{s}\left(\sum_{j=-\infty}^{l-2}2^{s'jp}\left(\dashint_{2B_0}g_j^p\,d\mu\right)^{\frac{q}{p}}\right)^{\frac{1}{p}}.$$ Measure density from embedding ============================== The next theorem shows that, if the space $X$ is $Q$-regular and geodesic, then the measure density condition of a domain $\Omega$ is a necessary condition for the embeddings of both $M^s_{p,q}(\Omega)$ and $N^s_{p,q}(\Omega).$ The proof is inspired by the proof of Theorem 6.1 of [@HIT16], where the 
measure density condition was derived from extension domains for these spaces. Let $X$ be a $Q$-regular, geodesic metric measure space and let $\Omega\subset X$ be a domain. Let $0<s<1,$ $0<p<\infty$ and $0<q\leq\infty.$\ $(i)$ When $sp<Q,$ if there exists a constant $C$ such that for all $f\in M^s_{p,q}(\Omega),$ we have $\Vert f\Vert_{L^{p^*}(\Omega)}\leq C\Vert f\Vert_{M^s_{p,q}(\Omega)},$ where $p^*=\frac{Qp}{Q-sp},$ then $\Omega$ satisfies .\ $(ii)$ When $sp=Q,$ if there exist constants $C_1, C_2$ such that for all $f\in M^s_{p,q}(\Omega)$ and for all balls $B,$ we have $$\int_{B\cap\Omega}\exp\left(C_1\frac{\vert f-f_{B(x,r)}\vert}{\Vert f\Vert_{M^s_{p,q}(\Omega)}}\right)\,d\mu\leq C_2\mu(B),$$ then $\Omega$ satisfies .\ $(iii)$ When $sp>Q,$ if there exists a constant $C_3$ such that for all $f\in M^s_{p,q}(\Omega),$ and for every $x,y\in\Omega,$ we have $\vert f(x)-f(y)\vert\leq C_3\Vert f\Vert_{M^s_{p,q}(\Omega)}d(x,y)^{s-Q/p},$ then $\Omega$ satisfies . The claims also hold with $M^s_{p,q}(\Omega)$ replaced by $N^s_{p,q}(\Omega).$ To show that the measure density condition holds, let $x\in \Omega$ and $0<r\leq 1$ and let $B=B(x,r).$ We may assume that $\Omega\setminus B(x,r)\neq\emptyset,$ otherwise the measure density condition is obviously satisfied. We split the proof into three different cases depending on the size of $sp.$\ **Case 1:** $0<sp<Q.$ By the proof of [@HKT08 Proposition 13], the geodesicity of $X$ implies that $\mu(\partial B(x,R))=0$ for every $R>0.$ Hence there exists a unique $0<\tilde{r}<r$ such that $$\mu(B(x,\tilde{r})\cap\Omega)=\frac{1}{2}\mu(B(x,r)\cap\Omega).$$ Define $u:\Omega\rightarrow [0,1]$ by $$\label{bump} u(y)= \begin{cases} 1& \text{if $y\in B(x,\tilde{r})\cap\Omega$},\\ \frac{r-d(x,y)}{r-\tilde{r}} & \text{if $y\in (B(x,r)\setminus B(x,\tilde{r}))\cap\Omega$},\\ 0& \text{if $y\in \Omega\setminus B(x,r)$}. 
\end{cases}$$ Note that $$\Vert u\Vert_{L^{p^*}(\Omega)}\geq\mu(B(x,\tilde{r})\cap\Omega)^{1/p^*}.$$ Since the function $u$ is $1/(r-\tilde{r})$-Lipschitz and $\Vert u\Vert_{\infty}\leq 1,$ by [@HIT16 Corollary 3.12] and the fact that $0<r-\tilde{r}<1,$ we have $$\label{corolarry3.12} \Vert u\Vert_{M^s_{p,q}(\Omega)}\leq C\mu(B(x,r)\cap\Omega)^{1/p}(r-\tilde{r})^{-s}.$$ Since $\Vert u\Vert_{L^{p^*}(\Omega)}\lesssim \Vert u\Vert_{M^s_{p,q}(\Omega)}$ by our assumption, we further have $$\mu(B(x,\tilde{r})\cap\Omega)^{1/p^*}\lesssim \mu(B(x,r)\cap\Omega)^{1/p}(r-\tilde{r})^{-s},$$ which yields $r-\tilde{r}\lesssim\mu(B(x,r)\cap\Omega)^{1/Q}.$ Now let us define a sequence $r_0>r_1>r_2>\cdots>0$ by induction: $$r_0=r, \qquad\text{and}\qquad r_{j+1}=\tilde{r_j}.$$ Clearly $$\label{decreasing} \mu(B(x,r_j)\cap\Omega)=2^{-j}\mu(B(x,r)\cap\Omega).$$ Therefore $r_j\rightarrow 0$ as $j\rightarrow\infty,$ and hence $$\begin{aligned} r &=&\sum_{j=0}^{\infty}(r_j-r_{j+1})\\ &\lesssim & \sum_{j=0}^{\infty}2^{-j/Q}\mu(B(x,r)\cap\Omega)^{1/Q}\\ &\lesssim & \mu(B(x,r)\cap\Omega)^{1/Q}\end{aligned}$$ as desired.\ **Case 2:** $sp=Q.$ Again for $x\in\Omega$ and $0<r\leq 1,$ we will have $0<\tilde{\tilde{r}}<\tilde{r}<r$ such that $$\label{rtilde} \mu(B(x,\tilde{\tilde{r}})\cap\Omega)=\frac{1}{2}\mu(B(x,\tilde{r})\cap\Omega)=\frac{1}{4}\mu(B(x,r)\cap\Omega).$$ Considering the function $u$ associated to $x,\tilde{r},\tilde{\tilde{r}}$ as in and using (the proof of) [@HIT16 Corollary 3.12] we have $$\int_{B(x,r)\cap\Omega}\exp\left(C_1\frac{\vert u-u_{B(x,r)}\vert(\tilde{r}-\tilde{\tilde{r}})^s}{\mu(B(x,\tilde{r})\cap\Omega)^{s/Q}}\right)\,d\mu\leq C_2r^Q.$$ Since $u=1$ on $B(x,\tilde{\tilde{r}})\cap\Omega$ and $u=0$ on $(B(x,r)\setminus B(x,\tilde{r}))\cap\Omega,$ we have that $\vert u-u_{B(x,r)}\vert\geq 1/2$ on at least one of the sets $B(x,\tilde{\tilde{r}})\cap\Omega$ and $(B(x,r)\setminus B(x,\tilde{r}))\cap\Omega.$ Since the measures of these two sets are comparable to the measure of 
$B(x,\tilde{r})\cap\Omega,$ we have $$\mu(B(x,\tilde{r})\cap\Omega)\exp\big(C_1(\tilde{r}-\tilde{\tilde{r}})^s\mu(B(x,\tilde{r})\cap\Omega)^{-s/Q}\big)\leq C_2r^Q,$$ which can be written in the form $$\label{maininequality} \tilde{r}-\tilde{\tilde{r}}\leq C_1\mu(B(x,\tilde{r})\cap\Omega)^{1/Q}\left[\log\left(\frac{C_2r^Q}{\mu(B(x,\tilde{r})\cap\Omega)}\right)\right]^{1/s}.$$ Now let us state a lemma from [@HKT08b], which will help us to relax the range of $0<r\leq 1$ to $0<r\leq 10\tilde{r}.$ If the measure density condition holds for all $x\in\Omega$ and all $r\leq 1$ such that $r\leq 10\tilde{r},$ where $\tilde{r}$ is defined by , then holds for all $x\in\Omega$ and all $r\leq 1.$ Now let us define a sequence by setting $$r_0=r, \qquad \text{and}\qquad r_{j+1}=\tilde{r_j}.$$ Inequality together with the fact that $$\mu(B(x,r_{j+1})\cap\Omega)=2^{-j}\mu(B(x,\tilde{r})\cap\Omega)$$ gives $$\begin{aligned} \tilde{r}\leq\sum_{j=0}^{\infty}(r_j-r_{j+1}) &\leq & \sum_{j=0}^{\infty}C_1\mu(B(x,r_j)\cap\Omega)^{1/Q}\left[\log\left(\frac{C_2r^Q}{\mu(B(x,r_j)\cap\Omega)}\right)\right]^{1/s}\\ &\leq & C_1\mu(B(x,\tilde{r})\cap\Omega)^{1/Q}\sum_{j\in\mathbb{N}}2^{-j/Q}\left[\log\left(\frac{C_22^jr^Q}{\mu(B(x,\tilde{r})\cap\Omega)}\right)\right]^{1/s}.\end{aligned}$$ The sum on the right-hand side is bounded from above (up to a constant) by $$\sum_{j=0}^{\infty}2^{-j/Q}j^{1/s}(\log 2)^{1/s}+\left(\sum_{j=0}^{\infty}2^{-j/Q}\right)\left[\log\left(\frac{C_2r^Q}{\mu(B(x,\tilde{r})\cap\Omega)}\right)\right]^{1/s}.$$ The two sums in the above expression converge to some constants depending on $Q$ and $s$ only and hence we obtain $$\label{sumoftwo} \tilde{r}\leq C\mu(B(x,\tilde{r})\cap\Omega)^{1/Q}\left[1+\left[\log\left(\frac{C_2r^Q}{\mu(B(x,\tilde{r})\cap\Omega)}\right)\right]^{1/s}\right].$$ Let us write $\mu(B(x,\tilde{r})\cap\Omega)=\epsilon\tilde{r}^Q.$ Since $$\mu(B(x,r)\cap\Omega)=2\mu(B(x,\tilde{r})\cap\Omega)=2\epsilon\tilde{r}^Q\geq 2\cdot 10^{-Q}\epsilon r^Q,$$ it suffices to 
prove that $\epsilon$ is bounded from below by some positive constant. Now, from inequality , we have $$C\epsilon^{1/Q}(1+\log(C_210^Q\epsilon^{-1}))^{1/s}\geq 1.$$ The expression on the left-hand side converges to $0$ as $\epsilon\rightarrow 0,$ and hence $\epsilon$ must be bounded from below by a positive constant.\ **Case 3:** $sp>Q.$ For $x\in\Omega$ and $r\in (0,1],$ take $\tilde{r}\in (0,r/4)$ and for such $x,r,\tilde{r}$ set $u$ as in . Then for all $y,z\in \Omega,$ by our assumption together with , we have $$\vert u(y)-u(z)\vert\leq C\Vert u\Vert_{M^s_{p,q}(\Omega)}d(y,z)^{s-Q/p}\lesssim \frac{\mu(B(x,r)\cap\Omega)^{1/p}}{r^s}d(y,z)^{s-Q/p}.$$ In particular, let $y\in B(x,\tilde{r})\cap\Omega$ and $z\in (B(x,r+r/2)\cap\Omega)\setminus B(x,r).$ Then $d(y,x)\leq r/4,$ $r\leq d(z,x)\leq 3r/2$ and hence $r/2\leq d(y,z)\leq 2r.$ Since $u(y)=1$ and $u(z)=0,$ this gives $1\lesssim \mu(B(x,r)\cap\Omega)^{1/p}r^{-Q/p},$ and therefore $\mu(B(x,r)\cap\Omega)\gtrsim r^Q.$ This ends the proof of the theorem. Note that in the previous theorem, we have restricted $s$ to be strictly less than one. For $s=1,$ we refer to a recent result of Górka for the Hajłasz-Sobolev space [@Gor17]. [^1]: I would like to thank Professor Pekka Koskela for introducing me to the problem and for his fruitful suggestions. This work was supported by DST-SERB (Grant no. PDF/2016/000328).
--- abstract: 'Finite volume corrections to higher moments are important observable quantities. They make it possible to differentiate between different statistical ensembles even in the thermodynamic limit. It is shown that this property is a universal one. The classical grand canonical distribution is compared to the canonical distribution in the rigorous procedure of approaching the thermodynamic limit.' author: - Ludwik Turko date: 'February 23, 2007' title: 'FLUCTUATIONS, CORRELATIONS AND FINITE VOLUME EFFECTS IN HEAVY ION COLLISION' --- Introduction ============ Fluctuations and correlations measured in heavy ion collision processes give better insight into the dynamical and kinematical properties of the dense hadronic medium created in ultrarelativistic heavy ion collisions. Particle production yields are astonishingly well reproduced by thermal models, based on the assumption of a noninteracting gas of hadronic resonances [@brs]. Systems under consideration are in fact so close to the thermodynamic limit that finite volume effects can be neglected — at least when production yields are considered. The aim of the paper is to show that finite volume effects become more and more important when higher moments, *e.g.* correlations and fluctuations, are considered. The basic physical characterization of a system described by means of the thermal model is given by the underlying probability densities that given physical observables of the system take specified values. The only way to reproduce those probability distributions is by means of higher and higher probability moments. Those moments are in fact the only quantities which are phenomenologically available and can be used for the verification of theoretical predictions. Finite volume effects are also important for lattice QCD calculations. Particle yields in heavy ion collisions are first moments, so they lead to rather crude comparisons with the model. 
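To make the point concrete, here is a small illustrative computation (not from the paper; the distributions, the mean value and the truncation are arbitrary choices): two multiplicity distributions with identical mean, i.e. identical yields, can have very different second moments, so yields alone cannot distinguish them.

```python
import math

# Toy comparison: a Poisson and a geometric multiplicity distribution
# with the same mean (hence the same "yield") but different fluctuations.
mean = 4.0
N_MAX = 100  # truncation; the tails beyond this are negligible here

# Poisson pmf with mean 4
poisson = [mean**n * math.exp(-mean) / math.factorial(n) for n in range(N_MAX)]

# geometric pmf on n = 0, 1, 2, ... with the same mean:
# P(n) = (1 - q) q^n with q = mean / (1 + mean)
q = mean / (1.0 + mean)
geometric = [(1.0 - q) * q**n for n in range(N_MAX)]

def moment(pmf, k):
    return sum(p * n**k for n, p in enumerate(pmf))

# identical first moments ("yields")...
assert abs(moment(poisson, 1) - mean) < 1e-6
assert abs(moment(geometric, 1) - mean) < 1e-6

# ...but very different variances: mean versus mean*(1 + mean)
var_p = moment(poisson, 2) - moment(poisson, 1) ** 2
var_g = moment(geometric, 2) - moment(geometric, 1) ** 2
assert abs(var_p - mean) < 1e-6
assert abs(var_g - mean * (1.0 + mean)) < 1e-4
```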
Fluctuations and correlations are second moments so they allow for a better understanding of physical processes in thermal equilibrium. A preliminary analysis of the increasing volume effects was given in [@crt1; @crt2]. The influence of $\mathcal{O}(1/V)$ terms has been rigorously shown for a new class of physical observables — semi-intensive quantities [@crt2]. Those results also completely explained the ambiguities noted in [@begun], related to the “spurious non-equivalence” of different statistical ensembles used in the description of heavy ion collision processes. This paper is devoted to a further analysis of $\mathcal{O}(1/V)$ terms. It is shown that those terms are not specific to systems with subsidiary internal symmetries but appear also in the simplest “classical” problems of statistical physics. Choice of variables =================== In the thermodynamic limit the relevant probability distributions are those related to densities. These distributions are expressed by moments calculated for densities — not for particles. In practice, however, we measure particles, not densities, as we do not know the related volumes. Fortunately, volumes can be eliminated by taking appropriate ratios. Let us consider *e.g.* the density variance $\Delta n^2$. This can be written as $$\Delta n^2=\langle n^2\rangle - \langle n\rangle^2 = \frac{\langle N^2\rangle - \langle N\rangle^2}{V^2}\,.$$ By taking the relative variance $$\frac{\Delta n^2}{\langle n\rangle^2}=\frac{\langle N^2\rangle - \langle N\rangle^2}{\langle N\rangle^2}\,,$$ the volume dependence vanishes. Semi-intensive variables ------------------------ Special care should be taken when calculating ratios of particle moments. Although moments are extensive variables, their ratios can be finite in the thermodynamic limit. These ratios are examples of semi-intensive variables. They are finite in the thermodynamic limit but their limits depend on the finite volume terms in the density probability distributions. 
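The cancellation of the volume in the relative variance can be checked directly on an exact distribution. The sketch below is illustrative: the Poisson choice anticipates the grand canonical case treated later, and the numerical values are arbitrary. It verifies that the density-based and particle-number-based ratios coincide for any $V$:

```python
import math

# Verify numerically that
#   (<n^2> - <n>^2) / <n>^2  ==  (<N^2> - <N>^2) / <N>^2
# for n = N/V: the explicit volume factors cancel in the ratio.
# The multiplicity distribution is Poisson with <N> = V <n>; the pmf is
# evaluated in log-space to avoid overflow at large <N>.
def poisson_moments(lam, kmax=400):
    pmf = [math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
           for k in range(kmax)]
    m1 = sum(p * k for k, p in enumerate(pmf))
    m2 = sum(p * k * k for k, p in enumerate(pmf))
    return m1, m2

mean_density = 2.0                               # <n>, arbitrary test value
for V in (10.0, 50.0):
    N1, N2 = poisson_moments(V * mean_density)   # <N>, <N^2>
    n1, n2 = N1 / V, N2 / V**2                   # <n^k> = <N^k> / V^k
    rel_density = (n2 - n1**2) / n1**2
    rel_number = (N2 - N1**2) / N1**2
    assert abs(rel_density - rel_number) < 1e-12
```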
One can say that semi-intensive variables “keep a memory” of how the thermodynamic limit is approached. Let us consider as an example the scaled particle variance $$\frac{\langle N^2\rangle - \langle N\rangle^2}{\langle N\rangle}= V\frac{\langle n^2\rangle - \langle n\rangle^2}{\langle n\rangle}\,.$$ The term $$\frac{\langle n^2\rangle - \langle n\rangle^2}{\langle n\rangle}$$ tends to zero in the thermodynamic limit as $\mathcal{O}(V^{-1})$. So the behavior of the scaled particle variance depends on the $\mathcal{O}(V^{-1})$ term in the scaled density variance. A more detailed analysis of semi-intensive variables is given in [@crt2]. To clarify this approach let us consider the well known Poisson distribution, now taken in the thermodynamic limit. Grand canonical and canonical ensembles ======================================= Poisson distribution in the thermodynamic limit ----------------------------------------------- Let us consider the grand canonical ensemble of a noninteracting gas. The corresponding statistical operator is $$\label{stat operator GC-P} \hat{D}=\frac{\,e^{-\beta\hat H+\gamma\hat N}}{\,\text{Tr}{\,e^{-\beta\hat H+\gamma\hat N}}}\,.$$ This leads to the partition function $$\label{GC-P part fn} \mathcal{Z}(V,T,\gamma)=\,e^{z\,e^\gamma}\,,$$ where $z$ is the one-particle partition function $$z(T,V)=\frac{V}{(2\pi)^3}\int d^3p\,\,e^{-\beta E(p)} \equiv V z_0(T)\,.$$ The parameter $\gamma$ ($=\beta\mu$) is adjusted so as to provide the given value of the average particle number $\langle N\rangle=V\langle n\rangle$. This means that $$\label{particle factor} \,e^{\gamma}= \frac{\langle n\rangle}{z_0}\,.$$ Particle moments can be written as $$\label{particle moments} \langle N^k\rangle= \frac{1}{\mathcal{Z}}\frac{\partial^k\mathcal{Z}} {\partial\gamma^k}\,.$$ In the final formulae the parameter $\gamma$ is taken as a function $\gamma(\langle n\rangle,z_0)$ determined by the relation above. 
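The moment formula can be checked numerically by differentiating $\mathcal{Z}$ with finite differences. The sketch below is illustrative; the values of $z_0$, $V$ and $\gamma$ are arbitrary choices. It confirms $\langle N\rangle=z\,e^\gamma$ and the Poisson property $\langle N^2\rangle-\langle N\rangle^2=\langle N\rangle$:

```python
import math

# Numerical check of the moment formula <N^k> = Z^{-1} d^k Z / d gamma^k
# for Z(gamma) = exp(z e^gamma): central finite differences stand in for
# the derivatives.  Expected: <N> = z e^gamma and variance = <N> (Poisson).
z0, V, gamma = 0.3, 10.0, 0.5
z = V * z0                       # one-particle partition function z = V z0

def Z(g):
    return math.exp(z * math.exp(g))

h = 1e-4
N1 = (Z(gamma + h) - Z(gamma - h)) / (2 * h) / Z(gamma)              # <N>
N2 = (Z(gamma + h) - 2 * Z(gamma) + Z(gamma - h)) / h**2 / Z(gamma)  # <N^2>

lam = z * math.exp(gamma)        # analytic mean, z e^gamma
assert abs(N1 - lam) / lam < 1e-6
assert abs((N2 - N1**2) - N1) / N1 < 1e-4   # Poisson: variance = mean
```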
The resulting probability distribution to obtain $N$ particles under the condition that the average number of particles is $\langle N\rangle$ is equal to the Poisson distribution $$P_{\langle N\rangle}(N)=\frac{{\langle N\rangle}^N}{N!}\,e^{-\langle N\rangle}\,.$$ We introduce the corresponding probability distribution $\mathcal{P}$ for the particle number density $n=N/V$ $$\label{probab dens} \mathcal{P}_{\langle n\rangle}(n;V)= V P_{V\langle n\rangle}(V n)=V\frac{(V\langle n\rangle)^{V n}}{\Gamma(V n+1)} \,e^{-V\langle n\rangle}\,.$$ For large $V n$ we use the asymptotic form of the Gamma function $$\Gamma(V n+1)\sim\sqrt{2\pi}(V n)^{V n+1/2}\,e^{-V n}\left\{1+\frac{1}{12 V n}+ \mathcal{O}(V^{-2})\right\}\,.$$ This gives $$\label{prob dens as1} \mathcal{P}_{\langle n\rangle}( n;V)\sim V^{1/2}\frac{1}{\sqrt{2\pi n}} \left(\frac{\langle n\rangle}{ n}\right)^{V n} \,e^{V( n-\langle n\rangle)}\left\{1-\frac{1}{12 V n}+\mathcal{O}(V^{-2})\right\}\,.$$ This expression is singular in the $V\to\infty$ limit. To estimate the large volume behavior of the probability distribution one should consider its limit in the sense of generalized functions. So we are going to calculate the expression $$\langle G\rangle_V=\int dn\, G( n)\mathcal{P}_{\langle n\rangle}(n;V)\,,$$ where $\mathcal{P}_{\langle n\rangle}(n;V)$ is replaced by its asymptotic form above. In the next to leading order in $1/V$ one should calculate $$\label{Poiss t lim} V^{1/2}\frac{1}{\sqrt{2\pi}}\int d n\frac{G( n)}{ n^{1/2}}\,e^{V S( n)} - V^{-1/2}\frac{1}{12\sqrt{2\pi}}\int d n\frac{G( n)}{ n^{3/2}}\,e^{V S( n)}\,,$$ where $$S( n)= n\ln\langle n\rangle - n\ln n + n - \langle n\rangle\,.$$ An asymptotic expansion of integrals of this form, $$F[\lambda]=\int_I f(x)\,e^{\lambda S(x)}\,dx\,,$$ is given by the classical Watson-Laplace theorem. Let $I=[a,b]$ be a finite interval such that 1. $\max\limits_{x\in I} S(x)$ is attained at a single point $x=x_0$. 2. $f(x),S(x)\in C(I)$. 3. $f(x), S(x)\in C^\infty$ in the vicinity of $x_0$, and $S^{''}(x_0)\neq 0$. 
Then, for $\lambda\to\infty,\ \lambda\in S_\epsilon$, there is an asymptotic expansion \[laplace\] $$\begin{aligned} F[\lambda]&\thicksim &\,e^{\lambda S(x_0)}\sum\limits_{k=0}^\infty c_k\lambda^{-k-1/2}\,, \label{laplace main}\\ c_k &=&\frac{\Gamma(k+1/2)}{(2k)!}\left(\frac{d}{dx}\right)^{2k} \left.\left[f(x)\left(\frac{S(x_0)-S(x)}{(x-x_0)^2}\right)^{-k-1/2}\right]\right\vert_{x=x_0}\,. \label{laplace coeff}\end{aligned}$$ Here $F[\lambda]=\int_a^b f(x)\,e^{\lambda S(x)}\,dx$, and $S_\epsilon$ is the sector $|\arg z|\leqslant\frac{\pi}{2}-\epsilon<\frac{\pi}{2}$ in the complex $z$-plane. To obtain the $\mathcal{O}(1/V)$ formula, the first term in Eq.  should be calculated up to the next-to-leading order in $1/V$. For the second term it is enough to perform calculations in the leading order only. The first term gives the contribution $$\label{first} V^{1/2}\frac{1}{\sqrt{2\pi}}\int d n\frac{G( n)}{ n^{1/2}}\,e^{V S( n)} = G(\langle n\rangle)+\frac{1}{12\langle n\rangle V} G(\langle n\rangle)+\frac{\langle n\rangle}{2 V}G^{''}(\langle n\rangle)\,,$$ and the second term gives $$\label{second} V^{-1/2}\frac{1}{12\sqrt{2\pi}}\int d n\frac{G( n)}{ n^{3/2}}\,e^{V S( n)} = \frac{1}{12\langle n\rangle V} G(\langle n\rangle)\,.$$ So we eventually have $$\label{t lim 2} \langle G\rangle_V = G(\langle n\rangle) + \frac{\langle n\rangle}{2V}G^{''}(\langle n\rangle) + \mathcal{O}(V^{-2})\,,$$ for any function $G$. This gives us the expression for the density distribution in the large volume limit $$\label{poison t lim 2} \mathcal{P}_{\langle n\rangle}(n;V)\sim\delta( n-\langle n\rangle)+\frac{\langle n\rangle}{2 V}\,\delta^{''}( n-\langle n\rangle)+\mathcal{O}(V^{-2})\,.$$ We are now able to obtain arbitrary density moments up to $\mathcal{O}(V^{-2})$ terms: $$\label{moments} \langle n^k \rangle_V = \int dn\, n^k \mathcal{P}_{\langle n\rangle}(n;V) = \langle n\rangle^k +\frac{k(k-1)}{2V}\langle n\rangle^{k-1}+\mathcal{O}(V^{-2})\,.$$ We have for the second moment (intensive variable!)
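The $\mathcal{O}(1/V)$ correction can be verified against the exact Poisson average. The sketch below (with arbitrary parameter choices) averages the test function $G(n)=n^3$ over the Poisson distribution with mean $V\langle n\rangle$ and compares the result with $G(\langle n\rangle)+\frac{\langle n\rangle}{2V}G''(\langle n\rangle)$:

```python
import math

def poisson_average(G, nbar, V, nmax):
    """<G(N/V)> over a Poisson distribution with mean V*nbar."""
    lam = V*nbar
    total = 0.0
    for N in range(nmax):
        log_p = -lam + N*math.log(lam) - math.lgamma(N + 1)
        total += G(N/V)*math.exp(log_p)
    return total

nbar, V = 2.0, 400.0
exact = poisson_average(lambda n: n**3, nbar, V, 2000)
# G(<n>) + <n>/(2V) G''(<n>) with G(n) = n^3, so G''(n) = 6n
approx = nbar**3 + (nbar/(2.0*V))*6.0*nbar
```

The residual is of order $\langle n\rangle/V^2$, as expected from the neglected $\mathcal{O}(V^{-2})$ term, while dropping the $1/V$ correction altogether leaves a visibly larger error.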
$$\langle n^2 \rangle_V = \langle n\rangle^2 + \frac{\langle n\rangle}{V}+\mathcal{O}(V^{-2})\,.$$ This means $$\label{density limit} \Delta n^2=\frac{\langle n\rangle}{V}\to 0\,,$$ as expected in the thermodynamic limit. The particle number and its density are fixed in the canonical ensemble, so the corresponding variances are always equal to zero. The result can be seen as an example of the equivalence of the canonical and grand canonical distribution in the thermodynamic limit. This equivalence is clearly visible from Eq. , where the delta function in the first term can be considered as the particle number density distribution in the canonical ensemble. A more involved situation appears for particle number moments (extensive variable!). Eq.  translated to the particle number gives $$\label{particle moments 2} \langle N^k\rangle = V^k\langle n\rangle^k + V^{k-1}\frac{k(k-1)}{2}\langle n\rangle^{k-1}+\mathcal{O}(V^{k-2})\,.$$ One gets for the scaled variance (semi-intensive variable!) $$\label{scaled variance} \frac{\Delta N^2}{\langle N\rangle}=1\,,$$ which should be compared with the zero obtained for the canonical distribution. The mechanism for such a seemingly unexpected behavior is quite obvious. The grand canonical and the canonical density probability distributions tend to the same thermodynamic limit. They are different, however, for any finite volume. Semi-intensive variables depend on the coefficients of those finite volume terms, so they are different also in the thermodynamic limit.

Energy distribution
-------------------

It is interesting to perform a similar calculation for the energy distribution in both ensembles.
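This contrast between the intensive density variance and the semi-intensive scaled variance is easy to reproduce numerically (an illustrative sketch with arbitrary $V$ and $\langle n\rangle$):

```python
import math

V, nbar = 1000.0, 1.0
lam = V*nbar    # <N>, mean of the grand canonical (Poisson) distribution

m1 = m2 = 0.0
for N in range(3000):
    p = math.exp(-lam + N*math.log(lam) - math.lgamma(N + 1))
    m1 += N*p
    m2 += N*N*p

scaled_variance = (m2 - m1*m1)/m1       # semi-intensive: equals 1 for any V
density_variance = (m2 - m1*m1)/V**2    # intensive: <n>/V, vanishes as V grows
```

The scaled variance stays pinned at unity no matter how large $V$ becomes, while the density variance decays as $\langle n\rangle/V$.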
Energy moments and an average energy density can be written as $$\label{energy moments} \langle E^k\rangle = (-1)^k\frac{1}{\mathcal{Z}}\frac{\partial^k\mathcal{Z}} {\partial\beta^k}\,;\qquad \langle \epsilon\rangle=-\frac{d z_0}{d\beta}\,e^\gamma\,.$$ One gets from Eq $$\label{en moments GC-P} \langle E^k\rangle= V^k\langle\epsilon\rangle^k+V^{k-1}\frac{k(k-1)}{2}\langle\epsilon\rangle^{k-2} \frac{\langle n\rangle}{z_0}\frac{d^2 z_0}{d\beta^2}+\mathcal{O}(V^{k-2})\,.$$ The grand canonical energy density distribution follows $$\label{energ probab} \mathbf{P}(\epsilon|\langle n\rangle,\langle\epsilon\rangle)= \delta\left(\epsilon-\langle\epsilon\rangle\right)+ \frac{\langle n\rangle}{2 V}\, \mathcal{R}^{GC}\left(\frac{\langle\epsilon\rangle}{\langle n\rangle}\right) \delta^{''}(\epsilon -\langle \epsilon\rangle) +\mathcal{O}(V^{-2})\,.$$ $\mathcal{R}^{GC}$ is given here as $$\mathcal{R}^{GC}\left(\frac{\langle\epsilon\rangle}{\langle n\rangle}\right)= \left.\frac{1}{z_0}\frac{d^2 z_0}{d\beta^2}\right|_{\beta=\beta(\langle\epsilon\rangle/\langle n\rangle)}\,.$$ For the canonical distribution a corresponding statistical operator is $$\label{stat operator C-P} \hat{D}=\frac{\,e^{-\beta\hat H}}{\,\text{Tr}{\,e^{-\beta\hat H}}}$$ This leads to the partition function $$\label{C-P part fn} \mathcal{Z}(V,T)=\frac{z^N}{N!}=\frac{\,e^{Vn\log z}}{N!}\,.$$ Internal energy moments are given by Eq . 
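As a concrete illustration (not part of the original text), take a nonrelativistic gas, for which $z_0(\beta)\propto\beta^{-3/2}$. Using $e^\gamma=\langle n\rangle/z_0$, the average energy density $\langle\epsilon\rangle=-\frac{dz_0}{d\beta}\,e^\gamma$ reduces to the classical equipartition value $\frac{3}{2}\langle n\rangle/\beta$, which a finite-difference evaluation confirms:

```python
def z0(beta):
    # nonrelativistic one-particle partition function, z0 ~ beta**(-3/2);
    # the constant prefactor cancels between dz0/dbeta and e^gamma = <n>/z0
    return beta**-1.5

beta, nbar, h = 0.7, 2.0, 1e-5
dz0 = (z0(beta + h) - z0(beta - h))/(2.0*h)   # central-difference dz0/dbeta
eps = -dz0*(nbar/z0(beta))                    # <epsilon> = -(dz0/dbeta) e^gamma
```

Up to finite-difference error, `eps` equals `1.5*nbar/beta`, i.e. the familiar $\frac{3}{2}nT$.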
In particular $$\label{av energy C-P} \langle \epsilon\rangle=-\frac{n}{z_0}\frac{d z_0}{d\beta}\,.$$ For the energy moments one gets now $$\label{en moments C-P} \langle E^k\rangle=V^k\langle \epsilon\rangle^k + V^{k-1}\frac{k(k-1)}{2}\langle \epsilon\rangle^{k-2}n\frac{\partial}{\partial\beta}\left(\frac{1}{z_0} \frac{\partial z_0}{\partial\beta}\right)+\mathcal{O}(V^{k-2})\,.$$ The corresponding probability distribution is $$\label{energ probab C-P} \mathbf{P}(\epsilon|n,\langle\epsilon\rangle)= \delta\left(\epsilon-\langle\epsilon\rangle\right)+ \frac{n}{2 V}\, \mathcal{R}^{C}\left(\frac{\langle\epsilon\rangle}{n}\right) \delta^{''}(\epsilon -\langle \epsilon\rangle) +\mathcal{O}(V^{-2})\,,$$ where $\mathcal{R}^{C}$ is given as $$\mathcal{R}^{C}\left(\frac{\langle\epsilon\rangle}{n}\right)= \left.\frac{\partial}{\partial\beta}\left(\frac{1}{z_0} \frac{\partial z_0}{\partial\beta}\right) \right|_{\beta=\beta(\langle\epsilon\rangle/n)}\,.$$

[0]{}

For a review see, *e.g.*, P. Braun-Munzinger, K. Redlich and J. Stachel: *Quark Gluon Plasma 3*, eds. R. C. Hwa and X. N. Wang (World Scientific, Singapore 2004) 491-599; A. Andronic and P. Braun-Munzinger: Lect. Notes Phys. **652**, 35 (2004)

J. Cleymans, K. Redlich and L. Turko: Phys. Rev. C **71**, 047902 (2005)

J. Cleymans, K. Redlich and L. Turko: J. Phys. G **31**, 1421 (2005)

V. V. Begun, M. Gazdzicki, M. I. Gorenstein and O. S. Zozulya: Phys. Rev. C **70**, 034901 (2004); V. V. Begun, M. I. Gorenstein, A. P. Kostyuk and O. S. Zozulya: Phys. Rev. C **71**, 054904 (2005); V. V. Begun, M. I. Gorenstein and O. S. Zozulya: Phys. Rev. C **72**, 014902 (2005); A. Keränen, F. Becattini, V. V. Begun, M. I. Gorenstein and O. S. Zozulya: J. Phys. G **31**, S1095 (2005)
--- abstract: 'During the lifetime of sun-like or low mass stars a significant amount of angular momentum is removed through magnetised stellar winds. This process is often assumed to be governed by the dipolar component of the magnetic field. However, observed magnetic fields can host strong quadrupolar and/or octupolar components, which may influence the resulting spin-down torque on the star. In Paper I, we used the MHD code PLUTO to compute steady state solutions for stellar winds containing a mixture of dipole and quadrupole geometries. We showed the combined winds to be more complex than a simple sum of winds with these individual components. This work follows the same method as Paper I, including the octupole geometry which increases the field complexity but also, more fundamentally, looks for the first time at combining the same symmetry family of fields, with the field polarity of the dipole and octupole geometries reversing over the equator (unlike the symmetric quadrupole). We show, as in Paper I, that the lowest order component typically dominates the spin down torque. Specifically, the dipole component is the most significant in governing the spin down torque for mixed geometries and under most conditions for real stars. We present a general torque formulation that includes the effects of complex, mixed fields, which predicts the torque for all the simulations to within $20\%$ precision, and the majority to within $\approx5\%$. This can be used as an input for rotational evolution calculations in cases where the individual magnetic components are known.' author: - 'Adam J. Finley\* & Sean P. Matt' bibliography: - 'Paper2.bib' title: | The Effect of Combined Magnetic Geometries on Thermally Driven Winds II:\ Dipolar, Quadrupolar and Octupolar Topologies --- Introduction ============ Cool stars are observed to host global magnetic fields which are embedded within their outer convection zones [@reiners2012observations]. 
Stellar magnetism is driven by an internal dynamo which is controlled by the convection and stellar rotation rate, the exact physics of which is still not fully understood (see review by [@brun2017magnetism]). As observed for the Sun, plasma escapes the stellar surface, interacting with this magnetic field and forming a magnetised stellar wind that permeates the environment surrounding the star [@cranmer2017origins]. Young main sequence stars show a large spread in rotation rates for a given mass. As a given star ages on the main sequence, its stellar wind removes angular momentum, slowing the rotation of the star [@schatzman1962theory; @weber1967angular; @mestel1968magnetic]. This in turn reduces the strength of the magnetic dynamo process, feeding back into the strength of the applied stellar wind torque. This relationship leads to a convergence of the spin rates towards a tight mass-rotation relationship at late ages, as stars with faster rotation incur larger spin down torques and vice versa for slow rotators. This is observed to produce a simple relation between rotation period and stellar age [$\Omega_*\propto t^{-0.5},$ @skumanich1972time], which is approximately followed, on average [@soderblom1983rotational], over long timescales. With the growing number of observed rotation periods [@irwin2009ages; @agueros2011factory; @meibom2011color; @mcquillan2013measuring; @bouvier2014angular; @stauffer2016rotation; @2017ApJ...835...16D], an increased effort has been channelled into correctly modelling the spin down process [e.g. @reiners2012radius; @gallet2013improved; @van2013fast; @brown2014metastable; @matt2015mass; @gallet2015improved; @amard2016rotating; @blackman2016minimalist; @see2017open], as it is able to test our understanding of basic stellar physics and also date observed stellar populations.
The process of generating stellar ages from rotation is referred to as Gyrochronology, whereby a cluster’s age can be estimated from the distribution of observed rotation periods [@barnes2003rotational; @meibom2009stellar; @barnes2010simple; @delorme2011stellar; @van2013fast]. This requires an accurate prescription of the spin down torques experienced by stars due to their stellar wind, along with their internal structure and properties of the stellar dynamo. Based on results from magnetohydrodynamic (MHD) simulations, parametrised relations for the stellar wind torque are formulated using the stellar magnetic field strength, mass loss rate and basic stellar parameters ([@mestel1984angular]; [@kawaler1988angular]; [@matt2008accretion]; [@matt2012magnetic]; [@ud2009dynamical]; [@pinto2011coupling]; [@reville2015effect]). The present work focusses on improving the modelled torque on these stars due to their magnetised stellar winds, by including the effects of combined magnetic geometries. Magnetic field detections from stars other than the Sun were first reported over 30 years ago via Zeeman broadening observations [@robinson1980observations; @marcy1984observations; @gray1984measurements], a technique which has since been used on a multitude of stars [e.g. @saar1990magnetic; @johns2000measurements]. This technique, however, only allows for an average line of sight estimate of the unsigned magnetic flux and provides no information about the geometry of the stellar magnetic field (see review by [@reiners2012observations]). More recently, the use of Zeeman Doppler Imaging (ZDI), a tomographic technique capable of providing information about the photospheric magnetic field of a given star, enables the observed field to be broken down into individual spherical harmonic contributions [e.g.
@hussain2002coronal; @donati2006large; @donati2008magnetic; @morin2008stable; @morin2008large; @petit2008toroidal; @fares2009magnetic; @morgenthaler2011direct; @vidotto2014stellar; @jeffers2014e; @see2015energy; @saikia2016solar; @see2016connection; @folsom2016evolution; @hebrard2016modelling; @see2016studying; @kochukhov2017surface]. This allows the 3D magnetic geometry to be recovered, typically using a combination of field extrapolation and MHD modelling [e.g. @vidotto2011understanding; @cohen2011dynamics; @garraffo2016space; @reville2016age; @alvarado2016simulating; @nicholson2016temporal; @do2016magnetic]. Pre-main sequence stars, observed with ZDI, show a variety of multipolar components, typically dependent on the internal structure of the host star [@gregory2012can; @hussain2013role]. Many of these objects show an overall dipolar geometry with an accompanying octupole component [e.g. @donati2007magnetic; @gregory2012can]. The addition of dipole and octupole fields has been explored analytically for these stars, and is shown to impact the disk truncation radius along with the topology and field strength of accretion funnels [@gregory2011analytic; @gregory2016multipolar]. For main sequence stellar winds, the behaviour of combined magnetic geometries has yet to be systematically explored. Our closest star, the Sun, hosts a significant quadrupolar contribution during the solar activity cycle maximum, which dominates the large scale magnetic field geometry along with a small dipole component [@derosa2012solar; @brun2013rotation]. The impact of these mixed geometry fields on the spin down torque generated from magnetised stellar winds remains uncertain. It is known that the magnetic field stored in the lowest order geometries, e.g. dipole, quadrupole & octupole, has the slowest radial decay and therefore governs the strength of the magnetic field at the Alfvén surface (and thus its size and shape).
With the cylindrical extent of the Alfvén surface being directly related to the efficiency of the magnetic braking mechanism, it is this global field strength and geometry that is required to compute accurate braking torques in MHD simulations [@reville2015effect; @reville2016age]. However, the effect of the higher order components on the acceleration of the wind close in to the star may be non-negligible [@cranmer2005generation; @cohen2009effect]. Additionally, the small scale surface features described by these higher order geometries (e.g. star spots and active regions) will play a vital role in modulating the chromospheric activity [e.g. @testa2004density; @aschwanden2006physics; @gudel2007sun; @garraffo2013effect], which is often assumed to be decoupled from the open field regions producing the stellar wind. Models such as AWSoM [@van2014alfven] include this energy dissipation in the lower corona, and are able to match observed solar parameters well. Work by [@pantolmos2017magnetic] shows how this additional acceleration can be accounted for globally within their semi-analytic formulations. Previous works have aimed to understand the impact of more complex magnetic geometries on the rotational evolution of sun-like stars. [@holzwarth2005impact] examined the effect of non-uniform flux distributions on the magnetic braking torque, investigating the latitudinal dependence of the stellar wind produced within their MHD simulations. Similarly, [@garraffo2016missing] included magnetic spots at differing latitudes and examined the resulting changes to mass loss rate and spin down torque. The effectiveness of the magnetic braking from a stellar wind is found to be reduced for higher order magnetic geometries [@garraffo2015dependence]. This is explained in [@reville2015effect] as a reduction to the average Alfvén radius, which acts mathematically as a lever arm for the applied braking torque.
[@finley2017dipquad], hereafter Paper I, continued this work by discussing the morphology and braking torque generated from combined dipolar and quadrupolar field geometries, using ideal MHD simulations of thermally driven stellar winds. In this current work, we continue this mixed field investigation by including combinations with an octupole component. Section 2 introduces the simulations and the numerical methods used, along with our parametrisation of the magnetic field geometries and derived simulation properties. Section 3 explores the resulting relationship of the average Alfvén radius with increasing magnetic field strength for pure fields, and for generic combinations of axisymmetric dipole, quadrupole or octupole geometries. Section 4 uses the decay of the unsigned magnetic flux with distance to explain observed behaviours in our Alfvén radii relations; an analysis of the open magnetic flux in our wind solutions follows, with a single relation for predicting the average Alfvén radius based on the open flux. Conclusions and thoughts for future work can be found in Section 5.

Simulation Method and Numerical Setup
=====================================

As in Paper I, we use the PLUTO MHD code [@mignone2007pluto; @mignone2009pluto] with a spherical geometry to compute 2.5D (two dimensions, $r$, $\theta$, and three vector components, $r$, $\theta$, and $\phi$) steady state wind solutions for a range of magnetic geometries. The full set of ideal MHD equations is solved, including the energy equation and a closing equation of state. The internal energy density $\epsilon$ is given by $\rho\epsilon=p/(\gamma-1)$, where $\gamma$ is the ratio of specific heats. This general set of equations is capable of capturing non-adiabatic processes, such as shocks; however, the solutions found for our steady-state winds generally do not contain these.
For a gas composed of protons and electrons, $\gamma$ should take a value of 5/3; however, we decrease this value to 1.05 in order to reproduce the observed near isothermal nature of the solar corona [@steinolfson1988density] and a terminal speed consistent with the solar wind. This is done such that, on large scales, the wind follows the polytropic approximation, i.e. the wind pressure and density are related as $p\propto \rho^{\gamma}$ [@parker1965dynamical; @keppens1999numerical]. The reduced value of $\gamma$ has the effect of artificially heating the wind as it expands, without an explicit heating term in our equations. We adopt the numerics used in Paper I, except that we modify the radial discretisation of the computational mesh. Instead of a geometrically stretched radial grid as before, we now employ a stepping ($dr$) that grows logarithmically. The domain extent remains unchanged, from one stellar radius ($R_*$) to 60$R_*$, containing $N_r\times N_{\theta}=256\times512$ grid cells. This modification produces a more consistent aspect ratio between $dr$ and $rd\theta$ over the whole domain, which marginally increases our numerical accuracy and stability.
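A minimal sketch of such a grid (assuming a uniform $\theta$ spacing over $[0,\pi]$; PLUTO's actual setup is configured through its own grid definitions) shows that logarithmic radial spacing keeps the cell aspect ratio $dr/(r\,d\theta)$ essentially constant across the domain:

```python
import math

Nr, Ntheta = 256, 512
r_in, r_out = 1.0, 60.0   # radial domain in stellar radii

# cell edges spaced uniformly in log(r), so dr/r is the same for every cell
edges = [r_in*(r_out/r_in)**(i/Nr) for i in range(Nr + 1)]
dtheta = math.pi/Ntheta

def aspect(i):
    """Aspect ratio dr/(r*dtheta) of radial cell i, at the cell-centre radius."""
    dr = edges[i + 1] - edges[i]
    r_mid = 0.5*(edges[i + 1] + edges[i])
    return dr/(r_mid*dtheta)
```

With a geometrically stretched grid the aspect ratio would drift across the domain; here `aspect(0)` and `aspect(255)` agree to machine precision.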
Characteristic speeds such as the surface escape speed and Keplerian speed, $v_{\text{esc}}$ and $v_{\text{kep}}$, the equatorial rotation speed, $v_{\text{rot}}$, along with the surface adiabatic sound speed, $c_{\text{s}}$, and Alfvén speed, $v_{\text{A}}$, are given by $$v_{\text{esc}}=\sqrt{\frac{2GM_*}{R_*}}=\sqrt{2}v_{\text{kep}},$$ where $G$ is the gravitational constant, $R_*$ is the stellar radius and $M_*$ is the stellar mass, $$v_{\text{rot}}=\Omega_* R_*,$$ where $\Omega_*$ is the angular stellar rotation rate (the star is assumed to be in solid body rotation), $$c_{\text{s}}=\sqrt{\frac{\gamma p_*}{\rho_*}}, \label{polytropic}$$ where $\gamma$ is the polytropic index, and $p_*$ and $\rho_*$ are the gas pressure and mass density at the stellar surface respectively, and $$v_{\text{A}}=\frac{B_*}{\sqrt{4\pi\rho_*}},$$ where $B_*$ is the characteristic polar magnetic field strength (see Section 2.1).

  Parameter                       Value      Description
  ------------------------------- ---------- -----------------------------------
  $\gamma$                        1.05       Polytropic Index
  $c_{\text{s}}/v_{\text{esc}}$   0.25       Surface Sound Speed / Escape Speed
  $f$                             4.46E-03   Fraction of Break-up Rotation

  : Fixed Simulation Parameters[]{data-label="Constants"}

We set an initial wind speed within the domain using a spherically symmetric Parker wind solution [@parker1965dynamical], with the ratio of the surface sound speed to the escape speed, $c_{\text{s}}/v_{\text{esc}}$, setting the base wind temperature in such a way as to represent a group of solutions for differing gravitational field strengths. The same normalisation is applied to the surface magnetic field strength with $v_{\text{A}}/v_{\text{esc}}$, and to the surface rotation rate using $f=v_{\text{rot}}/v_{\text{kep}}$, such that each wind solution represents a family of solutions that can be applied to a range of stellar masses. The same system of input parameters is used by many previous authors [e.g.
@matt2008accretion; @matt2012magnetic; @reville2015effect; @pantolmos2017magnetic]. For this study we fix the wind temperature and stellar rotation at the values tabulated in Table \[Constants\]. A background field corresponding to our chosen potential magnetic field configuration (see Section \[Magconfig\]) is imposed over the initial wind solution and then all quantities are evolved to a steady state solution by the PLUTO code. The boundary conditions are enforced, as in Paper I, at the inner radial boundary (the stellar surface), and are chosen to give a self-consistent wind solution for a rotating magnetised star. A fixed surface magnetic geometry is therefore maintained along with solid body rotation. The use of a polytropic wind produces solutions which are far more isotropic than observed for the Sun [@vidotto2009three]. The velocity structure of the solar wind is known to be largely bimodal, having a slow and a fast component which originate under different circumstances [@fisk1998slow; @feldman2005sources; @riley2006comparison]. This work and previous studies using a polytropic assumption aim to model the globally averaged wind, which can be more generally applied to the variety of observed stellar masses and rotation periods. More complex wind driving and heating physics are needed in order to reproduce the observed velocity structure of the solar wind; however, they are far harder to generalise for other stars [@cranmer2007self; @pinto2016flux].

Magnetic Field Configurations {#Magconfig}
-----------------------------

The magnetic geometries considered in this work include dipole, quadrupole and octupole combinations, with different field strengths and in some cases relative orientations. As in Paper I, we describe the mixing of different field geometries using the ratio of the polar field strength in a given component to the total field strength.
Care is taken to parametrise the field combinations due to the behaviour of the two equatorially antisymmetric components, the dipole and octupole, at the poles. We generalise the ratio defined within Paper I for each component such that $$\mathcal{R}_x=\frac{B_*^{l=x}}{|B_*^{l=1}|+|B_*^{l=2}|+|B_*^{l=3}|}=\frac{B_*^{l=x}}{B_*}, \label{rvalue}$$ where, in this work, $l$ is the principal spherical harmonic number and $x$ can take the value 1, 2 or 3 for dipole, quadrupole or octupole fields. The polar field strength of a given component is written as $B_*^{l=x}$, and $B_*=|B_*^{l=1}|+|B_*^{l=2}|+|B_*^{l=3}|$ is the characteristic field strength. The polar field strengths in the denominator are given with absolute values, because we are interested in the characteristic strength of the combined components, which is the same for aligned and anti-aligned fields. Therefore summing the absolute values of the ratios produces unity, $$\sum_{l=1}^3|\mathcal{R}_l| = 1,$$ which allows the individual values of $\mathcal{R}_{\text{dip}},\mathcal{R}_{\text{quad}}$ and $\mathcal{R}_{\text{oct}}$ ($\equiv \mathcal{R}_{1},\mathcal{R}_{2}$ and $\mathcal{R}_{3}$) to range from 1 to -1 (north pole positive or negative), with the absolute total remaining constant.
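As a small illustration (a hypothetical helper, not code from this work), the ratios of equation (\[rvalue\]) can be computed directly from the component polar field strengths, with their absolute values always summing to unity:

```python
def field_ratios(b_dip, b_quad, b_oct):
    """Normalise component polar field strengths into the ratios R_l."""
    B_star = abs(b_dip) + abs(b_quad) + abs(b_oct)   # characteristic field strength
    return (b_dip/B_star, b_quad/B_star, b_oct/B_star)

ratios = field_ratios(10.0, -5.0, 5.0)   # example polar strengths, e.g. in Gauss
```

The signs of the individual ratios carry the relative orientations, while the normalisation is insensitive to them.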
We define the magnetic field components using these ratios and the Legendre polynomials $P_{lm}$, which for the axisymmetric ($m=0$) field components can be written as $$\begin{aligned} B_r(r,\theta)&=&B_*\sum_{l=1}^3\mathcal{R}_l P_{l0}(\cos\theta)\bigg(\frac{R_*}{r}\bigg)^{l+2},\\ B_{\theta}(r,\theta)&=&B_*\sum_{l=1}^3\frac{1}{l+1}\mathcal{R}_l P_{l1}(\cos\theta)\bigg(\frac{R_*}{r}\bigg)^{l+2}.\end{aligned}$$ The northern polar magnetic field strengths for each component are given by $$B_*^{l=1}=\mathcal{R}_{\text{dip}}B_*,\; B_*^{l=2}=\mathcal{R}_{\text{quad}}B_*,\; B_*^{l=3}=\mathcal{R}_{\text{oct}}B_*.$$ The relative orientation of the magnetic components is controlled throughout this work by setting the dipole and quadrupole fields ($B_*^{l=1}$ and $B_*^{l=2}$) to be positive at the northern stellar pole. The octupole component ($B_*^{l=3}$) is then combined with the dipolar and quadrupolar components using either a positive or negative strength on the north pole, which we define as the aligned and anti-aligned cases respectively.
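These field components can be sketched numerically (an illustrative snippet, not the PLUTO implementation; the associated Legendre functions are written here without the Condon-Shortley phase, so that a pure $l=1$ field reduces to the standard dipole $B_r=B_*\cos\theta\,(R_*/r)^3$, $B_\theta=\tfrac{1}{2}B_*\sin\theta\,(R_*/r)^3$):

```python
import math

# associated Legendre functions for l = 1..3 (no Condon-Shortley phase),
# written out explicitly; c = cos(theta)
P_l0 = {1: lambda c: c,
        2: lambda c: 0.5*(3.0*c*c - 1.0),
        3: lambda c: 0.5*c*(5.0*c*c - 3.0)}
P_l1 = {1: lambda c: math.sqrt(1.0 - c*c),
        2: lambda c: 3.0*c*math.sqrt(1.0 - c*c),
        3: lambda c: 1.5*(5.0*c*c - 1.0)*math.sqrt(1.0 - c*c)}

def field(r, theta, B_star, ratios):
    """B_r and B_theta of a mixed axisymmetric field; r in stellar radii."""
    c = math.cos(theta)
    Br = B_star*sum(ratios[l - 1]*P_l0[l](c)*r**-(l + 2) for l in (1, 2, 3))
    Bt = B_star*sum(ratios[l - 1]/(l + 1.0)*P_l1[l](c)*r**-(l + 2) for l in (1, 2, 3))
    return Br, Bt
```

For example, `field(1.0, 0.0, 1.0, (1.0, 0.0, 0.0))` returns the polar dipole field $(B_*, 0)$, and a pure octupole falls off as $r^{-5}$.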
[ccccccc|ccccccc]{}
Case & $\mathcal{R}_{\text{dip}}|\mathcal{R}_{\text{quad}}|\mathcal{R}_{\text{oct}}$ & $v_{\text{A}}/v_{\text{esc}}$ & $\langle R_{\text{A}}\rangle/R_*$ & $\Upsilon$ & $\Upsilon_{\text{open}}$ & $\langle v(R_{\text{A}})\rangle/v_{\text{esc}} $ & Case & $\mathcal{R}_{\text{dip}}|\mathcal{R}_{\text{quad}}|\mathcal{R}_{\text{oct}}$ & $v_{\text{A}}/v_{\text{esc}}$ & $\langle R_{\text{A}}\rangle/R_*$ & $\Upsilon$ & $\Upsilon_{\text{open}}$ & $\langle v(R_{\text{A}})\rangle/v_{\text{esc}} $\
1 & $1.0|0.0|0.0$ & 0.5 & 5.0 & 185 & 1460 & 0.22 & 65 & $0.5|0.0|0.5$ & 0.5 & 3.8 & 203 & 648 & 0.17\
2 & $1.0|0.0|0.0$ & 1.0 & 6.9 & 735 & 3540 & 0.29 & 66 & $0.5|0.0|0.5$ & 1.0 & 4.9 & 705 & 1380 & 0.22\
3 & $1.0|0.0|0.0$ & 1.5 & 8.5 & 1790 & 6440 & 0.34 & 67 & $0.5|0.0|0.5$ & 1.5 & 5.8 & 1580 & 2300 & 0.26\
4 & $1.0|0.0|0.0$ & 2.0 & 9.9 & 3380 & 9710 & 0.37 & 68 & $0.5|0.0|0.5$ & 2.0 & 6.7 & 2860 & 3420 & 0.29\
5 & $1.0|0.0|0.0$ & 3.0 & 12.3 & 8330 & 17100 & 0.42 & 69 & $0.5|0.0|0.5$ & 3.0 & 8.3 & 6830 & 6300 & 0.34\
6 & $1.0|0.0|0.0$ & 6.0 & 17.5 & 36500 & 43200 & 0.49 & 70 & $0.5|0.0|0.5$ & 6.0 & 11.7 & 29800 & 16200 & 0.42\
7 & $1.0|0.0|0.0$ & 12.0 & 22.6 & 134000 & 85300 & 0.54 & 71 & $0.5|0.0|0.5$ & 12.0 & 15.1 & 110000 & 33800 & 0.49\
8 & $1.0|0.0|0.0$ & 20.0 & 28.1 & 353000 & 156000 & 0.60 & 72 & $0.5|0.0|0.5$ & 20.0 & 18.7 & 299000 & 61000 & 0.50\
9 & $0.0|1.0|0.0$ & 0.5 & 3.4 & 179 & 409 & 0.14 & 73 & $0.3|0.0|0.7$ & 0.5 & 3.4 & 159 & 451 & 0.12\
10 & $0.0|1.0|0.0$ & 1.0 & 4.0 & 689 & 733 & 0.18 & 74 & $0.3|0.0|0.7$ & 1.0 & 4.3 & 607 & 977 & 0.20\

[Note: Reduced table shown, full data available as supplemental. ]{}

The addition of dipole and quadrupole components was explored in Paper I. We showed the fields to add in one hemisphere and subtract in the other. Similar to the dipole, the octupole component belongs to the “primary” symmetry family, having anti-symmetric field polarity about the equator [@mcfadden1991reversals].
The addition of any primary geometry with the “secondary” family quadrupole (equatorially symmetric) would be expected to behave qualitatively similarly. A different behaviour is expected from the addition of the two primary geometries (dipole-octupole). Here the field addition and subtraction are primarily governed by the relative orientations of the fields with respect to one another. Aligned fields will combine constructively over the pole and subtract from one another in the equatorial region. Anti-aligned primary fields, conversely, will subtract on the pole and add over the equator. Including the results from Paper I, this work includes combinations of all the possible permutations of the axisymmetric dipole, quadrupole and octupole magnetic geometries. Table \[Parameters\] contains a complete list of stellar parameters for the cases computed within this work. Parameters for the dipole-quadrupole combined field cases are available in Table 1 of Paper I. It is noted that in the course of the current work, the pure dipolar and quadrupolar cases were re-simulated, see Table \[Parameters\].

Derived Stellar Wind Properties
-------------------------------

The simulations produce steady state solutions for density, $\rho$, pressure, $p$, velocity, $\bf v$, and magnetic field strength, $\bf B$, for each stellar wind case. From these results, the behaviour of the spin down torque is ascertained. The torque on the star, $\tau$, due to the loss of angular momentum in the stellar wind is calculated as $$\tau=\int_{\text{A}}\Lambda\rho{\bf v} \cdot d{\bf A},$$ where the angular momentum flux, given by ${\bf F_{\text{AM}}}=\Lambda\rho{\bf v}$ [@keppens2000stellar], is integrated over spherical shells of area $A$ (outside the closed field regions).
$\Lambda$ is given by $$\Lambda(r,\theta)=r\sin\theta\bigg(v_{\phi}-\frac{B_{\phi}}{\rho}\frac{|{\bf B_p}|^2}{{\bf v_p \cdot B_p}}\bigg).$$ Similarly, the mass loss rate from our wind solutions is calculated as $$\dot{M}=\int_{\text{A}}\rho{\bf v} \cdot d{\bf A}.$$ An average Alfvén radius is then defined, in terms of the torque, $\tau$, mass loss rate, $\dot{M}$, and rotation rate, $\Omega_*$, as $$\langle R_{\text{A}}\rangle\equiv\sqrt{\frac{\tau}{\dot{M}\Omega_*}}. \label{averageAlfven}$$ In this formulation, $\langle R_{\text{A}}\rangle/R_*$ is defined as a dimensionless efficiency factor, by which the magnetised wind carries angular momentum from the star, i.e. a larger average Alfvén radius produces a larger torque for a fixed rotation rate and mass loss rate, $$\tau=\dot{M}\Omega_*R_*^2\bigg(\frac{\langle R_{\text{A}}\rangle}{R_*}\bigg)^2. \label{torque}$$ In ideal MHD, $\langle R_{\text{A}}\rangle$ is associated with a cylindrical Alfvén radius, which acts like a “lever arm” for the spin-down torque on the star. The methodology of this work follows closely that of Paper I, in which we produce semi-analytic formulations for $\langle R_{\text{A}} \rangle$ in terms of the wind magnetisation, $\Upsilon$, as defined in previous works [@matt2008accretion; @matt2012magnetic; @reville2015effect; @pantolmos2017magnetic], $$\Upsilon=\frac{B_*^2R_*^2}{\dot{M}v_{\text{esc}}}, \label{upsilon}$$ where $B_*$ is now the characteristic polar field, which is split amongst the different geometries using the ratios $\mathcal{R}_{\text{dip}}$, $\mathcal{R}_{\text{quad}}$ and $\mathcal{R}_{\text{oct}}$. The values of $\Upsilon$ produced from the steady state solutions are indirectly controlled by increasing the value of $v_{\text{A}}/v_{\text{esc}}$. This increases the polar magnetic field strength for a given density normalisation.
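To give a feel for the magnitudes involved in equations (\[torque\]) and (\[upsilon\]), one can plug in illustrative solar-like numbers (the values below, including the assumed $\langle R_{\text{A}}\rangle/R_*=5$, are rough assumptions for illustration only, not simulation results):

```python
# illustrative solar-like values in cgs units (assumptions, not simulation output)
G_newton = 6.674e-8            # gravitational constant
M_star = 1.989e33              # stellar mass [g]
R_star = 6.957e10              # stellar radius [cm]
Omega_star = 2.6e-6            # rotation rate [rad/s]
Mdot = 2e-14*1.989e33/3.156e7  # ~2e-14 Msun/yr converted to g/s
B_star = 1.0                   # characteristic polar field strength [G]

v_esc = (2.0*G_newton*M_star/R_star)**0.5            # ~620 km/s
Upsilon = B_star**2*R_star**2/(Mdot*v_esc)           # wind magnetisation

RA_over_Rstar = 5.0                                  # assumed lever arm
torque = Mdot*Omega_star*R_star**2*RA_over_Rstar**2  # spin-down torque [erg]
```

The quadratic dependence on the lever arm is what makes the Alfvén radius the key quantity to model: doubling $\langle R_{\text{A}}\rangle$ quadruples the torque at fixed $\dot{M}$ and $\Omega_*$.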
The mass loss rate is similarly uncontrolled and evolves to steady state, depending mostly on our choice of Parker wind parameters, but is also adjusted self-consistently by the magnetic field. The values of $\Upsilon$ are tabulated in Table \[Parameters\], along with $\mathcal{R}_l$ values, magnetic field strengths given by $v_{\text{A}}/v_{\text{esc}}$, and the average Alfvén radii for each case simulated. Results for combined dipole-quadrupole cases are available in Table 1 of Paper I. Figure \[Up\_crit\] shows the parameter space of simulations with their value of $\Upsilon$ against the different ratios for either quadrupole-octupole or dipole-octupole cases, with the lower order geometry ratio labelling the cases ($\mathcal{R}_{\text{quad}}$ and $\mathcal{R}_{\text{dip}}$ respectively).

![image](f1.eps){width="\textwidth"}

Wind Solutions and $\langle R_{\text{A}}\rangle$ Scaling Relations
==================================================================

Single Geometry Winds
---------------------

  Topology ($l$)     $K_{\text{s}}$   $m_{\text{s}}$
  ------------------ ---------------- -----------------
  Dipole ($1$)       $1.53\pm0.03$    $0.229\pm0.002$
  Quadrupole ($2$)   $1.70\pm0.02$    $0.134\pm0.002$
  Octupole ($3$)     $1.80\pm0.01$    $0.087\pm0.001$

  : Single Component Fit Parameters to equation (\[single\_mode\])[]{data-label="fitValues"}

[Note: Fit values deviate slightly from those presented in Paper I due to the more accurate numerical results found with logarithmic grid spacing, used here. ]{}

![Average Alfvén radius vs the wind magnetisation, $\Upsilon$ (equation \[upsilon\]) in our simulations with single geometries (points). Different scaling relations are shown for each pure geometry (solid lines). Higher $l$ order geometries produce a smaller Alfvén radius and thus a smaller spin-down torque for a given polar field strength and mass loss rate.
A similar result was first shown by [@reville2015effect].[]{data-label="Upsilon_puremodes"}](f2.pdf){width="47.00000%"} For single magnetic geometries, increasing the complexity of the field decreases the effectiveness of the magnetic braking process by reducing the average Alfvén radius (braking lever arm) for a given field strength [@garraffo2015dependence]. The impact of changing field geometries on the scaling of the Alfvén radius for thermally driven winds was shown by [@reville2015effect] for the dipole, quadrupole and octupole geometries. We repeat the result of [@reville2015effect] for a slightly hotter coronal wind temperature, $c_{\text{s}}/v_{\text{esc}}=0.25$ in our cases, compared to $c_{\text{s}}/v_{\text{esc}}=0.222$. This temperature more reasonably approximates the solar wind terminal velocity, typically resulting in a wind speed of $\approx 500$ km/s at 1 AU for solar parameters. For each magnetic geometry, we simulate 8 different field strengths, changing the input value of $v_{\text{A}}/v_{\text{esc}}$ as tabulated in Table \[Parameters\] (cases 1-24). Each wind solution gives a value for the Alfvén radius, $\langle R_{\text{A}} \rangle$, and the wind magnetisation, $\Upsilon$. These values are represented in Figure \[Upsilon\_puremodes\] as coloured dots, and their scaling can be described using the Alfvén radius relation from [@matt2008accretion], with three distinct power law relations for the different magnetic geometries, as found previously in the work of [@reville2015effect], $$\frac{\langle R_{\text{A}} \rangle}{R_*}=K_{\text{s}}\Upsilon^{m_{\text{s}}}, \label{single_mode}$$ where $K_{\text{s}}$ and $m_{\text{s}}$ are fit parameters for this relation, which utilises the surface field strength. Best fit parameters for each geometry are tabulated in Table \[fitValues\]. With increasing $l$ values, the higher order geometries produce increasingly shallow slopes with wind magnetisation, such that they approach a purely hydrodynamical lever arm i.e.
the wind carries away angular momentum corresponding to the surface rotation alone, with the torque efficiency equal to the average cylindrical radius of the stellar surface from the rotation axis, $\langle R_{\text{A}} \rangle/R_*=(2/3)^{1/2}$ [@mestel1968magnetic]. Any significant magnetic braking in sun-like stars will therefore be predominantly mediated by the lowest order components. Combined Magnetic Geometries ---------------------------- Based on work performed in Paper I, we anticipate the behaviour of the average Alfvén radius for magnetic field geometries which contain dipole, quadrupole and octupole components. The dipole component, having the slowest radial decay, is expected to govern the field strength at large distances; at intermediate distances the field should scale like the quadrupole and, close to the star, like the octupole geometry. The Alfvén radius formulation therefore takes the form of a twice broken power law, $$\frac{\langle R_{\text{A}} \rangle}{R_*}=\max\Bigg\{ \begin{array}{@{}ll@{}} K_{\text{s,dip}}[\mathcal{R}_{\text{dip}}^2\Upsilon]^{m_{\text{s,dip}}}, \\ K_{\text{s,quad}}[(|\mathcal{R}_{\text{dip}}|+|\mathcal{R}_{\text{quad}}|)^2\Upsilon]^{m_{\text{s,quad}}}, \\ K_{\text{s,oct}}[(|\mathcal{R}_{\text{dip}}|+|\mathcal{R}_{\text{quad}}|+|\mathcal{R}_{\text{oct}}|)^2\Upsilon]^{m_{\text{s,oct}}}, \end{array} \label{DQO_law}$$ which approximates the simulated values of the average Alfvén radius. Note that $|\mathcal{R}_{\text{dip}}|+|\mathcal{R}_{\text{quad}}|+|\mathcal{R}_{\text{oct}}|=1$, such that the final scaling depends purely on the total $\Upsilon$. Here we present simulation results from combinations of each field, sampling a range of mixing fractions and field strengths. These are used to validate this semi-analytic prescription for predicting the spin-down torque on a star, due to a given combination of axisymmetric magnetic fields.
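The twice broken power law is straightforward to evaluate numerically. Below is a minimal sketch using the single-geometry fit constants of Table \[fitValues\]; the function name and interface are our own illustrative choices, not part of the simulation code.

```python
# Fit constants (K_s, m_s) for l = 1, 2, 3, from Table [fitValues].
FITS = {1: (1.53, 0.229), 2: (1.70, 0.134), 3: (1.80, 0.087)}

def alfven_radius(upsilon, R_dip, R_quad, R_oct):
    """Predicted <R_A>/R_* from the twice broken power law (DQO_law).

    The R_l are signed surface field ratios with |R_dip|+|R_quad|+|R_oct| = 1;
    each branch uses the cumulative field fraction down to that order.
    """
    branches = []
    partial = 0.0
    for l, R_l in zip((1, 2, 3), (R_dip, R_quad, R_oct)):
        K, m = FITS[l]
        partial += abs(R_l)                     # |R_dip|, then +|R_quad|, ...
        branches.append(K * (partial**2 * upsilon)**m)
    return max(branches)

# A pure dipole recovers the single-geometry relation K_s * Upsilon^m_s:
print(alfven_radius(1000.0, 1.0, 0.0, 0.0))
```

For a mixed field the predicted lever arm is always smaller than for a pure dipole of the same total $\Upsilon$, reflecting the faster radial decay of the higher order components.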
### Dipole Combined with Quadrupole The regime of dipole and quadrupole combined geometries is presented in Paper I. We briefly reiterate the results here displaying values from that study in Figure \[Upsilon\_DQ\]. ![Average Alfvén radius scaling with wind magnetisation, $\Upsilon$, for the different combinations of dipole and quadrupole, from the study in Paper I (points). Solid lines show scaling for pure dipole and quadrupole. The deviation from single power laws shows how the combination of dipole and quadrupole fields modifies the Alfvén radius scaling, compared to single geometries. The scaling predicted by only considering the fractional dipole component is plotted with multiple dashed coloured lines corresponding to the different $\mathcal{R}_{\text{dip}}$ values. This shows that $\langle R_{\text{A}}\rangle/R_*$ scales with the dipole component only, unless the quadrupole is dominant at a distance of $\approx R_{\text{A}}$.[]{data-label="Upsilon_DQ"}](f3.pdf){width="50.00000%"} These fields belong to different symmetry families, primary and secondary. As such their addition creates a globally asymmetric field about the equator, with the north pole in this case being stronger than the south. The relative fraction of the two components alters the location of the current sheet/streamers, which appear to resemble the dominant global geometry. ![image](f4.pdf){width="90.00000%"} It is shown in Paper I that the quadrupole component has a faster radial decay than the dipole, and therefore at large distances only the dipole component of the field influences the location of the Alfvén radius. Closer to the star, the total field decays radially like the quadrupole, with the dipole component adding its strength, so near to the star the Alfvén radius scaling depends on the total field strength. 
Therefore, we developed a broken power law to describe the scaling of the average Alfvén radius with wind magnetisation, which takes the maximum of either the quadrupole slope using the total field strength (solid blue line), as if the combined field decays like a quadrupole, or the dipolar slope using only the dipole component (colour-coded dashed lines). The dipole component of the wind magnetisation is formulated as $$\Upsilon_{\text{dip}}=\bigg(\frac{B_*^{l=1}}{B_*}\bigg)^2\frac{B_*^2R_*^2}{\dot{M}v_{\text{esc}}}=\mathcal{R}_{\text{dip}}^2\Upsilon. \label{upsilon_dipole}$$ Mathematically, equation (\[DQO\_law\]) becomes the broken power law from Paper I when $\mathcal{R}_{\text{oct}}=0$, $$\frac{\langle R_{\text{A}} \rangle}{R_*}=\left\{ \begin{array}{@{}ll@{}} K_{\text{s,dip}}[\mathcal{R}_{\text{dip}}^2\Upsilon]^{m_{\text{s,dip}}}, & \text{if}\ \Upsilon>\Upsilon_{crit}(\mathcal{R}_{\text{dip}}), \\ K_{\text{s,quad}}[\Upsilon]^{m_{\text{s,quad}}}, & \text{if}\ \Upsilon\leq\Upsilon_{crit}(\mathcal{R}_{\text{dip}}), \end{array}\right.$$ where the octupolar relation is ignored, and $|\mathcal{R}_{\text{dip}}|+|\mathcal{R}_{\text{quad}}|=1$. Here $\Upsilon_{crit}$ describes the intercept of the dipole-component and quadrupole slopes, $$\Upsilon_{crit}(\mathcal{R}_{\text{dip}})=\bigg[\frac{K_{\text{s,dip}}}{K_{\text{s,quad}}}\mathcal{R}_{\text{dip}}^{2m_{\text{s,dip}}} \bigg]^{\frac{1}{m_{\text{s,quad}}-m_{\text{s,dip}}}}.$$ Equation (\[DQO\_law\]) further expands the reasoning above to include any field combination of the axisymmetric dipole, quadrupole and octupole. The following sections test this formulation against simulated combined geometry winds. ### Quadrupole Combined with Octupole Stellar magnetic fields containing both a quadrupole and octupole field component present another example of primary and secondary family fields in combination.
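For intuition, $\Upsilon_{crit}$ can be evaluated directly with the fit constants of Table \[fitValues\]. This is a sketch only: the values that follow are implied by those constants (which were fit to the hotter wind used here), so they are indicative rather than the Paper I numbers.

```python
# Critical wind magnetisation above which the dipole-component branch of
# the broken power law takes over from the quadrupole branch.
# K_s, m_s are the dipole/quadrupole fits of Table [fitValues].
K_dip, m_dip   = 1.53, 0.229
K_quad, m_quad = 1.70, 0.134

def upsilon_crit(R_dip):
    return (K_dip / K_quad * R_dip**(2.0 * m_dip))**(1.0 / (m_quad - m_dip))

# Smaller dipole fractions push the transition to larger magnetisations:
for R_dip in (0.8, 0.5, 0.3, 0.2, 0.1):
    print(R_dip, upsilon_crit(R_dip))
```

Because $m_{\text{s,quad}}-m_{\text{s,dip}}<0$, a weaker dipole fraction raises $\Upsilon_{crit}$ steeply: the dipole only dominates the lever arm once the wind is strongly magnetised.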
As with the axisymmetric dipole-quadrupole addition, the relative orientation of the two components simply determines which regions of magnetic field experience addition and subtraction about the equator, so that the torque and mass loss rate do not depend on this orientation. Compared with the dipole component, both fields are less effective in generating a magnetic lever arm to brake rotation at a given value of $\Upsilon$. We test the validity of equation (\[DQO\_law\]), setting $\mathcal{R}_{\text{dip}}=0$, and systematically varying the value of $\mathcal{R}_{\text{quad}}$, with the octupole fraction comprising the remaining field, $\mathcal{R}_{\text{oct}}=1-\mathcal{R}_{\text{quad}}$. Five mixed case values are selected ($\mathcal{R}_{\text{quad}}=0.8, 0.5, 0.3, 0.2, 0.1$) that parametrise the mixing of the two geometries. Steady state wind solutions are displayed in Figure \[QO\_Example\], showing, as with the dipole-quadrupole addition, the equatorially asymmetric fields produced. With increasing polar field strength, the streamers are observed to shift towards the lowest order geometry morphology (quadrupolar in this case), as was shown for the dipole in Paper I. The average Alfvén radii and wind magnetisation are shown in Figure \[Upsilon\_QO\]. The behaviour of $\langle R_{\text{A}} \rangle$ is qualitatively similar to that of the dipole-quadrupole addition, with combined field cases scattered between the two pure geometry scaling relations. The range of available $\langle R_{\text{A}} \rangle$ values between the pure quadrupole and octupole scaling relations (solid blue and green respectively) is reduced compared to the previous dipole-quadrupole case, due to the weaker dependence of the Alfvén radius on wind magnetisation. ![Average Alfvén radius vs wind magnetisation, $\Upsilon$, for the different combinations of quadrupole and octupole, in a similar format to Figure \[Upsilon\_DQ\].
Colour-coded dashed lines relate to the prediction considering only the quadrupolar component of the field for each $\mathcal{R}_{\text{quad}}$. The combinations shown here behave in a similar manner to dipole-quadrupole combined fields, in the sense that the lower order field (with the lowest $l$) governs the Alfvén radius for large wind magnetisations, $\Upsilon$, while the higher order field (large $l$) controls the low magnetisation scaling. []{data-label="Upsilon_QO"}](f5.pdf){width="50.00000%"} ![Average Alfvén radius vs the quadrupolar component of the wind magnetisation, $\Upsilon_{\text{quad}}$, for cases with mixed quadrupole and octupole components (points). The solid blue line shows the prediction based on the quadrupole component only (equation \[up\_quad\_rel\]). The dashed lines show the octupolar scaling (equation \[oct\_scaling\_re\]). A broken power law composed of the quadrupolar component and the octupolar scaling ($\mathcal{R}_{\text{quad}}$ dependent) can be constructed similarly to work done in Paper I. The quadrupolar geometry dominates the scaling, for all of the $\mathcal{R}_{\text{quad}}$ values simulated here, by $\langle R_{\text{A}}\rangle/R_*\approx 9$. The point at which the quadrupolar geometry dominates for a given $\mathcal{R}_{\text{quad}}$ value can be approximated by considering the strength of the two fields at the Alfvén radii, i.e.
the radial distance at which the strength of the quadrupole matches or exceeds that of the octupole, $B_{\text{quad}}/B_{\text{oct}} =[\mathcal{R}_{\text{quad}}/(1-\mathcal{R}_{\text{quad}})](r/R_*)$.[]{data-label="Up_quad_QO"}](f6.pdf){width="50.00000%"} As required by equation (\[DQO\_law\]), with no dipolar component, we introduce the quadrupole component of $\Upsilon$ as, $$\Upsilon_{\text{quad}}=\bigg(\frac{B_*^{l=2}}{B_*}\bigg)^2\frac{B_*^2R_*^2}{\dot{M}v_{\text{esc}}}=\mathcal{R}_{\text{quad}}^2\Upsilon,$$ and the second relation in equation (\[DQO\_law\]) takes the form, $$\frac{\langle R_{\text{A}} \rangle}{R_*}=K_{\text{s,quad}}[\Upsilon_{\text{quad}}]^{m_{\text{s,quad}}}, \label{up_quad_rel}$$ where $K_{\text{s,quad}}$ and $m_{\text{s,quad}}$ are determined from the pure geometry scaling, see Table \[fitValues\]. The quadrupole component of the wind magnetisation is plotted for different $\mathcal{R}_{\text{quad}}$ values in Figure \[Upsilon\_QO\], showing a behaviour analogous to that of the dipole component in the dipole-quadrupole combined fields. The $\Upsilon_{\text{quad}}$ formulation is shown within Figure \[Up\_quad\_QO\], with the solid blue line described by equation (\[up\_quad\_rel\]). This agrees with a large proportion of the wind solutions, with deviations due to a switch of regime onto the octupole relation, the third relation in equation (\[DQO\_law\]), $$\frac{\langle R_{\text{A}} \rangle}{R_*}=K_{\text{s,oct}}[\Upsilon]^{m_{\text{s,oct}}}=\frac{K_{\text{s,oct}}}{\mathcal{R}_{\text{quad}}^{2m_{\text{s,oct}}}}[\Upsilon_{\text{quad}}]^{m_{\text{s,oct}}}, \label{oct_scaling_re}$$ shown with a solid green line in Figure \[Upsilon\_QO\] and dashed colour-coded lines in Figure \[Up\_quad\_QO\]. As with the dipole-quadrupole addition, a broken power law can be formulated taking the maximum of either the octupole scaling or the quadrupole component scaling, for a given $\mathcal{R}_{\text{quad}}$ value.
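The crossover quoted in the caption of Figure \[Up\_quad\_QO\] follows from setting $B_{\text{quad}}/B_{\text{oct}}=1$ in the potential-field ratio. A small sketch (the function name is our own):

```python
# Radius (in stellar radii) beyond which the quadrupole component of a
# potential field exceeds the octupole component, from
# B_quad/B_oct = [R_quad/(1 - R_quad)](r/R_*) = 1.
def crossover_radius(R_quad):
    return (1.0 - R_quad) / R_quad

for R_quad in (0.8, 0.5, 0.3, 0.2, 0.1):
    print(R_quad, crossover_radius(R_quad))
# Even the weakest simulated quadrupole fraction, R_quad = 0.1, overtakes
# the octupole by r/R_* = 9, consistent with the quadrupolar scaling of
# <R_A> taking over by roughly 9 stellar radii.
```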
For the cases simulated, we find a deviation from this broken power law of no greater than $5\%$, with most cases showing a closer agreement. ![image](f7.pdf){width="90.00000%"} ### Dipole Combined with Octupole Unlike the previous field combinations, both the dipole and octupole belong to the primary symmetry family and thus their addition produces two distinct field topologies for aligned or anti-aligned fields. Again, we test equation (\[DQO\_law\]), now with $\mathcal{R}_{\text{quad}}=0$. The field combinations are parametrised using the ratio of dipolar field to total field strength, $\mathcal{R}_{\text{dip}}$, with the remaining field in the octupolar component $\mathcal{R}_{\text{oct}}=1-\mathcal{R}_{\text{dip}}$. The ratio of dipolar field is varied ($\mathcal{R}_{\text{dip}}=0.5, 0.3, 0.2, 0.1$). Additionally we repeat these ratios for both aligned and anti-aligned fields. This produces eight distinct field geometries that cover a range of mixed dipole-octupole fields. Figure \[DO\_Example\] displays the behaviour of both aligned and anti-aligned cases with increasing field strength. The combination of dipolar and octupolar fields produces a complex field topology which is alignment dependent and impacts the local flow properties of the stellar wind. The symmetric property of the global field is maintained about the equator. Aligned combinations have magnetic field addition over the poles, which increases the Alfvén speed and produces a larger Alfvén radius over the poles. However, the fields subtract over the equator, which reduces the size of the Alfvén radius there; top panel of Figure \[DO\_Example\]. The bottom panel shows anti-aligned mixed cases to exhibit the opposite behaviour, with a larger equatorial Alfvén radius and a reduction to the size of the Alfvén surface at higher latitudes. The torque averaged Alfvén radius is shown by the grey dashed lines in each case, representing the cylindrical Alfvén radius $\langle R_{\text{A}} \rangle$.
For the simulations in this work, the anti-aligned cases produce a larger lever arm compared with their aligned counterparts, with a few exceptions. In general, the increased Alfvén radius at the equator for the anti-aligned fields is more effective at increasing the torque averaged Alfvén radius than the larger high-latitude Alfvén radius in the aligned field cases. The locations of the current sheets are shown in Figure \[DO\_Example\] using red dashed lines. As noted with the dipole-quadrupole addition in Paper I, the global dipolar geometry is restored with increasing fractions of the dipole component or increased field strength for a given mixed geometry. The latter is shown in Figure \[DO\_Example\] for both aligned and anti-aligned cases. With increased field strength, a single dipolar streamer begins to be recovered over the equator. A key difference between the two field alignments is the asymptotic location of the three streamers. In the case of an aligned octupole component, increasing the total field strength for a given ratio forces the streamers towards the equator, at which point they begin to merge into the dipolar streamer. With an anti-aligned octupole component, the opposite is found, with the high latitude streamers forced towards the poles and onto the rotation axis. It is unclear whether this effect itself significantly influences the global torque. Using equation (\[DQO\_law\]), with no quadrupolar component, we anticipate the dipolar component (first relation) will be the most significant in governing the global torque. Figures \[Upsilon\_DO\] and \[Up\_dip\_DO\] show the dipole-octupole cases following the expected behaviour, as observed for dipole-quadrupole and quadrupole-octupole combinations.
We see that the average Alfvén radius either follows the dipole component scaling ($\Upsilon_{\text{dip}}$), or the octupole scaling relation, $$\frac{\langle R_{\text{A}} \rangle}{R_*}=K_{\text{s,oct}}[\Upsilon]^{m_{\text{s,oct}}}=\frac{K_{\text{s,oct}}}{\mathcal{R}_{\text{dip}}^{2m_{\text{s,oct}}}}[\Upsilon_{\text{dip}}]^{m_{\text{s,oct}}}. \label{oct_scaling_re2}$$ However, as evident in both figures, there is a deviation from this scaling, with the strongest variations belonging to low $\mathcal{R}_{\text{dip}}$ cases. Anti-aligned cases follow the behaviour expected from Paper I with a much higher precision than the aligned cases. Figure \[Up\_dip\_DO\] shows the dipole scaling to over-predict the aligned cases compared with the anti-aligned cases. This occurs because equation (\[DQO\_law\]) is a simplified picture of the actual dynamics within our simulations, and as such, it does not encapsulate all of the physical effects. The trends are still obvious for both aligned and anti-aligned cases, and the scatter simply represents a reduction in the precision of our formulation. Despite this deviation from predicted values, Figure \[Up\_dip\_DO\] shows the dipole component again to be the most significant in governing the global torque. With a more complex (higher $l$) secondary component, the dipole dominates the Alfvén radius scaling at a much lower wind magnetisation than in the dipole-quadrupole combinations. For the dipole-octupole cases simulated, the dipole component dominates the majority of the simulated cases. For our dipole and octupole mixed fields the transition between regimes occurs at $\Upsilon_{\text{dip}}\approx100$, such that $\langle R_{\text{A}} \rangle$ for fields with $\mathcal{R}_{\text{dip}}=0.1$, or higher, and a physically realistic wind magnetisation, will be governed by the dipole component. ![Average Alfvén radius scaling with wind magnetisation, $\Upsilon$, for the different combinations of dipole and octupole.
The fields are either added aligned at the poles (points) or anti-aligned (stars). Dashed lines show the dipole component scaling, colour-coded to match the simulated values of $\mathcal{R}_{\text{dip}}$. The overall behaviour here is similar to the previous mixed combined fields, with the lower order field governing the Alfvén radius for large wind magnetisations. However, the different field alignments appear to scatter around the $\Upsilon_{\text{dip}}$ approximation, with the anti-aligned cases typically having larger $R_{\text{A}}$ than the aligned cases, for the same $\Upsilon$.[]{data-label="Upsilon_DO"}](f8.pdf){width="50.00000%"} ![Average Alfvén radius scaling with only the dipolar component of the wind magnetisation, $\Upsilon_{\text{dip}}$, for cases with combined dipole and octupole components. Aligned fields are shown with circles, anti-aligned with stars. The parameter space investigated here is well approximated by the dipole component scaling relation (solid red line). Generally the aligned field cases are shown to under-shoot the dipole component approximation, whilst the anti-aligned cases match the power law with similar agreement to the previous combined geometries. The qualitative behaviour is again similar to the previous combined cases; however, due to the larger difference in the radial decay of the two fields, i.e. $B_{\text{dip}}/B_{\text{oct}} =[\mathcal{R}_{\text{dip}}/(1-\mathcal{R}_{\text{dip}})](r/R_*)^2$, the dipole dominates at a much smaller $R_{\text{A}}/R_*\approx3$.[]{data-label="Up_dip_DO"}](f9.pdf){width="50.00000%"} ### Combined Dipole, Quadrupole and Octupole Fields In addition to the quadrupole-octupole and dipole-octupole combinations presented previously, we also perform a small set of simulations containing all three components. Their stellar wind parameters and results are tabulated in Table \[Parameters\_extra\].
We select a regime where the dipole does not dominate ($\mathcal{R}_{\text{dip}}=0.1$), to observe the interplay of the additional quadrupole and octupole components. We also utilise cases 89-96 and 121-128 from this work and cases 51-60 from Paper I, all of which sample varying fractions of quadrupole and octupole with a fixed $\mathcal{R}_{\text{dip}}=0.1$. These are compared against the three component cases, 129-160. [ccccccc]{} Case & $\mathcal{R}_{\text{dip}}|\mathcal{R}_{\text{quad}}|\mathcal{R}_{\text{oct}}$ & $v_{\text{A}}/v_{\text{esc}}$ & $\langle R_{\text{A}}\rangle/R_*$ & $\Upsilon$ & $\Upsilon_{\text{open}}$ & $\langle v(R_{\text{A}})\rangle/v_{\text{esc}} $\ 129 & $0.1|0.6|0.3$ & 0.5 & 3.1 & 181 & 289 & 1.09\ 130 & $0.1|0.6|0.3$ & 1.0 & 3.6 & 698 & 502 & 1.33\ 131 & $0.1|0.6|0.3$ & 1.5 & 4.0 & 1550 & 709 & 1.49\ 132 & $0.1|0.6|0.3$ & 2.0 & 4.4 & 2760 & 923 & 1.61\ 133 & $0.1|0.6|0.3$ & 3.0 & 4.9 & 6320 & 1400 & 1.81\ 134 & $0.1|0.6|0.3$ & 6.0 & 6.3 & 27100 & 3030 & 2.17\ 135 & $0.1|0.6|0.3$ & 12.0 & 7.9 & 111000 & 6430 & 2.65\ 136 & $0.1|0.6|0.3$ & 20.0 & 9.3 & 308000 & 11200 & 3.09\ 137 & $0.1|0.3|0.6$ & 0.5 & 2.7 & 182 & 194 & 0.97\ 138 & $0.1|0.3|0.6$ & 1.0 & 3.1 & 702 & 326 & 1.17\ 139 & $0.1|0.3|0.6$ & 1.5 & 3.4 & 1560 & 451 & 1.29\ 140 & $0.1|0.3|0.6$ & 2.0 & 3.7 & 2760 & 585 & 1.37\ 141 & $0.1|0.3|0.6$ & 3.0 & 4.2 & 6230 & 903 & 1.53\ 142 & $0.1|0.3|0.6$ & 6.0 & 5.5 & 25600 & 2180 & 1.85\ 143 & $0.1|0.3|0.6$ & 12.0 & 7.2 & 97000 & 4850 & 2.25\ 144 & $0.1|0.3|0.6$ & 20.0 & 8.6 & 246000 & 8560 & 2.61\ 145 & $0.1|0.6|-0.3$ & 0.5 & 3.2 & 34 & 312 & 1.13\ 146 & $0.1|0.6|-0.3$ & 1.0 & 3.7 & 119 & 533 & 1.37\ 147 & $0.1|0.6|-0.3$ & 1.5 & 4.1 & 258 & 765 & 1.53\ 148 & $0.1|0.6|-0.3$ & 2.0 & 4.5 & 451 & 1000 & 1.65\ 149 & $0.1|0.6|-0.3$ & 3.0 & 5.1 & 1020 & 1500 & 1.85\ 150 & $0.1|0.6|-0.3$ & 6.0 & 6.5 & 4450 & 3400 & 2.21\ 151 & $0.1|0.6|-0.3$ & 12.0 & 8.2 & 18600 & 7260 & 2.69\ 152 & $0.1|0.6|-0.3$ & 20.0 & 10.1 & 55300 & 13200 & 3.17\ 153 &
$0.1|0.3|-0.6$ & 0.5 & 3.0 & 4 & 254 & 1.05\ 154 & $0.1|0.3|-0.6$ & 1.0 & 3.5 & 21 & 430 & 1.25\ 155 & $0.1|0.3|-0.6$ & 1.5 & 3.9 & 49 & 607 & 1.37\ 156 & $0.1|0.3|-0.6$ & 2.0 & 4.2 & 91 & 782 & 1.49\ 157 & $0.1|0.3|-0.6$ & 3.0 & 4.7 & 214 & 1160 & 1.65\ 158 & $0.1|0.3|-0.6$ & 6.0 & 5.9 & 916 & 2440 & 2.01\ 159 & $0.1|0.3|-0.6$ & 12.0 & 7.5 & 3770 & 5360 & 2.41\ 160 & $0.1|0.3|-0.6$ & 20.0 & 9.3 & 11300 & 10200 & 2.85\ ![Comparison of the simulated Alfvén radii vs the predicted Alfvén radii using equation (\[DQO\_law\]), top panel. The line of agreement is shown with a solid black line, and the bounds of $10\%$ deviation from the predicted value are shown with black dashed lines. The bottom panel shows the residual, $(\langle R_{\text{A}}\rangle_{sim}-\langle R_{\text{A}}\rangle_{FM18})/\langle R_{\text{A}}\rangle_{sim}$, and the $10\%$ deviation with dashed lines. Cases 129-135 & 145-152 are coloured purple and cases 137-144 & 153-160 are coloured orange, different from the colour scheme of previous figures. The quadrupole and octupole dominated cases with $\mathcal{R}_{\text{dip}}$=0.1 are shown with their original colouring (blue and green respectively). All other simulations from this work, and Paper I, are shown in grey. Three red squares represent axisymmetric mixed field simulations from [@reville2015effect]. Thirteen magenta squares represent 3D non-axisymmetric simulations with $l_{max}=15$ from [@reville2017global] (the average Alfvén radius is computed differently than equation \[averageAlfven\]).[]{data-label="mixed_cases"}](f10.pdf){width="45.00000%"} Equation (\[DQO\_law\]) is adopted, now using all three components, such that the results from these simulations are expected to scale in magnetisation like a twice broken power law. As noted with the dipole-octupole addition, the inclusion of an octupolar component introduces behaviours which will not be accounted for by this formulation, i.e. 
equation (\[DQO\_law\]) is independent of field alignments, etc. We aim to characterise this unaccounted-for physics in terms of the precision attainable when using equation (\[DQO\_law\]). The simulated Alfvén radii are compared against their predicted values in Figure \[mixed\_cases\], along with the other simulations from this work (shown in grey). The three component field combinations have a small dipolar component; therefore the dipolar scaling of the average Alfvén radius is rarely the dominant term in equation (\[DQO\_law\]). The different values of quadrupolar and octupolar field that comprise the remaining field strength govern the average Alfvén radius scaling for the majority of this parameter space. From Figure \[mixed\_cases\], the approximate formulation agrees well with the simulated values, with the largest discrepancies emerging at smaller radii and for anti-aligned cases; see the residual plot below. A $10\%$ divergence from our prediction (dashed lines in both the top and bottom panels of Figure \[mixed\_cases\]) is shown to roughly approximate the effects not taken into account by the simple scaling, with the largest deviation being $18.3\%$. Equation (\[DQO\_law\]) is observed to have increasing accuracy as the Alfvén radii become larger in Figure \[mixed\_cases\]; this is due to the increasing dominance of the dipolar component at large distances. Quantifying the scatter in our residual, we approximate the distribution of deviations as Gaussian, and calculate a standard deviation of $5.1\%$ when evaluating all 160 of our simulated cases. Considering the 32 three component cases, the standard deviation remains of the same order, $5.2\%$, indicating the formulation maintains precision with the inclusion of all three axisymmetric components. The largest deviations from the predicted values belong to the dipole-octupole simulations, as observed within Figures \[Upsilon\_DO\] and \[Up\_dip\_DO\].
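The residual statistic used here can be reproduced schematically. In this sketch the simulated values are those of cases 129-136 from Table \[Parameters\_extra\], but the "predicted" values are placeholders chosen for illustration, not the actual output of equation (\[DQO\_law\]).

```python
import math

# Fractional residuals between simulated and predicted <R_A>/R_*:
# residual = (RA_sim - RA_pred) / RA_sim, as in Figure [mixed_cases].
RA_sim  = [3.1, 3.6, 4.0, 4.4, 4.9, 6.3, 7.9, 9.3]   # cases 129-136
RA_pred = [3.0, 3.5, 4.1, 4.3, 5.0, 6.2, 8.1, 9.2]   # hypothetical predictions

residuals = [(s - p) / s for s, p in zip(RA_sim, RA_pred)]
mean = sum(residuals) / len(residuals)
std  = math.sqrt(sum((x - mean)**2 for x in residuals) / len(residuals))
print(f"standard deviation of residuals: {100 * std:.1f}%")
```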
In both figures, and the residual, the predicted values are shown to underestimate the simulated values for small average Alfvén radii, but with increasing field strength begin to overpredict them. The trends in the residual represent physics not incorporated into our approximate formula, and can be explained as follows. The initial underestimation is due to the sharpness of the regime transition in the broken power law representation; in reality, there is a smoother transition which always lies above the break in the power laws. This significantly impacts the dipole-octupole simulations, as they most often probe this regime, as can be seen within Figure \[Up\_dip\_DO\]. For the dipole-octupole combinations, we propose this transition must be much broader to match the deviations in the residual of Figure \[mixed\_cases\]. Equation (\[DQO\_law\]) represents an approximation to the impact of mixed geometry fields on the prediction of the average Alfvén radius. Our mixed cases are found to be well behaved and can all be predicted by this formulation to within $\approx\pm20\%$ for the most deviant cases, while the majority lie within $\approx\pm5\%$. Analysis of Previous Mixed Fields --------------------------------- Object $\mathcal{R}_{\text{dip}}|\mathcal{R}_{\text{quad}}|\mathcal{R}_{\text{oct}}$ $\Upsilon$ $R_{\text{A}}/R_*|_{sim}$ $R_{\text{A}}/R_*|_{FM18}$ ---------- ------------------------------------------------------------------------------- ------------ --------------------------- ---------------------------- Sun Min $-0.47|0.03|-0.50$ $812$ $6.7$ $6.74$ Sun Max $0.13|0.73|0.14$ $130$ $3.3$ $3.36$ TYC-0486 $-0.10|0.79|-0.11$ $17600$ $7.7$ $7.10$ [@reville2015effect] presented mixed field simulations containing axisymmetric dipole, quadrupole and octupole components, based on observations of the Sun at maximum and minimum of magnetic activity, along with the solar-like star TYC-0486.
To further test our formulation, we use input parameters and results from Table 3 of [@reville2015effect] and predict values for the average Alfvén radii of the mixed cases produced in their work. We use equation (\[DQO\_law\]) with the fit constants from their lower temperature wind ($c_{\text{s}}/v_{\text{esc}}=0.222$) and manipulate the given field strengths into suitable $\mathcal{R}_l$ values. Results can be found in Table \[reville\_pred\], and are shown in Figure \[mixed\_cases\] with red squares. The predicted values for the Alfvén radii agree to better than $10\%$ precision. The largest deviation, $\approx8\%$, is for TYC-0486, which we attribute to the location of the predicted Alfvén radius falling in between regimes, at the break in the power law (almost governed by the dipole component only), where the broken power law approximation has the biggest error. Recent work by [@reville2017global] presented 13 thermally driven wind simulations, in 3D, for the solar wind, using Wilcox Solar Observatory magnetograms spanning the years 1989-2001. These simulations use the spherical harmonic coefficients derived from the magnetograms, up to $l=15$, including the non-axisymmetric modes. We predict the values of the average Alfvén radii using equation (\[DQO\_law\]), allowing the strength of any non-axisymmetric component to be added in quadrature with the axisymmetric component to produce representative strengths for the dipole, quadrupole and octupole components. For example, the dipole field strength is computed as, $$B_*^{l=1}=\sqrt{(B^{l=1}_{m=-1})^2+(B^{l=1}_{m=0})^2+(B^{l=1}_{m=1})^2}.$$ We obtained the field strengths for the dipole, quadrupole and octupole components of the magnetograms used in the simulations of [@reville2017global], ignoring the higher order field components (Réville, private communication 2017). The results from this are shown in Figure \[mixed\_cases\] with magenta squares, and show a good agreement in most cases to the simulated values.
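The quadrature combination above can be sketched as follows; the coefficient values are illustrative only, not magnetogram data.

```python
import math

# Representative axisymmetric strength for a given l, folding in the
# non-axisymmetric m-modes in quadrature: B_*^(l) = sqrt(sum_m (B^l_m)^2).
def representative_strength(modes):
    """modes: iterable of the (2l+1) component strengths for one l."""
    return math.sqrt(sum(b * b for b in modes))

# e.g. a dipole (l = 1) with made-up m = -1, 0, +1 strengths in gauss:
B_dip = representative_strength([0.4, 1.2, 0.3])
print(B_dip)
```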
However, we note that the Alfvén radii tabulated within [@reville2017global] are geometrically-averaged rather than torque-averaged, as used in this work (both scale with wind magnetisation in a similar manner). These values thus represent the average spherical radius of the Alfvén surface in their 3D simulations. The base wind temperature for their simulations is also cooler ($c_{\text{s}}/v_{\text{esc}}\approx0.234$) than in our simulations. Nevertheless, Figure \[mixed\_cases\] shows good agreement with the predicted values; we calculate a standard deviation of $8.4\%$. If we apply an approximate correction to the spherical radii with a factor of 2/3 (due to the angular momentum lever arm being proportional to $r\sin\theta$) and use torque scaling coefficients fit to the lower temperature wind from [@pantolmos2017magnetic], we find that all the magenta simulations fit within the $10\%$ precision, despite the inclusion of the non-axisymmetric components. This suggests equation (\[DQO\_law\]) can be used in cases with non-axisymmetric geometries in combination, but further study is required to test this more fully. Analysis Based on Open Flux =========================== Magnetic Flux Profiles ---------------------- ![image](f11.pdf){width="80.00000%"} ![image](f12.pdf){width="\textwidth"} The behaviour of the stellar wind torque, quantified in the previous sections, is similar to the results found in Paper I. Lower order magnetic components decay more slowly with radius than higher order components. Thus, the lower order component typically dominates the dynamics of the global torque. The higher order component can usually be ignored, unless it has a comparable field strength to the lower order component at the Alfvén radius, which requires the higher order field to dominate at the surface. The radial dependence of the magnetic field is best described by the unsigned magnetic flux.
To calculate this, we evaluate an integral of the magnetic field threading closed spherical shells of area $A$, which produces the unsigned magnetic flux as a function of radial distance, $$\Phi(r)=\oint_r|{\bf B} \cdot d{\bf A}|. \label{phi}$$ For a potential field, as used in the initial conditions, the magnetic flux decays as a simple power law, $$\Phi(r)=\Phi_*\bigg(\frac{R_*}{r}\bigg)^l,$$ where $\Phi_*$ is the surface magnetic flux and $l$ represents the magnetic order of the field, increasing for more complex fields. Thus higher order fields decay radially faster. The radial profiles of the flux in our steady state solutions are shown with thin grey lines in Figures \[QO\_Flux\], \[DO\_Flux\] and \[Mixed\_Flux\]. Each ratio ($\mathcal{R}_{l}$) represents a different combined field geometry, with each grey line having a different field strength. In each figure we include the potential field solution for the flux with a solid black line, produced by equation (\[phi\]), showing the initial magnetic field configuration. No longer is a single power law produced; instead the components interact and produce a varying radial decay. In magnetised winds, the magnetic forces balance the thermal and ram pressures close to the star. Therefore the unsigned flux approximately follows the potential solution. Further from the star the pressure of the wind forces the magnetic field into a nearly radial configuration, beyond which the unsigned flux becomes constant. This constant value is referred to as the open flux, $\Phi_{\text{open}}$ (typically, larger field strengths produce a smaller fraction of open flux to surface flux). In the cases with quadrupole-octupole mixed fields (Figure \[QO\_Flux\]), the individual potential field quadrupole and octupole components are indicated with thick dashed blue and green lines respectively. As with the previous dipole and quadrupole addition, the broken power law behaviour shown in the Alfvén radius formulation is visible.
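The power-law decay of the potential-field flux, and the faster decay of higher order components, can be sketched in a few lines (code units with $R_*=\Phi_*=1$; a minimal illustration, not the simulation's flux integral):

```python
def potential_flux(r, phi_star=1.0, l=1, r_star=1.0):
    """Unsigned flux of a pure degree-l potential field: Phi(r) = Phi_* (R_*/r)^l."""
    return phi_star * (r_star / r) ** l

# At r = 2 R_*, the dipole (l=1) flux has halved, while the octupole (l=3)
# flux has dropped by a factor of 8 -- higher order fields decay radially faster.
print(potential_flux(2.0, l=1))  # 0.5
print(potential_flux(2.0, l=3))  # 0.125
```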
The quadrupole component often represents the most significant contribution to the total flux, as the dipole did within Paper I. The lower right panel of Figure \[QO\_Flux\] shows the relative decay of all the potential fields. Figure \[DO\_Flux\] shows the radial magnetic flux evolution for the dipole-octupole combinations in a similar format as Figure \[QO\_Flux\]. A quantitatively similar behaviour to the dipole-quadrupole and quadrupole-octupole combinations is shown with the anti-aligned field geometries, seen in the bottom row. This explains why previously the anti-aligned cases provided a better fit to the broken power law approximation than the aligned cases. For the cases with an aligned octupole component, the profile of the flux decay is distinctly different. The smooth transition between the two regimes of the broken power law is replaced with a deviation which passes below the dipole component at first, and then asymptotes back. This is caused by the subtraction of the dipole and octupole fields over the equator, which reduces the unsigned flux and has the largest impact at the radial distance where the two components have equal and opposite field strengths. For these two component simulations, the approximate formulation, equation (\[DQO\_law\]), mathematically approximates the radial decay of the magnetic field with two regimes: an octupolar decay close in to the star, followed by a sharp transition to the lower order geometry (dipole or quadrupole), which ignores any influence of the octupolar field. The formulation works well when this is a good approximation, which is typically the case for the dipole-quadrupole, quadrupole-octupole and anti-aligned dipole-octupole cases. The inflection of the magnetic flux for aligned cases creates a discrepancy between our simplification and the physics in the simulation; therefore we observe a scatter in our results between the aligned and anti-aligned cases.
Our formulation is least precise when the inflection occurs near the Alfvén radius, causing the formula to over predict the average Alfvén radius. However, in Section 3.2.4 we show this to be a systematic and measurable effect that does not impact the validity of equation (\[DQO\_law\]). ![image](f13.pdf){width="\textwidth"} For the three component simulations, the behaviour of the dipole-octupolar component alignment is shown to be opposite to that of the previous dipole-octupole addition. Equation (\[DQO\_law\]) more accurately approximates the mixed field cases with an aligned octupole component than with an anti-aligned component. To explore this we show the radial evolution of the magnetic flux in Figure \[Mixed\_Flux\]. The top panel displays the aligned cases with increasing octupolar component and decreasing quadrupolar component, moving to the right. The reduction of flux, or inflection in the flux profile, due to the dipole and octupole addition is only seen to be significant for one case, where the octupole fraction is maximised. In the remaining cases the octupolar fraction is too small to produce a strong reduction in the equatorial flux with the dipole. Hence the well-behaved relation between the simulated aligned cases and the predicted average Alfvén radii in Figure \[mixed\_cases\]. The poorest-fitting cases to equation (\[DQO\_law\]) are the anti-aligned mixed cases shown in Figure \[Mixed\_Flux\] with purple and orange stars. The potential field solutions, shown with solid black lines, sit above the dashed component slopes (most significant for cases 153-160, in orange), showing an increased field strength due to the complex addition of the three components in combination. This is unlike most of the previous combined field cases, which are typically described by either one component or the other; hence the predicted values differ for these cases.
This behaviour is difficult to parametrise within our Alfvén radius approximation as it requires knowledge of the magnetic field evolution in the wind. For this work, we simply show why the simulations deviate from equation (\[DQO\_law\]) and suggest care is taken when using such formulations with dipolar and octupolar components.

Open Flux Torque Relation
-------------------------

  Topology ($l$)         $K_{\text{o}}$   $m_{\text{o}}$
  ---------------------- ---------------- -----------------
  Dipole ($1$)           $0.33\pm0.03$    $0.371\pm0.003$
  Quadrupole ($2$)       $0.63\pm0.02$    $0.281\pm0.003$
  Octupole ($3$)         $0.85\pm0.03$    $0.227\pm0.004$
  All Simulations        $0.46\pm0.03$    $0.329\pm0.004$
                         $K_{\text{c}}$   $m_{\text{c}}$
  Topology Independent   $0.08\pm0.04$    $0.470\pm0.004$

  : Open Flux Fit Parameters to equations (\[UP\_OPEN\_OLD\]) & (\[UP\_OPEN\])[]{data-label="fitValues_open"}

![image](f14.pdf){width="\textwidth"}

The open flux, $\Phi_{\text{open}}$, remains a key parameter in describing the torque scaling for any magnetic geometry. [@reville2015effect] constructs a semi-analytic formulation for the average Alfvén radius using the open flux wind magnetisation, $$\Upsilon_{\text{open}}=\frac{\Phi_{\text{open}}^2/R_*^2}{\dot M v_{\text{esc}}}. \label{upsilon_open}$$ The dependence of the average Alfvén radius on $\Upsilon_{\text{open}}$ is then parametrised as, $$\frac{\langle R_{\text{A}}\rangle}{R_*} = K_{\text{o}}[\Upsilon_{\text{open}}]^{m_{\text{o}}}, \label{UP_OPEN_OLD}$$ where $K_{\text{o}}$ and $m_{\text{o}}$ represent fit parameters to our simulations using this open flux formulation. In Paper I, we show the dependence of these fit parameters on magnetic geometry. We show this again in the left panel of Figure \[Up\_Open\_ALL\].
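Equations (\[upsilon\_open\]) and (\[UP\_OPEN\_OLD\]) are simple to evaluate; a sketch using the dipole fit values from Table \[fitValues\_open\] (all quantities in code units, with $\Phi_{\text{open}}$, $\dot M$ and $v_{\text{esc}}$ assumed given):

```python
def upsilon_open(phi_open, r_star, mdot, v_esc):
    """Open-flux wind magnetisation, equation (upsilon_open)."""
    return (phi_open ** 2 / r_star ** 2) / (mdot * v_esc)

def alfven_radius(phi_open, r_star, mdot, v_esc, K_o=0.33, m_o=0.371):
    """<R_A>/R_* = K_o * Upsilon_open^m_o; defaults are the dipole fit values."""
    return K_o * upsilon_open(phi_open, r_star, mdot, v_esc) ** m_o
```

Doubling the open flux quadruples $\Upsilon_{\text{open}}$, so the lever arm grows by a factor of $4^{m_{\text{o}}}\approx1.7$ for the dipole fit.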
The scatter in average Alfvén radii values for different field geometries is reduced compared with that seen in the $\Upsilon$ parameter spaces (Figures \[Upsilon\_DQ\], \[Upsilon\_QO\] and \[Upsilon\_DO\]), such that a single power law fit is viable, shown with a solid black line. However, better fits are obtained when considering each pure geometry independently, tabulated in Table \[fitValues\_open\]. Work by [@pantolmos2017magnetic] showed how differing wind acceleration affects the scaling relation by using different base wind temperatures to accelerate their winds. Different magnetic topologies produce slightly different wind acceleration from the stellar surface out to the Alfvén radius, due to the varying degree of super-radial expansion of the magnetic field lines [@velli2010solar; @riley2012interplanetary; @reville2016age]. This causes the distinctly different scaling relations in the left panel of Figure \[Up\_Open\_ALL\]. Using the averaged Alfvén speed $\langle v(R_{\text{A}}) \rangle$ at the Alfvén surface, this difference in wind acceleration can be removed (see [@pantolmos2017magnetic]), and the result is shown in the right panel of Figure \[Up\_Open\_ALL\]. The semi-analytic solution from [@pantolmos2017magnetic] is given by, $$\frac{\langle R_{\text{A}}\rangle}{R_*} = K_{\text{c}}\bigg[\Upsilon_{\text{open}}\frac{ v_{\text{esc}}}{\langle v(R_{\text{A}})\rangle}\bigg]^{m_{\text{c}}}, \label{UP_OPEN}$$ where $K_{\text{c}}$ and $m_{\text{c}}$ are fit parameters to this formulation. The fit relationship from [@pantolmos2017magnetic] and a fit to our simulation data (Table \[fitValues\_open\]) are shown with all our simulated cases (both Paper I & II) in the right panel of Figure \[Up\_Open\_ALL\]. A small geometry dependent scatter remains in the right panel, as noted in Paper I. Its cause is an open question, but it may relate to systematic numerical errors due to modelling small scale complex field geometries.
Our fit agrees well with that from [@pantolmos2017magnetic], with a shallower slope due to the inclusion of the higher order geometries, which show this systematic deviation from the dipole simulations.

Field Opening Radius
--------------------

![Average Alfvén radius vs opening radius for all cases. Black dashed lines represent $R_{\text{A}}/R_{\text{o}}=3.3$ and $1.5$, which bound all cases. The simulations show a similar behaviour to that discussed in Paper I, namely a geometry dependent separation, with the octupole geometries having the shallowest slope. []{data-label="RAvsRO"}](f15.pdf){width="50.00000%"}

As in previous works (e.g. [@pantolmos2017magnetic]; Paper I), we define an opening radius $R_{\text{o}}$ using the value of the open flux. The opening radius is defined as the radial distance at which the potential field for a given geometry matches the value of the open flux, i.e. $\Phi(R_{\text{o}})=\Phi_{\text{open}}$. In this way, given the surface magnetic field geometry and the value of $R_{\text{o}}$, the open flux in the wind is recovered and thus the torque can be predicted. However, producing a single relation for predicting the opening radius, and thus the open flux, for our simulations remains an unsolved problem. In Figures \[QO\_Flux\], \[DO\_Flux\] and \[Mixed\_Flux\], the opening radii for all simulations are marked with grey dots and compared in the final panel (coloured to match the respective $\mathcal{R}_l$ value). With increasing field strength, the simulations produce a larger average Alfvén radius and a larger deadzone/opening radius. The Alfvén and opening radii roughly grow together with increasing wind magnetisation, but their actual behaviour is more complex. The field complexity also has an effect on this relationship, with more complex geometries producing smaller opening radii, as the wind pressure is able to open the magnetic field closer to the star.
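For a pure degree-$l$ field the definition $\Phi(R_{\text{o}})=\Phi_{\text{open}}$ can be inverted analytically; a minimal sketch (code units with $R_*=1$, single-component fields only, so the mixed-geometry subtleties discussed above do not apply):

```python
def opening_radius(phi_star, phi_open, l, r_star=1.0):
    """Invert Phi(R_o) = Phi_open for a pure degree-l potential field,
    Phi(r) = Phi_* (R_*/r)^l, giving R_o = R_* (Phi_*/Phi_open)^(1/l)."""
    return r_star * (phi_star / phi_open) ** (1.0 / l)

# For the same ratio of surface to open flux, the more complex field
# opens much closer to the star:
print(opening_radius(8.0, 1.0, l=1))  # 8.0 (dipole)
print(opening_radius(8.0, 1.0, l=3))  # 2.0 (octupole)
```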
We compare the average Alfvén radii and opening radii within Figure \[RAvsRO\]. The simulations containing an octupolar component, in general, show a shallower dependence, which continues the trend from dipole to quadrupole presented in Paper I. Interestingly, the aligned dipole-octupole fields are shown to have reduced values of $R_{\text{o}}$ for the Alfvén radii they produce, compared to the anti-aligned cases, which is a consequence of the reduced flux from the field subtraction over the equator. For these cases the wind pressure is able to open the field much closer to the star, compared to the anti-aligned cases. The relationship between the opening radius and the lever arm for the magnetic braking torque in our wind simulations is evidently complex and interrelated with magnetic geometry, field strength and mass loss rate. The opening radius, as we define it here, is algebraically related to the source surface radius, $r_{\text{ss}}$, used within Potential Field Source Surface (PFSS) models. As such, $R_{\text{o}}$ scales with $r_{\text{ss}}$ for a given field geometry, and its behaviour with increasing field strength should be accounted for within future PFSS models.

Conclusions
===========

This work presents results from 160 new MHD simulations, and 50 previously discussed simulations from Paper I, which we use to disentangle the impacts of complex magnetic field geometries on the spin-down torque produced by magnetised stellar winds. Axisymmetric dipole, quadrupole and octupole fields are used to construct differing combined field geometries. We systematically vary the ratios, $\mathcal{R}_{\text{dip}}$, $\mathcal{R}_{\text{quad}}$ and $\mathcal{R}_{\text{oct}}$, of each field geometry with a range of total field strengths. Here we reinforce results from Paper I.
With simple estimates using realistic magnetic field topologies (obtained from ZDI observations) and representative field strengths and mass loss rates for main sequence stars, the dipole component dominates the spin-down process, irrespective of the higher order components (Finley et al. in prep). The original formulation from [@matt2012magnetic] remains robust in most cases, even for significantly non-dipolar fields. Combined with the work from [@pantolmos2017magnetic], these formulations represent a strong foundation for predicting the stellar wind torques from a variety of cool stars with differing properties. We show the distinctly different changes to topology from our combined primary (dipole, octupole) and secondary (quadrupole) symmetry family fields. “Primary” fields are antisymmetric about the equator, while “secondary” fields are symmetric about the equator [@mcfadden1991reversals; @derosa2012solar]. The addition of primary and secondary fields produces an asymmetric field about the stellar equator, in contrast to the combination of two primary fields, which maintains equatorial symmetry. However, the latter case breaks the degeneracy of the field alignment, producing two different topologies dependent on the relative orientation of the combined geometries. This is not the case for primary and secondary field addition, i.e. dipole-quadrupole and quadrupole-octupole, which produces the same global field reflected about the equator. The magnetic braking torque is shown, as in Paper I, to be largely dependent on the dominant lowest order field component. For observed field geometries this is, in general, dipolar in nature. We parametrise the torque from our mixed field simulations based on the decay of the magnetic field. The average Alfvén radius, $\langle R_{\text{A}} \rangle$, is defined to represent a lever arm, or efficiency factor, for the torque, equation (\[torque\]).
From our simulated cases we produce an approximate formulation for the average Alfvén radius, equation (\[DQO\_law\]), where each $K_{\text{s}}$ and $m_{\text{s}}$ have tabulated values from our simulations in Table \[fitValues\]. These values are temperature dependent, e.g. $\approx1.7$MK for a $1M_{\sun}$ star. In this formulation, the octupole geometry dominates the magnetic field close to the star, then decays radially, leaving the quadrupole governing the radial decay of the field, and finally the quadrupole decays, leaving only the dipole component of the field. In each regime the strength of the field includes any component that is yet to decay away. Using this formula we are able to predict the torque in all of our simulations to $\approx20\%$ accuracy, with the majority predicted to within $\approx5\%$. This is then extended to mixed field simulations presented in [@reville2015effect] and [@reville2017global]. The formulation presented within this work remains an approximation, with a smoother transition between regimes observed in the simulations. This work represents a modification to existing torque formulations, which accounts for combined field geometries in a very general way. A key finding remains that the dipole component is able to account for the majority of the magnetic braking torque in most cases. Thus previous works based on the assumption of the dipolar component being representative of the global field are validated. We note, however, that it is the dipole component of the field, not the total field strength, which enters the torque formulation; therefore it is important to decompose any observed field correctly to avoid miscalculation. In this study, as in the previous, we do not include the effects of rapid rotation or varying coronal temperatures.
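The regime structure described above can be sketched as a piecewise prediction. This is our reading of equation (\[DQO\_law\]) rather than its exact form, and the fit constants below are illustrative placeholders, not the tabulated $K_{\text{s}}$, $m_{\text{s}}$ values:

```python
def broken_power_law_ra(ups, f_dip, f_quad, f_oct, fits):
    """Piecewise estimate of <R_A>/R_* in the spirit of equation (DQO_law).
    In each regime, the field strength includes every component that has
    not yet decayed away; the largest regime prediction sets the lever arm.
    `fits` maps degree l -> (K_s, m_s)."""
    strengths = {
        1: f_dip,                   # far field: only the dipole survives
        2: f_dip + f_quad,          # mid field: quadrupole not yet decayed
        3: f_dip + f_quad + f_oct,  # near field: all components present
    }
    return max(K * (ups * R ** 2) ** m
               for l, R in strengths.items()
               for K, m in [fits[l]])

# Placeholder fit constants (illustrative only, not the tabulated values):
fits = {1: (0.3, 0.37), 2: (0.6, 0.28), 3: (0.9, 0.23)}
```

Taking the maximum reproduces the broken-power-law behaviour: at large wind magnetisation the shallower high-$l$ regimes fall below the dipole branch, which then sets the torque.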
Prescriptions for rotational effects on the three pure geometries studied here are available [@matt2012magnetic; @reville2015effect], along with differing coronal temperatures for dipolar geometries [@pantolmos2017magnetic]. In general, differences in wind driving parameters and physics will introduce more deviation from equation (\[DQO\_law\]); however, it is expected to remain valid. Work remains in modelling the behaviour of non-axisymmetric components on the stellar wind environments surrounding sun-like and low mass stars, and the associated spin-down torques. Observed fields are shown to host varying degrees of non-axisymmetry [e.g. @see2015energy]. Works including more complex coronal magnetic fields, such as the inclusion of magnetic spots [e.g. @cohen2009effect; @garraffo2015dependence], tilted magnetospheres [e.g. @vidotto2010simulations] and ZDI observations [e.g. @vidotto2011understanding; @vidotto2014stellar; @alvarado2016simulating; @nicholson2016temporal; @garraffo2016space; @reville2016age], have shown the impact of specific cases but have yet to fully parametrise the variety of potential magnetic geometries. The relative orientation of some field combinations shown in this work has produced differences in the braking lever arm; therefore we expect the same to be true for non-axisymmetric geometries in combination. Since equation (\[DQO\_law\]) predicts the Alfvén radii from [@reville2017global] (Section 3.3), this suggests that our approximate formulation holds for non-axisymmetric components (using a quadrature addition of $\pm l$ components), but this remains to be validated. We thank Georgios Pantolmos, Victor See, Victor Réville, Sasha Brun and Claudio Zanni for helpful discussions and technical advice. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 682393).
We thank Andrea Mignone and others for the development and maintenance of the PLUTO code. Figures within this work are produced using the python package matplotlib [@hunter2007matplotlib].
---
abstract: 'In this paper, we investigate spectral properties of explosive symmetric Markov processes. Under a condition on their $1$-resolvents, we prove that the $L^1$-semigroups of such Markov processes become compact operators.'
address: 'Mathematical Institute, Tohoku University, Aoba, Sendai 980-8578, Japan'
author:
- Kouhei Matsuura
title: Compactness of semigroups of explosive symmetric Markov processes
---

[^1]

Introduction
============

Let $E$ be a locally compact separable metric space and $\mu$ a positive Radon measure on $E$ with topological full support. Let $X=(\{X_t\}_{t \ge 0}, \{P_x\}_{x \in E}, {\zeta})$ be a $\mu$-symmetric Hunt process on $E$. Here ${\zeta}$ is the life time of $X$. We assume $X$ satisfies the irreducible property, the resolvent strong Feller property and, in addition, the [*tightness*]{} property, namely, for any ${\varepsilon}>0$, there exists a compact subset $K \subset E$ such that $\sup_{x \in E}R_{1}{\mathbf{1}}_{E \setminus K}(x)<{\varepsilon}$. Here $R_1$ is the $1$-resolvent of $X$. The family of symmetric Markov processes with these three properties is called [*Class*]{} (T). In [@T3], the spectral properties of a Markov process in Class (T) are studied. For example, if a $\mu$-symmetric Hunt process $X$ belongs to Class (T), the semigroup becomes a compact operator on $L^{2}(E,\mu)$. This implies the corresponding non-positive self-adjoint operator has only discrete spectrum. Furthermore, it is shown that the eigenfunctions have bounded continuous versions. The self-adjoint operator is extended to linear operators $(\mathcal{L}^p,D(\mathcal{L}^p))$ on $L^{p}(E,\mu)$ for any $1\le p\le \infty$. In [@T2], it is shown that the spectral bounds of the operators $(\mathcal{L}^p,D(\mathcal{L}^p))$ are independent of $p \in [1,\infty]$.
Then, a question arises: [*if a $\mu$-symmetric Hunt process $X$ belongs to Class (T), are the spectra of $(\mathcal{L}^p,D(\mathcal{L}^p))$ independent of $p \in [1,\infty]$?*]{} In this paper, we answer this question by showing that the semigroup of $X$ becomes a compact operator on $L^{1}(E,\mu)$ under some additional conditions. These include the condition that $\lim_{x \to \partial}R_{1}{\mathbf{1}}_{E}(x)=0$, which is more restrictive than Class (T). However, it will be proved that for the symmetric $\alpha$-stable process $X^D$ on an open subset $D \subset {\mathbb{R}}^d$ the following assertions are equivalent (Theorem \[th:4\]):

- for any $1\le p \le \infty$, the semigroup of $X^D$ is a compact operator on $L^{p}(D,m)$;
- the semigroup of $X^D$ is a compact operator on $L^{2}(D,m)$;
- $\lim_{ |x| \to \infty}E_{x}[\tau_D]=0$;
- $\lim_{ |x| \to \infty} \int_{0}^{\infty}e^{-t}P_{x}[\tau_D>t]\,dt=0$.

Here, $m$ is the Lebesgue measure on $D$ and $\tau_D=\inf\{t>0 \mid X_t^D \notin D\}$. The above conditions are equivalent to

- $\lim_{x \in D,\ |x| \to \infty}E_{x}[\tau_D]=0$

provided $D$ is unbounded. In fact, the assertion (iv) is equivalent to the tightness property of $X$. Thus, for the symmetric $\alpha$-stable process $X^D$ on an open subset $D$, the tightness property is equivalent to all assertions in Theorem \[th:4\] mentioned above and implies that the spectra are independent of $p \in [1,\infty]$. The key idea is to approximate the semigroup by those of the part processes, employing Dynkin’s formula (Proposition \[prop:on\]). In [@TTT Theorem 4.2], the authors consider the rotationally symmetric $\alpha$-stable process on ${\mathbb{R}}^d$ with a killing potential $V$. Under a suitable condition on $V$, they proved the tightness property of the killed stable process.
In Example \[ex:kill\] below, we will prove that the semigroup of the process becomes a compact operator on $L^{1}({\mathbb{R}}^d,m)$ under an assumption on $V$ essentially equivalent to that of [@TTT Theorem 4.2]. In Example \[ex:tc\] below, we will consider the time-changed process of the rotationally symmetric $\alpha$-stable process on ${\mathbb{R}}^d$ by the additive functional $A_t=\int_{0}^{t}W(X_s)^{-1}\,ds$. Here $\alpha \in (0,2]$ and $W$ is a nonnegative Borel measurable function on ${\mathbb{R}}^d$. The Revuz measure of $A$ is $W^{-1}m$ and the time-changed process $X^W$ becomes a $W^{-1}m$-symmetric Hunt process on ${\mathbb{R}}^d$. The life time of $X^W$ equals $A_{\infty}$. To investigate the spectral properties of $X^W$ is just to investigate the spectral properties of the operator of the form $\mathcal{L}^W=-W(x)(-\Delta)^{\alpha/2}$ on $L^{2}({\mathbb{R}}^d,W^{-1}m)$. When $W(x)=1+|x|^{\beta}$ and $\alpha=2$, it is shown in [@MS Proposition 2.2] that the spectrum of $\mathcal{L}^W$ is discrete in $L^{2}({\mathbb{R}}^d,W^{-1}m)$ if and only if $\beta>2$. When $\alpha \in (0,2)$, $d>\alpha$, and $W(x)=1+|x|^\beta$ with $\beta \ge 0$, it is shown in [@TTT Proposition 3.3] that the spectrum of $\mathcal{L}^W$ in $L^{2}({\mathbb{R}}^d,W^{-1}m)$ is discrete if and only if $\beta>\alpha$. This is equivalent to saying that the semigroup of $X^W$ is a compact operator on $L^{2}({\mathbb{R}}^d,W^{-1}m)$ if and only if $\beta>\alpha$. In Theorem \[th:tc\] below, we shall prove that if $\beta>\alpha$, the semigroup becomes a compact operator on $L^{1}({\mathbb{R}}^d,W^{-1}m)$.

Main results
============

Let $E$ be a locally compact separable metric space and $\mu$ a positive Radon measure on $E$. Let $E_{\partial }$ be its one-point compactification $E_{\partial }=E \cup \{\partial \}$. A $[-\infty,\infty]$-valued function $u$ on $E$ is extended to a function on $E_{\partial }$ by setting $u(\partial )=0$.
Let $X=(\{X_t\}_{t \ge 0}, \{P_x\}_{x \in E}, {\zeta})$ be a $\mu$-symmetric Hunt process on $E$. The semigroup $\{p_t\}_{t>0}$ and the resolvent $\{R_{\alpha}\}_{\alpha \ge 0}$ are defined as follows: $$\begin{aligned} &p_{t}f(x)=E_{x}[f(X_t)]=E_{x}[f(X_t):t<{\zeta}], \\ &R_{\alpha}f(x)=E_{x}\left[\int_{0}^{{\zeta}}\exp(-\alpha t)f(X_t)\,dt \right], \quad f \in \mathcal{B}_{b}(E),\ x \in E.\end{aligned}$$ Here, $\mathcal{B}_{b}(E)$ is the space of bounded Borel measurable functions on $E$. $E_x$ denotes the expectation with respect to $P_x$. By the symmetry and the Markov property of $\{p_t\}_{t>0}$, $\{p_t\}_{t>0}$ and $\{R_{\alpha}\}_{\alpha>0}$ are canonically extended to operators on $L^{p}(E,\mu)$ for any $1\le p \le \infty$. The extensions are also denoted by $\{p_t\}_{t>0}$ and $\{R_{\alpha}\}_{\alpha>0}$, respectively. For an open subset $U \subset E$, we define $\tau_U$ by $\tau_U=\inf \{t>0 \mid X_t \notin U\}$ with the convention that $\inf \emptyset=\infty$. We denote by $X^U$ the part of $X$ on $U$. Namely, $X^U$ is defined as follows. $$X_t^{U}=\begin{cases} X_t, \quad &t<\tau_U \\ \partial,\quad &t \ge \tau_U. \end{cases}$$ $X^{U}=(\{X_t^{U}\}_{t\ge0}, \{P_{x}\}_{x \in U})$ also becomes a Hunt process on $U$ with life time $\tau_U$. The semigroup $\{p_t^{U}\}_{t>0}$ is identified with $$\begin{aligned} &p_{t}^{U}f(x)=E_{x}[f(X_t^{U})]=E_{x}[f(X_t):t<\tau_U].\end{aligned}$$ $\{p_t^U\}_{t>0}$ is also symmetric with respect to the measure $\mu$ restricted to $U$. $\{p_t^U\}_{t>0}$ and $\{R_\alpha^U\}_{\alpha>0}$ are also extended to operators on $L^{p}(U,\mu)$ for any $1\le p \le \infty$ and the extensions are also denoted by $\{p_t^U\}_{t>0}$ and $\{R_{\alpha}^U\}_{\alpha>0}$, respectively. We now impose the following conditions on the symmetric Markov process $X$.

1. ([**Semigroup strong Feller**]{}) For any $t>0$, $p_t(\mathcal{B}_{b}(E)) \subset C_{b}(E)$, where $C_{b}(E)$ is the space of bounded continuous functions on $E$.

2. ([**Tightness property**]{}) $\lim_{x \to \partial }R_{1}{\mathbf{1}}_{E}(x)=0$.

3. ([**Local $L^\infty$-compactness**]{}) For any $t>0$ and open subset $U \subset E$ with $\mu(U)<\infty$, $p_{t}^{U}$ is a compact operator on $L^{\infty}(U,\mu)$.

\[re\]

- By the condition I, the semigroup kernel of $X$ is absolutely continuous with respect to $\mu$: $$p_{t}(x,dy)=p_{t}(x,y)\,d\mu(y).$$ Furthermore, the resolvent of $X$ is strong Feller: for any $\alpha>0$, $R_{\alpha}(\mathcal{B}_{b}(E)) \subset C_{b}(E)$.

- The conditions I and II lead us to the tightness property in the sense of [@T4; @T3]: for any ${\varepsilon}>0$, there exists a compact subset $K \subset E$ such that $\sup_{x \in E}R_{1}{\mathbf{1}}_{E \setminus K}(x)<{\varepsilon}$. See [@T4 Remark 2.1 (ii)] for details. We denote by $C_{\infty}(E)$ the space of continuous functions on $E$ vanishing at infinity. Under the condition I and the invariance $R_{1}(C_{\infty}(E)) \subset C_{\infty}(E)$ of $X$, the condition II is equivalent to the tightness property in the sense of [@T4; @T3]. See [@T4 Remark 2.1 (iii)] for details. In addition to the conditions I and II, we assume $X$ is irreducible in the sense of [@T4]. Then, by using [@T4 Lemma 2.2 (ii), Lemma 2.6, Corollary 3.8], we can show $\sup_{x \in E}E_{x}[\exp(\lambda {\zeta})]<\infty$ for some $\lambda>0$ and thus $R_{0}{\mathbf{1}}_{E}$ is bounded on $E$. We further see from the strong Feller property and the resolvent equation of $\{R_{\alpha}\}_{\alpha>0}$ that $R_{0}{\mathbf{1}}_{E} \in C_{\infty}(E)$.

- The conditions I and II imply $p_{t}(C_{\infty}(E)) \subset C_{\infty}(E)$ for any $t>0$, and thus $X$ is doubly Feller in the sense of [@CK]. This implies that for any $t>0$ and open $U \subset E$, $p_{t}^{U}$ is strong Feller: $p_{t}^{U}(\mathcal{B}_{b}(U)) \subset C_{b}(U)$. See [@CK Theorem 1.4] for the proof.

- Let $U \subset E$ be an open subset with $\mu(U)<\infty$.
The condition III is satisfied if the semigroup of $X^U$ is ultracontractive: for any $t>0$ and $f \in L^{1}(U,\mu)$, $p_{t}^{U}f$ belongs to $L^{\infty}(U,\mu)$. Indeed, we see from [@Da Theorem 1.6.4] that $p_{t}^{U}$ is a compact operator on $L^{1}(U,\mu)$ and so is on $L^{\infty}(U,\mu)$. In particular, if the semigroup of $X$ is ultracontractive, the condition III is satisfied. We are ready to state the main result of this paper.

\[th:1\] Assume $X$ satisfies the conditions from I to III. Then, for any $t>0$, $p_t$ becomes a compact operator on $L^{\infty}(E,\mu)$.

By the symmetry of $X$, each $p_{t}:L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)$ is regarded as the dual-operator of $p_{t}:L^{1}(E,\mu) \to L^{1}(E,\mu)$. By using Schauder’s theorem, we obtain the next corollary.

\[th:2\] Assume $X$ satisfies the conditions from I to III. Then, for any $t>0$, $p_{t}$ becomes a compact operator on $L^{1}(E,\mu)$.

Let $(\mathcal{L}^p,D(\mathcal{L}^p))$ be the generator of $\{p_t\}_{t>0}$ on $L^{p}(E,\mu)$, $1\le p\le \infty$. By using [@Da Theorem 1.6.4], we can show the next theorem.

\[th:3\] Assume $X$ satisfies the conditions from I to III. Then,

- for any $1\le p \le \infty$ and $t>0$, $p_t$ is a compact operator on $L^{p}(E,\mu)$;
- spectra of $(\mathcal{L}^p,D(\mathcal{L}^p))$ are independent of $p \in [1,\infty]$ and the eigenfunctions of $(\mathcal{L}^2,D(\mathcal{L}^2))$ belong to $L^{p}(E,\mu)$ for any $1 \le p \le \infty$.

Proof of Theorem \[th:1\]
=========================

Since $E$ is a locally compact separable metric space, there exist increasing bounded open subsets $\{U_n\}_{n=1}^{\infty} $ and compact subsets $\{K_n\}_{n=1}^{\infty}$ such that for any $n \in {\mathbb{N}}$, $K_n \subset U_n \subset K_{n+1}$ and $E=\bigcup_{n=1}^{\infty}U_n=\bigcup_{n=1}^{\infty}K_n$. We write $\tau_n$ for $\tau_{U_n}$. The semigroup of the part process of $X$ on $U_n$ is simply denoted by $\{p_{t}^n\}_{t>0}$. The quasi-left continuity of $X$ yields the next lemma.
\[lem:ql\] For any $x \in E$, $P_{x}(\lim_{n \to \infty}\tau_{n}={\zeta})=1$.

The following formula is called Dynkin’s formula.

\[lem:d\] It holds that $$p_{t}f(x)=p_{t}^{U}f(x)+E_{x}[p_{t-\tau_U}f(X_{\tau_U}): \tau_U \le t]$$ for any $x \in E$, $f \in \mathcal{B}_{b}(E)$, $t>0$, and any open subset $U$ of $E$.

It is easy to see that $$p_{t}f(x)=p_{t}^{U}f(x)+E_{x}[f(X_t):\tau_U \le t ].\label{eq:eq1}$$ Let $n \in {\mathbb{N}}$. On $\{\tau_U \le t\}$, we define $s_n$ by $$s_n|_{\{(k-1)/2^n \le t-\tau_U <k/2^n\}}=k/2^n,\quad k \in {\mathbb{N}}.$$ We note that $\lim_{n \to \infty}s_n=t-\tau_U$. By the strong Markov property of $X$, $$\begin{aligned} E_{x}[f(X_{\tau_U+s_n}):\tau_U \le t]&=\sum_{k=1}^{\infty}E_{x}[f(X_{\tau_U+k/2^n}):(k-1)/2^n \le t-\tau_U <k/2^n] \notag \\ &=\sum_{k=1}^{\infty}E_{x}[E_{X_{\tau_U}}[f(X_{k/2^n})]:(k-1)/2^n \le t-\tau_U <k/2^n] \notag \\ &=E_{x}[p_{s_n}f(X_{\tau_U}): \tau_U \le t]\label{eq:eqd}.\end{aligned}$$ Letting $n \to \infty$ in (\[eq:eqd\]), we obtain $$E_{x}[f(X_{t}):\tau_U \le t]=E_{x}[p_{t-\tau_U}f(X_{\tau_U}): \tau_U \le t] \label{eq:eq2}$$ Combining with (\[eq:eq1\]), we complete the proof.

By using Dynkin’s formula and the semigroup strong Feller property, we obtain the next lemma.

\[lem:uni\] Let $K$ be a compact subset of $E$. Then, for any $t>0$ and a nonnegative $f \in \mathcal{B}_{b}(E)$, $$\lim_{n \to \infty}\sup_{x \in K} E_{x}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t]=0.$$

We may assume $K \subset U_1$. By the condition I and Remark \[re\] (iii), both $p_tf$ and $p_{t}^nf$ are continuous on $K$. Hence, we see from Dynkin’s formula (Lemma \[lem:d\]) that $$E_{x}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t]=p_{t}f(x)-p_{t}^{n}f(x) \label{eq:1}$$ is continuous on $K$. For any $t>0$ and $x \in E$, $p_{t}^{n}f(x) \le p_{t}^{n+1}f(x)$. Hence, (LHS) of (\[eq:1\]) is non-increasing in $n$.
By Lemma \[lem:ql\], (LHS) of converges to $$\begin{aligned} &\lim_{n \to \infty}E_{x}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t]=\lim_{n \to \infty}(p_{t}f(x)-p_{t}^{n}f(x)) \\ &=\lim_{n \to \infty}E_{x}[f(X_t):t \ge \tau_n] =E_{x}[f(X_t):t \ge {\zeta}] \\ &=E_{x}[f(\partial ):t \ge {\zeta}]=0,\end{aligned}$$ and the proof is complete by Dini’s theorem. For each $n \in {\mathbb{N}}$ and $t>0$, we define the operator $T_{n,t}$ on $L^{\infty}(E,\mu)$ by $$L^{\infty}(E,\mu) \ni f \mapsto E_{(\cdot)}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n\le t].$$ The operator norm of $T_{n,t}$ is estimated as follows. \[prop:on\] Let $n ,m \in {\mathbb{N}}$ with $m<n$. Then, for any $t>0$, $$\begin{aligned} &\| T_{n,t} \|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)} \\ &\le \sup_{x \in K_m}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}):\tau_n \le t] + (4/t) \times \sup_{x \in E \setminus K_m}E_{x}[{\zeta}].\end{aligned}$$ Here, $\|\cdot\|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)}$ denotes the operator norm from $L^{\infty}(E,\mu)$ to itself. Let $f \in L^{\infty}(E,\mu)$ with $\|f\|_{L^{\infty}(E,\mu)}=1$. 
Then, we have $$\begin{aligned} &\|E_{(\cdot)}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t]\|_{L^{\infty}(E,\mu)} \\ &\le \|f\|_{L^{\infty}(E,\mu)} \times \operatorname*{ess\,sup}_{x \in E}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}):\tau_n \le t]\\ &\le \operatorname*{ess\,sup}_{x \in K_m}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}):\tau_n\le t]+\operatorname*{ess\,sup}_{x \in E \setminus K_m}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}): t/2 < \tau_n \le t] \\ &\quad +\operatorname*{ess\,sup}_{x \in E \setminus K_m}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}): \tau_n \le t/2] \\ &\le \sup_{x \in K_m}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}):\tau_n \le t]+\sup_{x \in E \setminus K_m}P_{x}(t/2<\tau_n) \\ &\quad+ \sup_{x \in E \setminus K_m}\sup_{s \in [t/2,t]}p_{s}{\mathbf{1}}_{E}(x).\end{aligned}$$ Here, $\operatorname*{ess\,sup}$ denotes the essential supremum with respect to $\mu$. Moreover, we see $P_{x}(t/2<\tau_n) \le P_{x}(t/2<{\zeta})\le (2/t) \times E_{x}[{\zeta}]$ and $$p_{s}{\mathbf{1}}_{E}(x)=P_{x}(X_s \in E)=P_{x}(s<{\zeta})\le (1/s)\times E_{x}[{\zeta}].$$ Combining these estimates, we obtain the following estimate $$\begin{aligned} &\| E_{(\cdot)}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t] \|_{L^{\infty}(E,\mu)}\\ &\le \sup_{x \in K_m}E_{x}[p_{t-\tau_n}{\mathbf{1}}_{E}(X_{\tau_n}): \tau_n \le t] + (4/t) \times \sup_{x \in E \setminus K_m}E_{x}[{\zeta}].\end{aligned}$$ Let $X^{(1)}$ be the $1$-subprocess of $X$. Namely, $X^{(1)}=(\{X_{t}^{(1)}\}_{t \ge 0}, \{P_{x}^{(1)}\}_{x \in E}, {\zeta}^{(1)})$ is the $\mu$-symmetric Hunt process on $E$ whose semigroup $\{p_{t}^{(1)}\}_{t \ge 0}$ is given by $$p_{t}^{(1)}f(x):=E_{x}^{(1)}[f(X_{t}^{(1)})]=E_{x}[e^{-t}f(X_t)],\quad t>0,\ x \in E,\ f \in \mathcal{B}_{b}(E),$$ where $E_{x}^{(1)}$ is the expectation with respect to $P_{x}^{(1)}$. For each $n \in {\mathbb{N}}$, we denote by $X^{(1),n}$ the part process of $X^{(1)}$ on $U_n$. The semigroup is denoted by $\{p_{t}^{(1),n}\}_{t \ge 0}$. 
It is easy to see $$p_{t}^{(1)}f(x)-p_{t}^{(1),n}f(x)=e^{-t}(p_{t}f(x)-p_{t}^{n}f(x)) \label{eq:eqcommute}$$ for any $t>0$, $x \in E$, $f \in \mathcal{B}_{b}(E)$, and $n \in {\mathbb{N}}$. For each $n \in {\mathbb{N}}$ and $t>0$, we define the operator $T_{n,t}^{(1)}$ on $L^{\infty}(E,\mu)$ by $$L^{\infty}(E,\mu) \ni f \mapsto E_{(\cdot)}^{(1)}[p_{t-\tau_{n}'}^{(1)}f(X_{\tau_{n}'}^{(1)}): \tau_{n}'\le t],$$ where we define $\tau_{n}'=\inf \{t >0 \mid X_{t}^{(1)} \notin U_n \}$. By using and applying Lemma \[lem:d\] to $X$ and $X^{(1)}$, we have $$\begin{aligned} &T_{n,t}^{(1)}f(x)=p_{t}^{(1)}f(x)-p_{t}^{(1),n}f(x)=e^{-t}(p_{t}f(x)-p_{t}^nf(x)) \notag \\ &=e^{-t} \times E_{x}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t]=e^{-t} \times T_{n,t}f(x) \label{eq:eqcommute2}\end{aligned}$$ for any $t>0$, $n \in {\mathbb{N}}$, $x \in E$ and $f \in \mathcal{B}_{b}(E)$. By using and Lemma \[lem:uni\], we obtain the next lemma. \[lem:commute\] - It holds that $$\lim_{n \to \infty}\sup_{x \in K} T_{n,t}^{(1)}f(x)=0$$ for any compact subset $K \subset E$, $t>0$ and nonnegative $f \in \mathcal{B}_{b}(E)$. - It holds that $$\|T_{n,t}\|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)}=e^{t} \times \|T_{n,t}^{(1)}\|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)}$$ for any $t>0$ and $n \in {\mathbb{N}}$. By the condition III, each $p_{t}^n$ is regarded as a compact operator on $L^{\infty}(E,\mu)$. Therefore it is sufficient to prove $$\lim_{n \to \infty}\| p_{t}-p_{t}^n \|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)}=0.$$ Lemma \[lem:d\] leads us to the fact that, for any $n \in {\mathbb{N}}$ and $t>0$, $$\begin{aligned} &\| p_{t}-p_{t}^n \|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)} \notag \\ &= \sup_{f \in L^{\infty}(E,\mu),\ \|f\|_{L^{\infty}(E,\mu)}=1} \| E_{(\cdot)}[p_{t-\tau_n}f(X_{\tau_n}): \tau_n \le t] \|_{L^{\infty}(E,\mu)} \notag \\ &=\|T_{n,t}\|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)}.
\label{eq:commute3}\end{aligned}$$ It holds that $E_{x}^{(1)}[{\zeta}^{(1)}]=R_{1}{\mathbf{1}}_{E}(x)$ for any $x \in E$. Applying Proposition \[prop:on\] to $X^{(1)}$, we have $$\begin{aligned} &\|T_{n,t}^{(1)}\|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)} \notag \\ &\le \sup_{x \in K_m}E_{x}^{(1)}[p_{t-\tau_{n}'}^{(1)}{\mathbf{1}}_{E}(X_{\tau_{n}'}^{(1)}):\tau_{n}' \le t] + (4/t) \times \sup_{x \in E \setminus K_m}E_{x}^{(1)}[{\zeta}^{(1)}] \notag \\ &= \sup_{x \in K_m}T_{n,t}^{(1)}{\mathbf{1}}_{E}(x) + (4/t) \times \sup_{x \in E \setminus K_m}R_{1}{\mathbf{1}}_{E}(x). \label{eq:commute4}\end{aligned}$$ Combining , and Lemma \[lem:commute\] (ii), we have $$\begin{aligned} &\| p_{t}-p_{t}^n \|_{L^{\infty}(E,\mu) \to L^{\infty}(E,\mu)} \le e^{t} \times \left\{ \sup_{x \in K_m}T_{n,t}^{(1)}{\mathbf{1}}_{E}(x) + (4/t) \times \sup_{x \in E \setminus K_m}R_{1}{\mathbf{1}}_{E}(x) \right\}.\end{aligned}$$ Letting $n \to \infty$ and then $m \to \infty$, the proof is complete by Lemma \[lem:commute\] (i). Examples ======== Let $\alpha \in (0,2]$ and $X$ be the rotationally symmetric $\alpha$-stable process on ${\mathbb{R}}^d$. If $\alpha=2$, $X$ is identified with the $d$-dimensional Brownian motion. Let $D \subset {\mathbb{R}}^d$ be an open subset of ${\mathbb{R}}^d$ and $X^{D}$ be the $\alpha$-stable process on $D$ with Dirichlet boundary condition. Since $X$ is semigroup doubly Feller in the sense of [@CK], the condition I is satisfied for $X^D$. Since the semigroup of $X$ is ultracontractive, so is the semigroup of $X^D$. Thus, the condition III is also satisfied. It is shown in [@KM Lemma 1] that the semigroup of $X^D$ is a compact operator on $L^{2}(D,m)$ if and only if $\lim_{|x| \to \infty}E_{x}[\tau_D]=0$. Hence, by using Theorem \[th:2\] and Theorem \[th:3\], we obtain the next theorem.
\[th:4\] The following are equivalent: - for any $1\le p \le \infty$, the semigroup of $X^D$ is a compact operator on $L^{p}(D,m)$; - the semigroup of $X^D$ is a compact operator on $L^{2}(D,m)$; - $\lim_{ |x| \to \infty}E_{x}[\tau_D]=0$; - $\lim_{ |x| \to \infty} \int_{0}^{\infty}e^{-t}P_{x}[\tau_D>t]\,dt=0$. \[rem:HC\] The semigroup of $X^D$ is not necessarily a Hilbert-Schmidt operator but can be a compact operator on $L^{1}(D,m)$. Namely, there exists an open subset $D \subset {\mathbb{R}}^d$ which satisfies the following conditions: - $\lim_{ |x| \to \infty}E_{x}[\tau_D]=0$; - the trace of the semigroup of $X^D$ is infinite. For example, let $\alpha=2$, $d \in {\mathbb{N}}$, and $$D=\bigcup_{n=1}^{\infty} D_n:=\bigcup_{n=1}^\infty B(e_n,r_n).$$ Here, $B(e_n,r_n) \subset {\mathbb{R}}^d$ denotes the open ball centered at $e_n=(n,0,\cdots,0) \in {\mathbb{R}}^d$ with radius $r_n=\{\log \log(n+3)\}^{-1/2}$. It is easy to see that $r_n < 1$ for $n>24$. We shall check that $D$ satisfies the conditions (D.1) and (D.2). We denote by $p_{t}^{D_n}(x,y)$ the heat kernel density of $X^{D_n}$ with respect to $m$. By [@Da Theorem 1.9.3], $$\begin{aligned} &\int_{D}p_{t}^{D}(x,x)\,dm(x)\ge \sum_{n=25}^{\infty}\int_{D_n}p_{t}^{D_n}(x,x)\,dm(x) \\ &\ge \sum_{n=25}^{\infty} (8 \pi t)^{-d/2} \times r_n \times \exp(-8 \pi^2 dt/r_n^2)\\ &\ge (8 \pi t)^{-d/2}\sum_{n=25}^{\infty} \{ \log(n+3) \}^{-1/2-8 \pi^2 dt}=\infty.\end{aligned}$$ Therefore, the trace of the semigroup of $X^D$ is infinite. On the other hand, for any $x \in D_n$, $$\begin{aligned} &E_{x}[\tau_D]=E_{x}[\tau_{D_n}] \le E_{o}[\tau_{B(|e_n-x|+r_n)}].\end{aligned}$$ Here, $o$ denotes the origin of ${\mathbb{R}}^d$ and $B(|e_n-x|+r_n)$ denotes the open ball centered at the origin with radius $|e_n-x|+r_n$, and $|e_n-x|$ denotes the Euclidean norm of $e_n-x$. Since $|e_n-x| \le r_n$, it holds that $$E_{o}[\tau_{B(|e_n-x|+r_n)}]=(|e_n-x|+r_n)^2/d \le 4r_n^2/d.$$ Since $r_n \to 0$ as $n \to \infty$, $\lim_{ |x| \to \infty}E_{x}[\tau_D]=0$.
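As a quick numerical sanity check of the quantities appearing in the remark (a sketch outside the proof; the sampling stride and the exponent $c$ below are our own choices), one can verify that the radii $r_n = \{\log\log(n+3)\}^{-1/2}$ are decreasing and tend to $0$, so that the exit-time bound $4r_n^2/d$ vanishes, and that partial sums of a series of the form $\sum_n \{\log(n+3)\}^{-c}$ keep growing, consistent with its divergence:

```python
import math

# Radii from the remark: r_n = {log log(n+3)}^(-1/2).
def r(n):
    return math.log(math.log(n + 3)) ** -0.5

# r_n is decreasing and tends to 0 (very slowly), so the exit-time
# bound E_x[tau_D] <= 4 r_n^2 / d vanishes as n -> infinity.
radii = [r(n) for n in range(25, 10**6, 997)]
assert r(25) < 1.0
assert all(a > b for a, b in zip(radii, radii[1:]))
assert r(10**9) < 0.6

# The heat-trace bound reduces to sum_n {log(n+3)}^(-c) with c > 0; such a
# series diverges since (log n)^(-c) >= 1/n for large n, and the partial
# sums below grow roughly like N / (log N)^c.
def partial_sum(N, c=2.0):
    return sum(math.log(n + 3) ** -c for n in range(25, N))

assert partial_sum(10**5) > 10 * partial_sum(10**3)
```

Note how slowly $r_n$ decays: even at $n=10^9$ it is still above $0.5$, which is why the divergence of the heat trace is insensitive to the exact exponent.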
\[ex:kill\] Let $\alpha \in (0,2]$ and $X=(\{X_t\}_{t \ge 0}, \{P_x\}_{x \in {\mathbb{R}}^d},{\zeta})$ be the rotationally symmetric $\alpha$-stable process on ${\mathbb{R}}^d$. The semigroup of $X$ is denoted by $\{p_t\}_{t>0}$. Let $V$ be a positive Borel measurable function on ${\mathbb{R}}^d$ with the following properties: - $V$ is locally bounded. Namely, for any relatively compact open subset $G \subset {\mathbb{R}}^d$, $\sup_{x \in G}V<\infty$; - $\lim_{x \in {\mathbb{R}}^d,\ |x| \to \infty}V(x)=\infty$. We set $A_t=\int_{0}^{t}V(X_s)\,ds$. Let $X^V=(\{X_t\}_{t \ge 0}, \{P_{x}^{V}\}_{x \in {\mathbb{R}}^d},{\zeta})$ be the subprocess of $X$ defined by $dP_{x}^{V}=\exp(-A_t)dP_x$. The semigroup $\{p_{t}^{V}\}_{t>0}$ is identified with $$\begin{aligned} &p_{t}^{V}f(x)=E_{x}[\exp(-A_t)f(X_t)], \quad f \in \mathcal{B}_{b}({\mathbb{R}}^d),\ x \in {\mathbb{R}}^d.\end{aligned}$$ \[thm:kill\] $X^V$ satisfies the conditions from I to III. Before proving Theorem \[thm:kill\], we give a lemma. We denote by $B(n)$ the open ball of ${\mathbb{R}}^d$ centered at the origin $o$ and radius $n \in {\mathbb{N}}$. The semigroup of $X$ is doubly Feller in the sense of [@CK]. Thus, for any $n \in {\mathbb{N}}$, the semigroup of $X^{B(n)}$ is strong Feller. \[lem:locuni\] It holds that $$\lim_{n \to \infty}\sup_{x \in K}P_{x}(\tau_{B(n)} \le t)=0$$ for any $t>0$ and compact subset $K \subset {\mathbb{R}}^d$. Here, $\tau_{B(n)}=\inf\{t>0 \mid X_t \in {\mathbb{R}}^d \setminus B(n)\}$. Without loss of generality, we may assume $K \subset B(1)$. For any $t>0$, $n \in {\mathbb{N}}$, and $x \in {\mathbb{R}}^d$, $$\begin{aligned} P_{x}(\tau_{B(n)} \le t)&={\mathbf{1}}_{{\mathbb{R}}^d}(x)-P_{x}(\tau_{B(n)}>t)\\ &={\mathbf{1}}_{{\mathbb{R}}^d}(x)-p_{t}^{B(n)}{\mathbf{1}}_{{\mathbb{R}}^d}(x).\end{aligned}$$ Thus, we see from the strong Feller property of $X^{B(n)}$ that for any $n \in {\mathbb{N}}$, $P_{\cdot}(\tau_{B(n)} \le t)$ is a continuous function on $K$. 
It follows from the conservativeness of $X$ and Lemma \[lem:ql\] that for any $x \in {\mathbb{R}}^d$, $$\varlimsup_{n \to \infty}P_{x}(\tau_{B(n)} \le t)\le P_{x}({\zeta}\le t)=0$$ and the convergence is non-increasing. The proof is complete by Dini’s theorem. Since the semigroup of $X$ is ultracontractive, so is the semigroup of $X^V$. Hence, the condition III is satisfied. We will check that $X^V$ satisfies the condition I. Let $K$ be a compact subset of ${\mathbb{R}}^d$ and take $n_0 \in {\mathbb{N}}$ such that $K \subset B(n_0)$. Then, for any $s \in (0,1)$ and $n>n_0$, $$\begin{aligned} &\sup_{x \in K}E_{x}[1-\exp(-A_s )]\\ & \le \sup_{x \in K}E_{x}[A_{s {\wedge}\tau_{B(n)}}]+\sup_{x \in K}P_{x}(\tau_{B(n)} \le s) \\ &=\sup_{x \in K}E_{x}\left[\int_{0}^{ s{\wedge}\tau_{B(n)}}V(X_t)\,dt\right]+\sup_{x \in K}P_{x}(\tau_{B(n)} \le 1)=:I_1+I_2. \end{aligned}$$ By the condition (V.1), $\lim_{s \to 0}I_1=0$. By Lemma \[lem:locuni\], $\lim_{n \to \infty}I_2=0$. Thus, $$\lim_{s \to 0}\sup_{x \in K}E_{x}[1-\exp(-A_s )]=0.\label{eq:eqa}$$ Let $t>0$ and $f \in \mathcal{B}_{b}({\mathbb{R}}^d)$. Since the semigroup of $X$ is strong Feller, for any $s \in (0,t)$, $p_{s}p_{t-s}^{V}f$ is continuous on ${\mathbb{R}}^d$. By using , we have $$\begin{aligned} &\varlimsup_{s \to 0}\sup_{x \in K}\left| p_{t}^{V}f(x)-p_{s}p_{t-s}^{V}f(x) \right|\\ &=\varlimsup_{s \to 0}\sup_{x \in K} \left| E_{x}[\exp(-A_t)f(X_{t})]-E_{x}[p_{t-s}^{V}f(X_s)]\right| \\ &=\varlimsup_{s \to 0}\sup_{x \in K}\left| E_{x} [\exp(-A_s)E_{X_{s}}[\exp(-A_{t-s})f(X_{t-s})]]-E_{x}[p_{t-s}^{V}f(X_s)] \right|\\ &\le \|f\|_{L^\infty({\mathbb{R}}^d,m)} \times \varlimsup_{s \to 0}\sup_{x \in K}E_{x}[1-\exp(-A_s )]=0.\end{aligned}$$ This means that the semigroup of $X^V$ is strong Feller and the condition I is satisfied. Finally, we shall show the condition II. Let $x \in {\mathbb{R}}^d$ and $t>0$.
Since $X$ is spatially homogeneous, $$P_{x}^{V}({\zeta}>t)=E_{x}\left[\exp \left(-\int_{0}^{t}V(X_s)\,ds\right)\right]=E_{o}\left[\exp \left(-\int_{0}^{t}V(x+X_s)\,ds\right)\right].$$ It follows from the condition (V.2) that for any $t>0$, $\lim_{x \in {\mathbb{R}}^d,\ |x| \to \infty}P_{x}^{V}({\zeta}>t)=0$. By the positivity of $V$, we can show that $\sup_{x \in {\mathbb{R}}^d}P_{x}^V({\zeta}>t)<1$ for any $t>0$. By the additivity of $\{A_t\}_{t \ge 0}$, $$\begin{aligned} P_{x}^{V}({\zeta}>t+s)&=E_{x}[\exp(-A_{t+s}):t+s<{\zeta}]\\ &=E_{x}[\exp(-A_s)E_{X_s}[\exp(-A_t):t<{\zeta}]:s<{\zeta}] \\ &\le \sup_{x \in {\mathbb{R}}^d}P_{x}^{V}({\zeta}>t) \times \sup_{x \in {\mathbb{R}}^d}P_{x}^{V}({\zeta}>s)\end{aligned}$$ for any $x \in {\mathbb{R}}^d$ and $s,t>0$. Hence, letting $p= \sup_{x \in {\mathbb{R}}^d}P_{x}^{V}({\zeta}>1)<1 $, we have $$\begin{aligned} \sup_{x \in {\mathbb{R}}^d}E_{x}^{V}[{\zeta}]&=\sup_{x \in {\mathbb{R}}^d}\int_{0}^{\infty}P_{x}^{V}({\zeta}>t)\,dt \le \sum_{n=0}^{\infty}\int_{n}^{n+1}\sup_{x \in {\mathbb{R}}^d}P_{x}^{V}({\zeta}>n)\,dt \\ &\le 1+\sum_{n=1}^{\infty}p^{n}=1/(1-p).\end{aligned}$$ We denote by $p_{t}^{V}(x,y)$ the heat kernel density of $X^V$. For any ${\varepsilon}>0$, $$\begin{aligned} E_{x}^{V}[{\zeta}]&\le {\varepsilon}+E_{x}^{V}[E^{V}_{X_{{\varepsilon}}^V}[{\zeta}]] \le {\varepsilon}+\int_{{\mathbb{R}}^d}p_{{\varepsilon}}^{V}(x,y)E^{V}_{y}[{\zeta}]\,dm(y) \notag \\ &\le {\varepsilon}+\frac{1}{1-p}\times P_{x}^{V}({\zeta}>{\varepsilon}). \notag\end{aligned}$$ By letting $x \to \infty$, we have $\varlimsup_{x \in {\mathbb{R}}^d,\ |x| \to \infty}E_{x}^{V}[{\zeta}] \le {\varepsilon}$. Since ${\varepsilon}$ is chosen arbitrarily, the condition II is satisfied. \[ex:tc\] Let $\alpha \in (0,2]$ and $d >\alpha $, and $X=(\{X_t\}_{t \ge 0}, \{P_x\}_{x \in {\mathbb{R}}^d},{\zeta})$ be the rotationally symmetric $\alpha$-stable process on ${\mathbb{R}}^d$. We note that $X$ is transient. 
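The decay of $E_x^V[{\zeta}]$ demanded by condition II can also be illustrated numerically. The following sketch (not part of the proof; the choice $V(x)=1+x^2$, the one-dimensional setting, and all discretization parameters are ours) estimates $E_x^V[{\zeta}]=E_x[\int_0^\infty e^{-A_t}\,dt]$ by an Euler scheme for Brownian motion and checks that the mean lifetime shrinks as the starting point moves away from the origin:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_lifetime(x0, n_paths=2000, dt=0.005, T=10.0):
    """Monte-Carlo estimate of E_x^V[zeta] = E_x[int_0^infty e^{-A_t} dt]
    for 1d Brownian motion and V(x) = 1 + x^2.  Since V >= 1 we have
    A_t >= t, so truncating the time integral at T costs less than e^{-T}."""
    n_steps = int(T / dt)
    x = np.full(n_paths, float(x0))
    A = np.zeros(n_paths)      # A_t = int_0^t V(X_s) ds along each path
    life = np.zeros(n_paths)   # int_0^T e^{-A_t} dt along each path
    for _ in range(n_steps):
        A += (1.0 + x * x) * dt
        life += np.exp(-A) * dt
        x += rng.normal(0.0, np.sqrt(dt), size=n_paths)  # BM increment
    return life.mean()

e0 = mean_lifetime(0.0)
e5 = mean_lifetime(5.0)
assert e0 < 1.0        # V >= 1 forces E_x^V[zeta] <= 1
assert e5 < e0 / 3     # the mean lifetime decays away from the origin
```

Far from the origin the killing rate $V$ is large, so almost all of the Feynman-Kac weight $e^{-A_t}$ is lost immediately, in line with $\lim_{|x|\to\infty}E_x^V[{\zeta}]=0$.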
Let us consider the additive functional $\{A_t\}_{t \ge 0}$ of $X$ defined by $$A_t=\int_{0}^{t}W(X_s)^{-1}\,ds,\quad t \ge 0.$$ Here $W$ is a Borel measurable function on ${\mathbb{R}}^d$ with the condition: $$1+|x|^{\beta} \le W(x) <\infty ,\quad x \in {\mathbb{R}}^d,$$ where $\beta \ge 0$ is a constant. The Revuz measure of $\{A_t\}_{t \ge 0}$ is identified with $W^{-1}\,m$. Denote $\mu=W^{-1}m$. $\mu$ is not necessarily a finite measure on ${\mathbb{R}}^d$. Noting that $A_t$ is continuous and strictly increasing in $t$, we define $X^{\mu}=(\{X_t^{\mu}\}_{t \ge 0}, \{P_x\}_{x \in {\mathbb{R}}^d}, {\zeta}^{\mu})$ by $$X_t^{\mu}=X_{\tau_t},\ t \ge0,\quad \tau=A^{-1},\quad {\zeta}^{\mu}=A_{\infty}.$$ Then, $X^{\mu}$ becomes a $\mu$-symmetric Hunt process on ${\mathbb{R}}^d$. $X^\mu$ is transient because transience is preserved by the time-change transform ([@FOT Theorem 6.2.3]). The semigroup and the resolvent of $X^{\mu}$ are denoted by $\{p_t^{\mu}\}_{t>0}$, $\{R_{\alpha}^{\mu}\}_{\alpha \ge 0}$, respectively. \[th:tc\] If $\beta>\alpha$, $X^{\mu}$ satisfies the conditions from I to III. Before proving Theorem \[th:tc\], we give some notions and lemmas. Let $(\mathcal{E},\mathcal{F})$ be the Dirichlet form of $X$. $(\mathcal{E},\mathcal{F})$ is identified with $$\begin{aligned} \mathcal{E}(f,g)&=\frac{K(d,\alpha)}{2}\int_{{\mathbb{R}}^d}\hat{f}(x)\hat{g}(x)\,|x|^{\alpha}dx,\\ f,g \in \mathcal{F}&=\left\{f \in L^{2}({\mathbb{R}}^d,m) {\mathrel{}\middle|\mathrel{}} \int_{{\mathbb{R}}^d}|\hat{f}(x)|^2\,|x|^{\alpha}dx<\infty \right\}.\end{aligned}$$ Here $\hat{f}$ denotes the Fourier transform of $f$ and $K(d,\alpha)$ is a positive constant. Recall that $m$ is the Lebesgue measure on ${\mathbb{R}}^d$. $m$ is also denoted by $dx$. Let $(\mathcal{E},\mathcal{F}_e)$ denote the extended Dirichlet space of $(\mathcal{E},\mathcal{F})$. Namely, $\mathcal{F}_e$ is the family of Lebesgue measurable functions $f$ on ${\mathbb{R}}^d$ such that $|f|<\infty$ $m$-a.e.
and there exists a sequence $\{f_n\}_{n=1}^{\infty}$ of functions in $\mathcal{F}$ such that $\lim_{n \to \infty}f_n=f$ $m$-a.e. and $\lim_{n,k\to \infty}\mathcal{E}(f_n-f_k,f_n-f_k)=0$. $\{f_n\}_{n=1}^{\infty}$ as above is called an [*approximating sequence*]{} for $f \in \mathcal{F}_e$ and $\mathcal{E}(f,f)$ is defined by $\mathcal{E}(f,f)=\lim_{n \to \infty}\mathcal{E}(f_n,f_n).$ Since the quasi support of $\mu$ is identified with ${\mathbb{R}}^d$, the Dirichlet form $(\mathcal{E}^\mu,\mathcal{F}^\mu)$ of $X^\mu$ is described as follows (see [@FOT Theorem 6.2.1, (6.2.22)] for details). $$\begin{aligned} \mathcal{E}^\mu(f,g)&=\mathcal{E}(f,g), \quad \mathcal{F}^{\mu}= \mathcal{F}_e\cap L^{2}({\mathbb{R}}^d,\mu).\end{aligned}$$ By identifying the Dirichlet form of $X^\mu$, we see that the semigroup of $X^{\mu}$ is ultracontractive. \[lem:uc\] For any $f \in L^{1}({\mathbb{R}}^d,\mu)$ and $t>0$, $p_t^{\mu}f \in L^{\infty}({\mathbb{R}}^d,\mu)$. By [@EG Theorem 1, p138] for $\alpha=2$ and [@DPV Theorem 6.5] for $\alpha \in (0,2)$, there exist positive constants $C>0$ and $q \in (2,\infty)$ such that $$\left\{ \int_{{\mathbb{R}}^d}|f|^{q}\,d\mu \right\}^{2/q} \le \left\{ \int_{{\mathbb{R}}^d}|f|^{q}\,dm \right\}^{2/q} \le C\mathcal{E}(f,f),\quad f \in \mathcal{F}.\label{eq:sobolev}$$ Let $\{f_n\}_{n=1}^{\infty} \subset \mathcal{F}$ be an approximating sequence of $f \in \mathcal{F}^{\mu}=\mathcal{F}_e \cap L^{2}({\mathbb{R}}^d,\mu)$. By using Fatou’s lemma and , we have $$\begin{aligned} \left\{ \int_{{\mathbb{R}}^d}|f|^{q}\,d\mu \right\}^{2/q} \le \varliminf_{n \to \infty} \left\{ \int_{{\mathbb{R}}^d}|f_n|^{q}\,d\mu \right\}^{2/q}\le C\varliminf_{n \to \infty} \mathcal{E}(f_n,f_n)=C\mathcal{E}(f,f).\end{aligned}$$ The proof is complete by [@CKS]. See also [@FOT Theorem 4.2.7].
Let $U$ be an open subset of ${\mathbb{R}}^d$ and $X^{\mu,U}$ be the part of $X^\mu$ on $U$: $$X_t^{\mu,U}=\begin{cases} X_t^{\mu}, \quad &t<T_U:=\inf\{t>0 \mid X_t^{\mu} \notin U\} \\ \partial,\quad &t \ge T_U. \end{cases}$$ The semigroup and the resolvent are denoted by $\{p_{t}^{\mu,U}\}_{t>0}$ and $\{R_{\gamma}^{\mu, U}\}_{\gamma>0}$, respectively. \[lem:str\] Let $f \in \mathcal{B}_{b}(U)$, ${\gamma}>0$, and $U \subset {\mathbb{R}}^d$ be an open subset. Then, $R_{{\gamma}}^{\mu, U}f \in C_{b}({\mathbb{R}}^d)$. In particular, for each ${\gamma}>0$ and $x \in U$, the kernel $R_{{\gamma}}^{\mu,U}(x,\cdot)$ is absolutely continuous with respect to $\mu|_{U}$. It is easy to see that $\lim_{t \to 0}\sup_{x \in {\mathbb{R}}^d}E_{x}[A_t]=0$. This means that $\mu$ is in the Kato class of $X$ in the sense of [@KKT]. Since the resolvent of $X$ is doubly Feller in the sense of [@KKT], by [@KKT Theorem 7.1], the resolvent of $X^{\mu}$ is also doubly Feller. By using [@KKT Theorem 3.1], we complete the proof. The “in particular” part follows from the same argument as in [@FOT Exercise 4.2.1]. Following the arguments in [@AK Theorem 5.1], we strengthen Lemma \[lem:str\] as follows. \[prop:str\] Let $f \in \mathcal{B}_{b}(U)$, $t>0$, and $U \subset {\mathbb{R}}^d$ be a bounded open subset. Then, $p_{t}^{\mu,U}f \in C_{b}(U)$. Step 1: We denote by $(\mathcal{L}_{U}, D(\mathcal{L}_{U}))$ the non-positive generator of $\{p_{t}^{\mu,U}\}$ on $L^{2}(U,\mu)$. By Lemma \[lem:uc\], $-\mathcal{L}_{U}$ has only discrete spectrum. Let $\{\lambda_n\}_{n=1}^{\infty} \subset [0,\infty)$ be the eigenvalues of $-\mathcal{L}_{U}$ written in increasing order repeated according to multiplicity, and let $\{\varphi_n\}_{n=1}^{\infty} \subset D(\mathcal{L}_{U})$ be the corresponding eigenfunctions: $-\mathcal{L}_{U}\varphi_n=\lambda_n \varphi_n$. Then, $\varphi_n=e^{\lambda_n} p_{1}^{\mu,U}\varphi_n \in L^{\infty}({\mathbb{R}}^d,\mu)$ by Lemma \[lem:uc\].
Hence, for each $n \in {\mathbb{N}}$, there exists a bounded measurable version of $\varphi_n$ (still denoted as $\varphi_n$). By Lemma \[lem:str\], for each ${\gamma}>0$ and $n \in {\mathbb{N}}$, $R_{{\gamma}}^{\mu,U}\varphi_n$ is continuous on $U$. Furthermore, we see from [@FOT Theorem 4.2.3] that $$R_{{\gamma}}^{\mu,U}\varphi_n=({\gamma}-\mathcal{L}_U)^{-1}\varphi_n=({\gamma}+\lambda_n)^{-1}\varphi_n\quad \mu\text{-a.e. on }U \label{eq:identity}.$$ Therefore, there exists a (unique) bounded continuous version of $\varphi_n$ (still denoted as $\varphi_n$). By [@Da Theorem 2.1.4], the series $$p_{t}^{\mu,U}(x,y):=\sum_{n=1}^{\infty}e^{-\lambda_n t}\varphi_n(x) \varphi_n(y)\label{eq:expan}$$ converges absolutely and uniformly on $[{\varepsilon},\infty) \times U \times U$ for any ${\varepsilon}>0$. Since $\{\varphi_n\}_{n=1}^{\infty}$ are bounded continuous on $U$, $p_{t}^{\mu,U}(x,y)$ is also continuous on $(0,\infty) \times U \times U$ and defines an integral kernel of $\{ p_{t}^{\mu, U}\}_{t>0}$. Namely, for each $t>0$ and $f \in L^{2}(U,\mu)$, $$p_{t}^{\mu,U}f(x)=\int_{U}p_{t}^{\mu,U}(x,y)f(y)\, d\mu(y)\quad\text{for $\mu$-a.e. }x\in U. \label{eq:expan2}$$ The uniform convergence of the series implies the boundedness of $p_{t}^{\mu,U}(x,y)$ on $[{\varepsilon},\infty) \times U \times U$ for each ${\varepsilon}>0$. We also note that $p_{t}^{\mu,U}(x,y) \ge 0$ by and the fact that $p_{t}^{\mu,U}f \ge 0$ $\mu$-a.e. for any $f \in L^{2}(U,\mu)$ with $f \ge 0$.
Step 2: In this step, we show that for each $x \in U$, ${\gamma}>0$, and $f \in \mathcal{B}_{b}({\mathbb{R}}^d)$, $$\int_{0}^{\infty}e^{-{\gamma}t}E_{x}[f(X_{t}^{\mu,U})]\,dt=\int_{0}^{\infty}e^{-{\gamma}t}\left(\int_{U}p_{t}^{\mu, U}(x,y)f(y)\,d\mu(y)\right)\,dt\label{eq:laplace}.$$ By the absolute continuity of $R_{{\gamma}}^{\mu,U}$ (Lemma \[lem:str\]), for any ${\varepsilon}>0$, $$\begin{aligned} &\int_{{\varepsilon}}^{\infty}e^{-{\gamma}t}E_{x}[f(X_{t}^{\mu,U})]\,dt=e^{-{\gamma}{\varepsilon}}R_{{\gamma}}^{\mu,U}(p_{{\varepsilon}}^{\mu,U}f)(x) \\ &=e^{-{\gamma}{\varepsilon}}R_{{\gamma}}^{\mu,U}\left(\sum_{n=1}^{\infty}e^{-\lambda_n {\varepsilon}}\left(\int_{U}\varphi_n(y)f(y)\,d\mu(y) \right)\varphi_n \right)(x)\\ &=\sum_{n=1}^{\infty}e^{-({\gamma}+\lambda_n){\varepsilon}}({\gamma}+\lambda_n)^{-1}\left(\int_{U}\varphi_n(y)f(y)\,d\mu(y) \right)\varphi_n(x).\end{aligned}$$ Here, we used the identity and the uniform convergence of the series . Set $$a_n^{{\varepsilon}}=e^{-(\gamma+\lambda_n){\varepsilon}}(\gamma+\lambda_n)^{-1}=\int_{{\varepsilon}}^{\infty}e^{-(\gamma+\lambda_n)t}\,dt.$$ Since the series uniformly converges on $[{\varepsilon},\infty) \times U \times U$ for each ${\varepsilon}>0$, $$\begin{aligned} &\int_{{\varepsilon}}^{\infty}e^{-{\gamma}t}E_{x}[f(X_{t}^{\mu,U})]\,dt=\sum_{n=1}^{\infty}a_{n}^{{\varepsilon}}\left(\int_{U}\varphi_n(y)f(y)\,d\mu(y) \right)\varphi_n(x) \notag \\ &=\sum_{n=1}^{\infty}\int_{{\varepsilon}}^{\infty}\int_{U}e^{-\lambda_nt}\varphi_n(y)\varphi_n(x) f(y) \,d\mu(y)e^{-{\gamma}t}\,dt \notag \\ &=\int_{{\varepsilon}}^{\infty}\left(\int_{U}p_{t}^{\mu,U}(x,y)f(y)\,d\mu(y) \right)e^{-\gamma t}\,dt. \label{eq:laplace2}\end{aligned}$$ By letting ${\varepsilon}\to 0$ in , we obtain . Step 3: By and the uniqueness of Laplace transforms, it holds that $$E_{x}[f(X_{t}^{\mu,U})]=\int_{U}p_{t}^{\mu, U}(x,y)f(y)\,d\mu(y)\quad dt\text{-a.e. }t\in(0,\infty) \label{eq:laplace3}$$ for any $x \in E$ and $f \in \mathcal{B}_{b}({\mathbb{R}}^d)$.
If $f$ is bounded continuous on $U$, by the continuity of $X_t^{\mu}$ and $p_{t}^{\mu,U}(x,y)$, holds for any $t \in (0,\infty)$. By using a monotone class argument, we have $$E_{x}[f(X_{t}^{\mu,U})]=\int_{U}p_{t}^{\mu,U}(x,y)f(y)\,d\mu(y)$$ for any $x \in E$, $f \in \mathcal{B}_{b}({\mathbb{R}}^d)$, and $t>0$. By Step 1, for each $t>0$, $p_t^{\mu,U}(x,y)$ is bounded continuous on $U \times U$. Since $\mu(U)<\infty$, the proof is complete by the dominated convergence theorem. \[cor:str\] For any $f \in \mathcal{B}_{b}({\mathbb{R}}^d)$ and $t>0$, $p_{t}^{\mu}f \in C_{b}({\mathbb{R}}^d)$. Let $K$ be a compact subset of ${\mathbb{R}}^d$. For any bounded open subset $U \subset {\mathbb{R}}^d$ with $K \subset U$, $$\begin{aligned} &\sup_{x \in K}|p_{t}^{\mu}f(x)-p_{t}^{\mu,U}f(x)|\le \|f\|_{L^{\infty}(E,\mu)}\times \sup_{x \in K}P_{x}[t \ge T_U]. \end{aligned}$$ By Proposition \[prop:str\], $p_{t}^{\mu,U}f$ is continuous on $K$. By Lemma \[lem:ql\] and Dini’s theorem, $$\lim_{U \nearrow {\mathbb{R}}^d} \sup_{x \in K}P_{x}[t \ge T_U]=0,$$ which completes the proof. By Lemma \[lem:uc\] and Corollary \[cor:str\], the conditions I and III are satisfied. We shall prove the condition II. Let $\gamma_1,\gamma_2>0$ be such that $\gamma_1<d$ and $\gamma_1+\gamma_2>d$. Setting $$J_{\gamma_1,\gamma_2}(x)=\int_{{\mathbb{R}}^d}\frac{dy}{|x-y|^{\gamma_1}(1+|y|^{\gamma_2})}\quad x \in {\mathbb{R}}^d,$$ $J_{\gamma_1,\gamma_2}$ is bounded on ${\mathbb{R}}^d$ and there exist positive constants $c_1,c_2,c_3$ such that $$J_{\gamma_1,\gamma_2}(x)\le \begin{cases} c_1|x|^{d-(\gamma_1+\gamma_2)}, & \text{if } \gamma_2<d, \\ c_2(1+|x|)^{-\gamma_1} \log |x| & \text{if }\gamma_2=d, \\ c_3(1+|x|)^{-\gamma_1} &\text{if } \gamma_2>d \end{cases} \label{eq:eqbounds}$$ for any $x \in {\mathbb{R}}^d$. See [@MS Lemma 6.1] for the bounds . We denote by $G(x,y)$ the Green function of $X$.
It is known that $$G(x,y)=c(d,\alpha) |x-y|^{\alpha-d}.$$ Here $c(d,\alpha)=2^{1-\alpha}\pi^{-d/2}\Gamma((d-\alpha)/2)\Gamma(\alpha/2)^{-1}$ and $\Gamma$ is the gamma function: $$\Gamma(s)=\int_{0}^{\infty}x^{s-1}\exp(-x)\,dx.$$ Recall that $\beta>\alpha$. Since $$\begin{aligned} R_{0}^{\mu}{\mathbf{1}}_{{\mathbb{R}}^d}(x)&=\int_{{\mathbb{R}}^d}G(x,y)\,d\mu(y)\le c(d,\alpha) \int_{{\mathbb{R}}^d}\frac{dy}{|x-y|^{d-\alpha}W(y)} \\ & \le c(d,\alpha) \int_{{\mathbb{R}}^d}\frac{dy}{|x-y|^{d-\alpha}(1+|y|^{\beta})}\\ &=c(d,\alpha)J_{d-\alpha,\beta}(x),\end{aligned}$$ $R_{0}^{\mu}{\mathbf{1}}_{{\mathbb{R}}^d}$ is bounded on ${\mathbb{R}}^d$ and $\lim_{x \in {\mathbb{R}}^d, |x| \to \infty}R_{0}^{\mu}{\mathbf{1}}_{{\mathbb{R}}^d}(x)=0$. [**Acknowledgements.**]{} The author would like to thank the referees for their valuable comments and suggestions which improved the quality of the paper. He would also like to thank Professor Masayoshi Takeda for helpful comments and encouragement. He would also like to thank Professors Mateusz Kwaśnicki, Masanori Hino, and Naotaka Kajino for their helpful comments on Remark \[rem:HC\]. [^1]:
--- abstract: 'We consider a bivariate time series $(X_t,Y_t)$ that is given by a simple linear autoregressive model. Assuming that the equations describing each variable as a linear combination of past values are considered structural equations, there is a clear meaning of how intervening on one particular $X_t$ influences $Y_{t''}$ at later times $t''>t$. In the present work, we describe conditions under which one can define a causal model between variables that are coarse-grained in time, thus admitting statements like ‘setting $X$ to $x$ changes $Y$ in a certain way’ without referring to specific time instances. We show that particularly simple statements follow in the frequency domain, thus providing meaning to interventions on frequencies.' author: - Dominik Janzing - Paul Rubenstein - Bernhard Schölkopf date: 'April 11, 2018' title: 'Structural causal models for macro-variables in time-series' --- Structural equations from dynamical laws ======================================== Structural equations, also called ‘functional causal models’ [@Pearl:00] are a popular and helpful formalization of causal relations. For a causal directed acyclic graph (DAG) with $n$ random variables $X_1,\dots,X_n$ as nodes they read $$\label{eq:se} X_j = f_j(PA_j,N_j),$$ where $PA_j$ denotes the vector of all parent variables and $N_1,\dots,N_n$ are jointly independent noise variables. Provided the variables $X_j$ refer to measurements that are well-localized in time and correspond to time instances $t_1,\dots,t_n$, one then assumes that $PA_j$ contain only those variables $X_i$ for which $t_i<t_j$.[^1] However, thinking of random variables as measurements that refer to well-defined time instances is too restrictive for many purposes. Random variables may, for instance, describe values attained by a quantity when some system is in its equilibrium state [@Dash05; @DGL]. 
In that case, intervening on one quantity may change the stationary joint state, and thus also change the values of other quantities. The authors of [@DGL] show how the differential equations describing the dynamics of the system entail, under fairly restrictive conditions, structural equations relating observables in equilibrium. It should be noted, however, that these structural equations may contain causal cycles [@Spirtes95; @Koster96; @PearlDechter96; @VoortmanDashDruzdzel10; @NIPS_cyclic; @Hyttinen12], i.e., unlike the framework in [@Pearl:00] they do not correspond to a DAG. The work [@Rubenstein16] generalized [@DGL], assaying whether the SCM framework can be extended to model systems that do not converge to an equilibrium (cf. also [@VoortmanDashDruzdzel10]), and what assumptions need to be made on the ODE and interventions so that this is possible. Furthermore, Granger causality [@Granger1969] also yields coarse-grained statements on causality (subject to appropriate assumptions such as causal sufficiency of the time series) by stating that $X$ causes $Y$ without reference to specific time instances. The authors of [@Chalupka16] propose an approach for the identification of macro-level causes and effects from high-dimensional micro-level measurements in a scenario that does not refer to time-series. In the present work, we will elaborate on the following question: suppose we are given a dynamical system that has a clear causal direction when viewed on its elementary time scale. Under which conditions does it also admit a causal model on ‘macro-variables’ that are obtained by coarse-graining variables referring to particular time instances?
Causal models for equilibrium values — a negative result \[sec:negative\] ========================================================================= The work [@DGL] considered deterministic dynamical systems described by ordinary differential equations and showed that, under particular restrictions, the effect of intervening on some of the variables changes the equilibrium state of the other ones in a way that can be expressed by structural equations among time-less variables, which are derived from the underlying differential equations. Inspired by these results, we consider non-deterministic discrete dynamics[^2] as given by autoregressive (AR) models, and ask whether we can define a causal structural equation describing the effect of an intervention on one variable on another one, which, at the same time, reproduces the observed stationary joint distribution. To this end, we consider the following simple AR model of order 1 depicted in Figure \[fig:ar1\]. [Figure \[fig:ar1\] (TikZ source omitted): the causal DAG of the AR(1) model, with edges $X_t \to X_{t+1}$ labeled $\alpha$, $X_t \to Y_{t+1}$ labeled $\beta$, and $Y_t \to Y_{t+1}$ labeled $\gamma$, repeated for all $t$.] We assume a Markov chain evolving in time according to the following equations: $$\begin{aligned} X_{t+1} &= \alpha X_t + E_t^X \label{eq:sex} \\ Y_{t+1} &= \beta X_t + \gamma Y_t + E_t^Y \label{eq:sey}\end{aligned}$$ Let us assume that
$E_t^X,E_t^Y$ are *i.i.d.* $\mathcal{N}(0,1)$ random variables. We assume that the chain goes ‘back forever’ such that $(X_t,Y_t)$ are distributed according to the stationary distribution of the Markov chain, and are jointly normal.[^3] We want to express the stationary distribution and how it changes under (a restricted set of) interventions using a structural causal model. In this example, we consider interventions do($X=x$) and do($Y=y$), by which we refer to the sets of interventions do($X_t=x$) or do($Y_t=y$) for all $t$, respectively. Let us state our goal more explicitly: we want to derive a structural causal model (SCM) with variables $X$ and $Y$ (and perhaps others) such that the stationary distribution of the Markov chain is the same as the observational distribution on $(X,Y)$ implied by the SCM, and such that the stationary distribution of the Markov chain after intervening do($X_t=x$) for all $t$ is the same as the SCM distribution after do($X=x$) (and similarly for interventions on $Y$). This is informally represented by the diagram shown in Figure \[fig:commute\]. We seek a ‘transformation’ $\mathcal{T}$ of the original Markov chain (itself an SCM) such that interventions on *all* $X_t$ can be represented as an intervention on a single variable, and such that the SCM summarises the stationary distributions. (Note that, in fact, as we will see, we cannot express this in general without extra variables as confounders.) The diagram should commute, compare also [@Rubensteinetal17]. We first compute the stationary joint distribution of $(X,Y)$. Since there is no influence of $Y$ on $X$, we can first compute the distribution of $X$ regardless of its causal link to $Y$. 
Using $$X_{t+1} = E_t^X + \alpha E_{t-1}^X + \alpha^2 E_{t-2}^X + \ldots = \sum_{k=0}^\infty \alpha^k E_{t-k}^X,$$ and the above conventions $${\mathbb{E}}[E^X_t] =0 \quad \hbox{ and } \quad {\mathbb{V}}[E^X_t] =1,$$ we then obtain $${\mathbb{E}}[ X_t ] =0$$ and $$\begin{aligned} {\mathbb{V}}[X_t] &=& \mathbb{V}\left[\sum_{k=0}^\infty \alpha^k E_{t-k}^X\right] \\ &=& \sum_{k=0}^\infty \alpha^{2k} \mathbb{V}\left[ E_{t-k}^X\right] \\ &=& \sum_{k=0}^\infty \alpha^{2k}\\ &=& \frac{1}{1-\alpha^2},\end{aligned}$$ where we have used the independence of the noise terms for different $t$. For the expectation of $Y_t$ we get $${\mathbb{E}}[Y_t] = \beta {\mathbb{E}}[X_t ] + \gamma {\mathbb{E}}[Y_t] + {\mathbb{E}}[E_t^Y] = 0.$$ To compute the variance of $Y_t$ we need to sum the variances of all independent noise variables. We obtain (the calculation can be found in the appendix): $${\mathbb{V}}[ Y_t] =\frac{1}{1-\gamma^2} + \frac{\beta^2 (1 + \alpha\gamma)} {(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}$$ For the covariance of $X$ and $Y$ we get (see also appendix): $${{\rm Cov}}\left[X_{t},Y_{t}\right] = \frac{\beta\alpha}{(1-\alpha\gamma)(1-\alpha^2)}.$$ We have thus shown that the stationary joint distribution of $X,Y$ is $$(X,Y) \sim \mathcal{N}(0,C),$$ where the entries of $C$ read $$\begin{aligned} C_{XX} &= \frac{1}{1-\alpha^2}\\ C_{XY} &= \frac{\alpha \beta}{(1-\alpha\gamma)(1-\alpha^2)}\\ C_{YY} &= \frac{1}{1-\gamma^2}\\ & + \frac{\beta^2}{(\alpha - \gamma)^2} \left[ \frac{\alpha^2}{1-\alpha^2} - \frac{2\alpha\gamma}{1-\alpha\gamma} + \frac{\gamma^2}{1-\gamma^2} \right].\end{aligned}$$ Since the DAG in Figure \[fig:ar1\] contains arrows from $X$ to $Y$ and none in the opposite direction, one would like to explain this bivariate joint distribution by the causal DAG in Figure \[fig:commute\] (bottom) where $X$ is causing $Y$. This would imply $P(Y|do(X))=P(Y|X)$. 
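As a sanity check, the stationary covariances above can be verified numerically: the covariance matrix $C$ of the chain satisfies the discrete Lyapunov equation $C = A C A^{\top} + I$, where $A$ is the transition matrix, and this equation can be solved by fixed-point iteration (a sketch in Python; the parameter values are arbitrary choices with $|\alpha|,|\gamma|<1$):

```python
import numpy as np

# Parameters of the AR(1) model X_{t+1} = a X_t + E^X_t, Y_{t+1} = b X_t + c Y_t + E^Y_t
a, b, c = 0.5, 0.8, 0.3  # alpha, beta, gamma (arbitrary, |a|, |c| < 1)

# Closed-form stationary covariances derived in the text
Cxx = 1.0 / (1 - a**2)
Cxy = a * b / ((1 - a * c) * (1 - a**2))
Cyy = 1.0 / (1 - c**2) + b**2 * (1 + a * c) / ((1 - a**2) * (1 - a * c) * (1 - c**2))

# Independent check: solve the discrete Lyapunov equation C = A C A^T + Q
# by fixed-point iteration, where A is the transition matrix and Q = I.
A = np.array([[a, 0.0], [b, c]])
C = np.eye(2)
for _ in range(2000):
    C = A @ C @ A.T + np.eye(2)

assert np.allclose(C, [[Cxx, Cxy], [Cxy, Cyy]], atol=1e-8)
```

The iteration converges geometrically since the spectral radius of $A$ is $\max(|\alpha|,|\gamma|)<1$.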
The conditional $P(Y|X)$ is given by a simple regression which yields $$Y = a X + E_Y,$$ where $E_Y$ is an independent Gaussian noise variable and $a$ is the regression coefficient defined by $$\label{eq:a} a := C_{XY} C_{XX}^{-1} = \frac{\alpha \beta}{1-\alpha \gamma}.$$ We now show that \eqref{eq:a} is not consistent with the effect of interventions on $X$ when the latter are defined by setting all $X_t$ to some value $x$. We refer to this intervention as $do(X=x)$. The corresponding interventional distribution of $Y$ reads: $$\begin{aligned} Y^{\text{do}(X=x)}_{t+1} &= \beta x + \gamma Y^{\text{do}(X=x)}_{t} + E_t^Y \\ &= \beta x + \beta \gamma x + \beta \gamma^2 x + \ldots +\\ & \quad E_t^Y + \gamma E_{t-1}^Y + \gamma^2 E_{t-2}^Y + \ldots\end{aligned}$$ If the distribution is stationary, we have $$Y^{\text{do}(X=x)}_{t} = \frac{\beta x}{1-\gamma} + \sum_{k=0}^{\infty} \gamma^k E_{t-k}^Y.$$ Hence, $$\begin{aligned} Y^{\text{do}(X=x)}_{t} &\sim \mathcal{N}\left(\frac{\beta x}{1-\gamma}, \frac{1}{1-\gamma^2}\right).\end{aligned}$$ Note that this interventional conditional requires a structural equation whose regression coefficient reads $$\label{eq:a'} a' := \frac{\beta}{1-\gamma},$$ which does not coincide with the coefficient $a$ given by \eqref{eq:a}. We now want to provide an intuition about the mismatch between the regression coefficient $a$ that would be needed to explain the observed stationary joint distribution and the coefficient $a'$ describing the true effect of interventions. On the one hand, it should be noted that the conditional of $Y$ given $X$ in the stationary distribution refers to observing only the current value $X_t$. More precisely, $a$ describes the conditional $P(Y_t|X_t)$, that is, how the distribution of the current value $Y_t$ changes after the current value $X_t$ is observed. In contrast, the interventions we have considered above are of the type [*set all variables $X_{t'}$ with $t'\in \mathbb{Z}$ to some fixed value $x$*]{}. 
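The mismatch between $a$ and $a'$ can be reproduced by simulation (a sketch; the parameter values are arbitrary choices): regressing $Y_t$ on $X_t$ in an observational run recovers $a = \alpha\beta/(1-\alpha\gamma)$, while clamping all $X_t$ to a fixed value $x_0$ shifts the mean of $Y$ by $\beta x_0/(1-\gamma)$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, gamma = 0.5, 0.8, 0.3
T, burn = 200_000, 1_000

# Observational run of the chain
x = np.zeros(T); y = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = alpha * x[t] + rng.standard_normal()
    y[t + 1] = beta * x[t] + gamma * y[t] + rng.standard_normal()
x, y = x[burn:], y[burn:]

# Observational regression coefficient a = Cov(X, Y) / Var(X)
a_hat = np.cov(x, y)[0, 1] / np.var(x)
a_theory = alpha * beta / (1 - alpha * gamma)   # ~0.47 for these parameters

# Interventional slope: setting X_t = x0 for all t gives E[Y] = beta*x0/(1-gamma)
x0 = 2.0
y_do, means = 0.0, []
for t in range(T):
    y_do = beta * x0 + gamma * y_do + rng.standard_normal()
    if t >= burn:
        means.append(y_do)
a_prime_theory = beta / (1 - gamma)             # ~1.14 for these parameters

assert abs(a_hat - a_theory) < 0.02
assert abs(np.mean(means) / x0 - a_prime_theory) < 0.02
```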
In other words, the intervention is not localized in time while the observation refers to one specific time instance. This motivates already the idea of the following section: in order to explain the observational stationary joint distribution by an arrow from $X$ to $Y$, we need to define variables that are de-localized in time, because in that case observations and interventions are de-localized in time. Non-local variables and frequency analysis ========================================== To understand the reason for the negative result of the preceding section, we recall that we compared the interventional conditional variable $Y_t^{do(X=x)}$ (where the intervention $do(X=x)$ referred to all variables $X_t$) to the observational conditional $Y|X_t=x_t$ (where the observation referred only to the current value $x_t$). To overcome this mismatch of completely non-local interventions on the one hand and entirely local observations on the other hand, we need to use non-local variables for observations and interventions. This motivates the following. For any functions $f,g \in l^1({\mathbb{Z}})$ we define the random variables[^4] $$X_f:= \sum_{t\in {\mathbb{Z}}} f(t) X_t \quad \hbox{ and } \quad Y_g:= \sum_{t\in {\mathbb{Z}}} g(t) Y_t.$$ One may think of $f,g$ as [*smoothing kernels*]{} (for instance, discretized Gaussians). Then $X_f,Y_g$ may be the result of measurements that perform coarse-graining in time. Alternatively, one could also think of $f,g$ as trigonometric functions $\sin,\cos$ restricted to a certain time window. Then $X_f,Y_g$ are Fourier transforms of the observations in the respective window. In the spirit of [@Rubensteinetal17], $X_f$ and $Y_g$ can be thought of as macro-variables derived from the micro-variables $X_t,Y_t$ by a ‘projection’. We will now show how a simple causal model emerges between the macro-variables provided that we consider [*appropriate pairs*]{} of macro-variables $X_f,Y_g$. 
First, we also define averages over noise variables, which we think of as ‘macro-noise-variables’: $$E^X_f:= \sum_{t\in {\mathbb{Z}}} f(t) E^X_t \quad \hbox{ and } \quad E^Y_g:= \sum_{t\in {\mathbb{Z}}} g(t) E^Y_t.$$ Introducing the shift operator $S$ by $(Sf)(t):=f(t+1)$ we can rewrite \eqref{eq:sex} and \eqref{eq:sey} concisely as $$\begin{aligned} \label{eq:sexshift} X_f &=& X_{\alpha S f} + E^X_f \\ \label{eq:seyshift} Y_g &=& X_{\beta S g} + Y_{\gamma S g} + E^Y_g,\end{aligned}$$ which can be transformed to $$\begin{aligned} X_{(I- \alpha S) f} &=& E^X_f\\ Y_{(I-\gamma S)g} &=& X_{\beta Sg} + E^Y_g,\end{aligned}$$ and, finally, $$\begin{aligned} \label{eq:seXf} X_f &=& E^X_{(I-\alpha S)^{-1} f}\\ \label{eq:seYg} Y_g &=& X_{\beta S (I-\gamma S)^{-1} g} + E^Y_{(I-\gamma S)^{-1} g}. \end{aligned}$$ Note that the inverses can be computed from the formal von Neumann series $$(I-\alpha S)^{-1} = \sum_{j=0}^\infty (\alpha S)^j,$$ and $\sum_{j=1}^\infty (\alpha S)^j f$ converges in $l^1({\mathbb{Z}})$-norm for $\alpha<1$ due to $\|S^jf\|_1 = \|f\|_1$, and likewise for $\gamma<1$. Equation \eqref{eq:seXf} describes how the scalar quantity $X_f$ is generated from a single scalar noise term that, in turn, is derived from a weighted average over local noise terms. Equation \eqref{eq:seYg} describes how the scalar quantity $Y_g$ is generated from the scalar $X_{\beta S (I-\gamma S)^{-1} g}$ and a scalar noise term. Making coarse-graining compatible with the causal model ------------------------------------------------------- The following observation is crucial for the right choice of pairs of macro-variables: whenever we choose $$\label{eq:fgrel} f_g:= \beta S (I-\gamma S)^{-1} g,$$ equations \eqref{eq:seXf} and \eqref{eq:seYg} turn into the simple form $$\begin{aligned} \label{eq:seXfsimple} X_{f_g} &=& E^X_{(I-\alpha S)^{-1} f_g}\\ \label{eq:seYgsimple} Y_g &=& X_{f_g} + E^Y_{(I-\gamma S)^{-1} g}. 
\end{aligned}$$ Equations \eqref{eq:seXfsimple} and \eqref{eq:seYgsimple} describe how the joint distribution of $(X_{f_g},Y_g)$ can be generated: first, generate $X_{f_g}$ from an appropriate average over noise terms. Then, generate $Y_g$ from $X_{f_g}$ plus another averaged noise term. For any $x\in {\mathbb{R}}$, the conditional distribution of $Y_g$, given $X_{f_g}=x$, is therefore identical to the distribution of $x + E^Y_{(I-\gamma S)^{-1} g}$. We now argue that \eqref{eq:seXfsimple} and \eqref{eq:seYgsimple} can even be read as structural equations, that is, they correctly formalize the effect of interventions. To this end, we consider the following class of interventions. For some arbitrary bounded sequence ${{\bf x}}= (x_t)_{t\in {\mathbb{Z}}}$ we look at the effect of setting ${{{\bf X}}}$ to ${{{\bf x}}}$, that is, setting each $X_t$ to $x_t$. Note that this generalizes the intervention $do(X=x)$ considered in section \[sec:negative\] where each $X_t$ is set to the same value $x\in {\mathbb{R}}$. Using the original structural equation \eqref{eq:seyshift} yields $$Y_g^{do({{\bf X}}={{\bf x}})} = \sum_{t\in {\mathbb{Z}}} x_t f_g(t) + Y_{\gamma S g} + E_g^Y.$$ Applying the same transformations as above yields $$Y_g^{do({{\bf X}}={{\bf x}})} = \sum_{t\in {\mathbb{Z}}} x_t f_g(t) + E_{(I-\gamma S)^{-1}g}^Y.$$ Note that the first term on the right hand side is the value attained by the variable $X_{f_g}$. Hence, the only information about the entire intervention that matters for $Y_g$ is the value of $X_{f_g}$. We can thus talk about ‘interventions on $X_{f_g}$’ without further specifying what the intervention does with each single $X_t$ and write $$Y_g^{do(X_{f_g} =x)} = x + E_{(I-\gamma S)^{-1}g}^Y.$$ We have thus shown that \eqref{eq:seXfsimple} and \eqref{eq:seYgsimple} also reproduce the effect of interventions and can thus be read as structural equations for the variable pair $(X_{f_g},Y_g)$. 
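The shift operator and the truncated von Neumann series are straightforward to mirror numerically for kernels with finite support (a sketch; the kernel and $\alpha$ are arbitrary choices, and the window must be large enough to contain all shifted supports):

```python
import numpy as np

alpha = 0.6

def shift(f):
    """(S f)(t) = f(t+1): move every entry one position earlier."""
    return np.concatenate([f[1:], [0.0]])

def resolvent(f, alpha, n_terms=200):
    """Truncated von Neumann series (I - alpha S)^{-1} f = sum_j (alpha S)^j f,
    restricted to the window on which f is given."""
    out = np.zeros_like(f)
    g = f.copy()
    for _ in range(n_terms):
        out += g
        g = alpha * shift(g)
    return out

f = np.array([0.0, 0.0, 1.0, 2.0, 0.5, 0.0])
h = resolvent(f, alpha)

# The defining identity (I - alpha S) h = f holds on the window
assert np.allclose(h - alpha * shift(h), f, atol=1e-12)
# S preserves the l1-norm of kernels whose support stays inside the window
assert np.isclose(np.abs(shift(f)).sum(), np.abs(f).sum())
```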
To be more explicit about the distribution of the noise terms in \eqref{eq:seXfsimple} and \eqref{eq:seYgsimple}, straightforward computation shows the variance to be given by $$\begin{aligned} {\mathbb{V}}(E^X_{(I-\alpha S)^{-1}f}) &= \nonumber \langle (I-\alpha S)^{-1} f,(I-\alpha S)^{-1} f\rangle \\&= \sum_{t\in {\mathbb{Z}}} \sum_{k,k'\geq 0} \alpha^{k+k'} f(t+k') f(t+k). \label{eq:fvar}\end{aligned}$$ Likewise, $$\label{eq:gvar} {\mathbb{V}}(E^Y_{(I-\gamma S)^{-1}g}) = \sum_{t\in {\mathbb{Z}}} \sum_{k,k'\geq 0} \gamma^{k+k'} g(t+k') g(t+k).$$ We have thus shown the following result. \[thm:main\] Whenever $f,g\in l^1 ({\mathbb{Z}})$ are related by \eqref{eq:fgrel}, the AR process in \eqref{eq:sex} and \eqref{eq:sey} entails the scalar structural equations $$\begin{aligned} X_f &=& \tilde{E}^X \label{eq:sextheorem}\\ Y_g &=& X_f + \tilde{E}^Y. \label{eq:seytheorem}\end{aligned}$$ Here, $\tilde{E}^X$ and $\tilde{E}^Y$ are zero mean Gaussians whose variances are given by \eqref{eq:fvar} and \eqref{eq:gvar}, respectively. Equation \eqref{eq:seytheorem} can be read as a functional ‘causal model’ or ‘structural equation’ in the sense that it describes both the observational conditional of $Y_g$ given $X_f$ and the interventional conditional of $Y_g$ given $do(X_f=x)$. In the terminology of [@Rubensteinetal17], the mapping from the entire bivariate process $(X_t,Y_t)_{t\in {\mathbb{Z}}}$ to the macro-variable pair $(X_f,Y_g)$ thus is an [*exact transformation*]{} if $f$ and $g$ are related by \eqref{eq:fgrel}. Revisiting the negative result ------------------------------ Theorem \[thm:main\] provides a simple explanation for our negative result from section \[sec:negative\]. To see this, we recall that we considered the distribution of $Y_t$, which corresponds to the variable $Y_g$ for $g = (\dots,0,1,0,\dots)$, where the number $1$ occurs at some arbitrary position $t$. 
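Before computing this coarse-graining in closed form, the claim can be illustrated numerically: unrolling \eqref{eq:sey} shows that $Y_t$ depends on the past of $X$ through the weight $\beta\gamma^{j-1}$ for $X_{t-j}$, and subtracting this weighted average leaves a residual that is uncorrelated with the $X$-path and has variance $1/(1-\gamma^2)$ (a sketch; parameter values and the kernel truncation length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, gamma = 0.5, 0.8, 0.3
T, burn, lags = 400_000, 200, 40

x = np.zeros(T); y = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = alpha * x[t] + rng.standard_normal()
    y[t + 1] = beta * x[t] + gamma * y[t] + rng.standard_normal()

# Kernel weights for g = delta_t: Y_t loads on X_{t-j} with beta * gamma^(j-1)
w = beta * gamma ** np.arange(lags)          # weights for X_{t-1}, X_{t-2}, ...

ts = np.arange(burn + lags, T)
Xf = np.zeros(len(ts))
for j in range(1, lags + 1):
    Xf += w[j - 1] * x[ts - j]
resid = y[ts] - Xf

# The residual is the geometric average of past Y-noise: variance 1/(1-gamma^2),
# and it is (up to sampling error) uncorrelated with the X-path.
assert abs(resid.var() - 1 / (1 - gamma**2)) < 0.02
assert abs(np.corrcoef(resid, x[ts - 1])[0, 1]) < 0.02
```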
To compute the corresponding $f$ according to \eqref{eq:fgrel}, note that $$\beta S (I-\gamma S)^{-1} = \beta S \sum_{j=0}^\infty (\gamma S)^j = \frac{\beta}{\gamma} \sum_{j=1}^\infty (\gamma S)^j.$$ We thus obtain the following ‘smoothing function’ $f$ that defines the appropriate coarse-graining for $X$ for which we obtain an [*exact transformation*]{} of causal models: $$\label{eq:ffor delta} f =\beta (\dots,\gamma^2,\gamma^1,\gamma^0,0,\dots),$$ where the first non-zero entry from the right is at position $t-1$, in agreement with the intuition that $X_{t-1}$ is the latest value of $X$ that matters for $Y_t$. The intervention $do(X=x)$, where all variables $X_t$ are set to the value $x$, corresponds to setting $X_f$ to $$x \sum_{t\in {\mathbb{Z}}} f(t) = x \frac{\beta}{1-\gamma} = a' x,$$ with $a'$ as in \eqref{eq:a'}. We thus conclude $$Y^{do(X=x)}_t = Y_t^{do(X_f =x \frac{\beta}{1-\gamma})} = Y_t |_{X_f = x \frac{\beta}{1-\gamma}}.$$ In words, to obtain a valid structural equation that formalizes both the interventional and the observational conditional we need to condition on $X_f$ given by \eqref{eq:ffor delta}. Decoupling of interventions in the frequency domain --------------------------------------------------- Despite the simple relation between $f$ and $g$ given by \eqref{eq:fgrel}, it is somehow disturbing that [*different*]{} coarse-grainings are required for $X$ and $Y$. We will now show that $g$ can be chosen such that $f$ is [*almost*]{} the same as $g$ (up to some scalar factor), which leads us to Fourier analysis of the signals. So far, we have thought of $f,g$ as real-valued functions, but for Fourier analysis it is instructive to consider complex waves on some window $[-T,T]$, $$g_{\nu,T} (t):= \left\{\begin{array}{cc} \frac{1}{\sqrt{2T+1}} e^{2\pi i\nu t} &\hbox{ for } t=-T,\dots,T \\ 0 & \hbox{ otherwise.} \end{array} \right.$$ For notational convenience, we also introduce the corresponding vectors $f$ by $$f_{\nu,T} := \beta S (I-\gamma S)^{-1} g_{\nu,T},$$ which are not as simple as $g_{\nu,T}$. 
However, for sufficiently large $T$, the functions $g_{\nu,T}$ are almost eigenvectors of $S$ with eigenvalue $z_\nu :=e^{2\pi i \nu}$ since we have $$\label{eq:almosteigen} \|S^j g_{\nu,T} - z_\nu^j g_{\nu,T} \|_1 \leq \frac{2j}{\sqrt{2T+1}},$$ because the functions differ only at the positions $-T,\dots,-T+j-1$ and $T+1,\dots,T+j$. We show in the appendix that this implies $$\begin{aligned} \label{eq:almosteigenderived} &&\|f_{\nu,T} - \beta z_\nu (1-\gamma z_\nu)^{-1} g_{\nu,T}\|_1\\ &\leq& \nonumber \frac{2}{\sqrt{2T+1}} \frac{|\beta|}{|\gamma|} |(1-\gamma)^{-2}|,\end{aligned}$$ that is, $f_{\nu,T}$ coincides with a complex-valued multiple of $g_{\nu,T}$ up to an error term that decays with $O(1/\sqrt{T})$. Using the abbreviations $$E_{\nu,T}^X := E^X_{(I-\alpha S)^{-1}g_{\nu,T}},$$ and $$E_{\nu,T}^Y := E^Y_{(I-\gamma S)^{-1}g_{\nu,T}},$$ the general structural equations \eqref{eq:seXf} and \eqref{eq:seYg} thus imply the approximate structural equations $$\begin{aligned} \label{eq:asx} X_{g_{\nu,T}} &= E_{\nu,T}^X,\\ Y_{g_{\nu,T}} &\approx \beta e^{2\pi i \nu} (1- \gamma e^{2\pi i \nu})^{-1} X_{g_{\nu,T}} + E_{\nu,T}^Y, \label{eq:asy}\end{aligned}$$ where the error of the approximation is a random variable whose $L^1$-norm is bounded by $$\frac{2}{\sqrt{2T+1}} \frac{|\beta|}{|\gamma|} |(1-\gamma)^{-2}| \cdot {\mathbb{E}}[|X_t|],$$ due to \eqref{eq:almosteigenderived}. We conclude with the interpretation that the structural equations for different frequencies perfectly [*decouple*]{}. That is, intervening on one frequency of $X$ has only effects on the same frequency of $Y$, as a simple result of linearity and time-invariance of the underlying Markov process. To phrase this decoupling over frequencies in a precise way, we show that $ E^X_{\nu,T}$ and $ E^Y_{\nu,T} $ converge in distribution as complex-valued random variables. It is sufficient to show that the variances and covariances of real and imaginary parts of $E_{\nu,T}^Y$ converge because both variables are Gaussians with zero mean. 
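The bound \eqref{eq:almosteigen} is easy to check numerically, since $S^j g_{\nu,T}$ and $z_\nu^j g_{\nu,T}$ differ only at $2j$ boundary positions of magnitude $(2T+1)^{-1/2}$ each (a sketch; $\nu$, $T$ and $j$ are arbitrary choices):

```python
import numpy as np

def g_window(nu, T, pad=50):
    """Complex wave on [-T, T], normalized by 1/sqrt(2T+1), with zero padding."""
    t = np.arange(-T - pad, T + pad + 1)
    g = np.where(np.abs(t) <= T, np.exp(2j * np.pi * nu * t), 0) / np.sqrt(2 * T + 1)
    return g

nu, T, j = 0.1, 500, 7
g = g_window(nu, T)
z = np.exp(2j * np.pi * nu)

# (S^j g)(t) = g(t + j): shift entries j positions earlier (padding absorbs the wrap)
Sjg = np.roll(g, -j)
err = np.abs(Sjg - z**j * g).sum()
bound = 2 * j / np.sqrt(2 * T + 1)
assert err <= bound + 1e-9   # here the bound is in fact attained exactly
```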
We have $$\begin{aligned} &{\mathbb{V}}[{\rm Re} \{E^Y_{\nu,T}\}] = \frac{1}{4}{\mathbb{E}}[ (E^Y_{\nu,T} + \bar{E}^Y_{\nu,T})^2] =\\ & \frac{1}{4}\left( {\mathbb{E}}[ (E^Y_{\nu,T})^2] + {\mathbb{E}}[ (\bar{E}^Y_{\nu,T})^2] + 2 {\mathbb{E}}[ \bar{E}^Y_{\nu,T} E^Y_{\nu,T} ]\right).\end{aligned}$$ We obtain $$\begin{aligned} \label{eq:asymvar} & {\mathbb{E}}[ \bar{E}^Y_{\nu,T} E^Y_{\nu,T} ] = {\mathbb{E}}[ \overline{E^Y_{(I-\gamma S)^{-1} g_{\nu,T}}} E^Y_{(I-\gamma S)^{-1}g_{\nu,T}} ]\\ &= \langle (I-\gamma S)^{-1} g_{\nu,T} , (I-\gamma S)^{-1} g_{\nu,T}\rangle\\ &\to |(1-\gamma z_\nu)^{-1}| ^2. \nonumber \end{aligned}$$ For the first equality, recall that the complex-valued inner product is anti-linear in its first argument. The limit follows from straightforward computations using an analog of \eqref{eq:almosteigen} for the $L^2$ norm, $$\|S^j g_{\nu,T} - z_\nu^j g_{\nu,T} \|_2 \leq \sqrt{\frac{2j}{2T+1}},$$ and further algebra akin to the proof of \eqref{eq:almosteigenderived} in the appendix. Moreover, $$\begin{aligned} & {\mathbb{E}}[ (E^Y_{\nu,T})^2] = {\mathbb{E}}[ E^Y_{(I-\gamma S)^{-1} g_{\nu,T}} E^Y_{(I-\gamma S)^{-1}g_{\nu,T}} ] =\\ & \langle \overline{(I-\gamma S)^{-1} g_{\nu,T} }, (I-\gamma S)^{-1} g_{\nu,T}\rangle. \end{aligned}$$ Hence, ${\mathbb{E}}[(E^Y_{\nu,T})^2]$ and its conjugate ${\mathbb{E}}[\overline{(E^Y_{\nu,T})^2}]$ converge to zero for all $\nu\neq 0$ because $$\begin{aligned} &\lim_{T\to\infty} \langle \overline{(I-\gamma S)^{-1} g_{\nu,T} }, (I-\gamma S)^{-1} g_{\nu,T}\rangle \label{eq:withS}\\ &= (1-\gamma z_\nu)^{-2} \lim_{T\to\infty} \langle \overline{g_{\nu,T}}, g_{\nu,T} \rangle \label{eq:withz} \\ & = (1-\gamma z_\nu)^{-2} \lim_{T\to\infty} \sum_t g^2_{\nu,T} (t) = 0, \end{aligned}$$ where equality of \eqref{eq:withS} and \eqref{eq:withz} follows from \eqref{eq:almosteigen}. Hence, only the mixed term containing both $E^Y_{\nu,T}$ and its conjugate survives the limit. 
We conclude $$\lim_{T\to \infty} {\mathbb{V}} [{\rm Re} \{E^Y_{\nu,T}\}] = \frac{1}{2} |(1-\gamma z_\nu)^{-1}|^2.$$ Similarly, we can show that ${\mathbb{V}} [{\rm Im} \{E^Y_{\nu,T}\}] $ converges to the same value. Moreover, $$\lim_{T\to\infty} {{\rm Cov}}[{\rm Re} \{E^Y_{\nu,T}\} ,{\rm Im} \{E^Y_{\nu,T}\}] =0,$$ because straightforward computation shows that the covariance contains no mixed terms. Hence we can define $$E_\nu^Y := \lim_{T\to \infty} E^Y_{\nu,T},$$ with convergence in distribution. Real and imaginary parts are uncorrelated and their variances read: $$\label{eq:asymvarY} {\mathbb{V}}[{\rm Re}\{E^Y_\nu\}] = {\mathbb{V}}[{\rm Im} \{E^Y_{\nu}\}] = \frac{1}{2} |(1-\gamma z_\nu)^{-1}|^2.$$ We conclude that the distribution of $E^Y_\nu$ is an isotropic Gaussian in the complex plane, whose components have variance $\frac{1}{2} |(1-\gamma z_\nu)^{-1}|^2$. To compute the limit of $E^X_{\nu,T}$ we proceed similarly and observe $$\begin{aligned} & {\mathbb{E}}[ \bar{E}^X_{\nu,T} E^X_{\nu,T} ] = {\mathbb{E}}[ \overline{E^X_{(I-\alpha S)^{-1} g_{\nu,T}}} E^X_{(I-\alpha S)^{-1}g_{\nu,T}} ]\nonumber \\ &\to |(1-\alpha z_\nu)^{-1}| ^2.\end{aligned}$$ We can therefore define the random variable $E^X_\nu:= \lim_{T\to \infty} E^X_{\nu,T}$ (again with convergence in distribution) with $$\label{eq:asymvarX} {\mathbb{V}}[{\rm Re}\{E^X_\nu\}] = {\mathbb{V}}[{\rm Im} \{E^X_{\nu}\}]= \frac{1}{2} \left|(1-\alpha z_\nu)^{-1}\right|^2.$$ We can phrase these findings by the asymptotic structural equations $$\begin{aligned} X_\nu &=E^X_\nu\\ Y_\nu &= \beta e^{2\pi i \nu} (1-\gamma e^{2\pi i \nu})^{-1} X_\nu + E^Y_\nu,\end{aligned}$$ where the variances of real and imaginary parts of $E_\nu^X$ and $E_\nu^Y$ are given by \eqref{eq:asymvarX} and \eqref{eq:asymvarY}, respectively. 
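The variance of the frequency components of $X$ is, as one would expect, the spectral density of an AR(1) process with unit innovations, $1/|1-\alpha e^{2\pi i\nu}|^2$; this standard fact can be checked by averaging windowed Fourier components over independent runs (a sketch; the window length, frequency, and $\alpha$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.5
T, reps, burn = 1024, 400, 200
nu_idx = 100                      # nu = nu_idx / T

# Collect the normalized Fourier component of X at frequency nu over many runs
acc = []
wave = np.exp(2j * np.pi * nu_idx / T * np.arange(T)) / np.sqrt(T)
for _ in range(reps):
    x = np.zeros(T + burn)
    for t in range(len(x) - 1):
        x[t + 1] = alpha * x[t] + rng.standard_normal()
    acc.append(np.sum(wave * x[burn:]))
acc = np.array(acc)

z = np.exp(2j * np.pi * nu_idx / T)
spec = 1.0 / abs(1 - alpha * z) ** 2   # AR(1) spectral density at nu

# Total variance of the complex component approaches the spectral density
emp = acc.real.var() + acc.imag.var()
assert abs(emp / spec - 1) < 0.25
```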
Conclusion ========== We have studied bivariate time series $(X_t,Y_t)$ given by linear autoregressive models, and described conditions under which one can define a causal model between variables that are coarse-grained in time, thus admitting statements like ‘setting $X$ to $x$ changes $Y$ in a certain way’ without referring to specific time instances. We show that particularly elegant statements follow in the frequency domain, thus providing meaning to interventions on frequencies. [10]{} J. Pearl. [*Causality: Models, Reasoning, and Inference*]{}. Cambridge University Press, 2000. D. Dash. Restructuring dynamic causal systems in equilibrium. In [*Proc. Uncertainty in Artificial Intelligence*]{}, 2005. J. Mooij, D. Janzing, and B. Schölkopf. From ordinary differential equations to structural causal models: the deterministic case. In A. Nicholson and P. Smyth, editors, [*Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI)*]{}, pages 440–448, Oregon, USA, 2013. AUAI Press Corvallis. P. Spirtes. Directed cyclic graphical representations of feedback models. In [*Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence*]{}, UAI’95, pages 491–498, San Francisco, CA, USA, 1995. Morgan Kaufmann Publishers Inc. J. T. A. Koster. Markov properties of nonrecursive causal models. , 24(5):2148–2177, 1996. J. Pearl and R. Dechter. Identifying independence in causal graphs with feedback. In [*Proceedings of the Twelfth Annual Conference on Uncertainty in Artificial Intelligence (UAI-96)*]{}, pages 420–426, 1996. M. Voortman, D. Dash, and M. Druzdzel. Learning why things change: The difference-based causality learner. In [*Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence (UAI)*]{}, pages 641–650, Corvallis, Oregon, 2010. AUAI Press. J. Mooij, D. Janzing, B. Schölkopf, and T. Heskes. Causal discovery with cyclic additive noise models. 
In [*Advances in Neural Information Processing Systems 24, Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS 2011)*]{}, pages 639–647. Curran Associates, Red Hook, NY, USA, 2011. A. Hyttinen, F. Eberhardt, and P.O. Hoyer. Learning linear cyclic causal models with latent variables. , 13:3387–3439, November 2012. P. K. Rubenstein, S. Bongers, J. M. Mooij, and B. Sch[ö]{}lkopf. From deterministic [ODEs]{} to dynamic structural causal models. , 1608.08028, 2016. C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. , 37(3):424–438, July 1969. K. Chalupka, P. Perona, and F. Eberhardt. Multi-level cause-effect systems. In [*Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS)*]{}, JMLR: W&CP volume 41, 2016. N. Hansen and A. Sokol. Causal interpretation of stochastic differential equations. , 19:1–24, 2014. P. K. Rubenstein, S. Weichwald, S. Bongers, J. M. Mooij, D. Janzing, M. Grosse-Wentrup, and B. Sch[ö]{}lkopf. Causal consistency of structural equation models. In [*Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (UAI)*]{}, 2017. 
Appendix ======== Covariance of $X$ and $Y$ in the stationary distribution -------------------------------------------------------- $$\begin{aligned} &{{\rm Cov}}\left[X_{t+1},Y_{t+1}\right] \\ &= {{\rm Cov}}\left[\sum_{k=0}^\infty \alpha^k E_{t-k}^X, \sum_{k=0}^\infty \gamma^k E_{t-k}^Y+ \beta \sum_{k=0}^\infty \frac{\alpha^{k+1}-\gamma^{k+1}}{\alpha-\gamma} E_{t-1-k}^X \right] \\ &= {{\rm Cov}}\left[\sum_{k=0}^\infty \alpha^{k+1} E_{t-k-1}^X, \beta \sum_{k=0}^\infty \frac{\alpha^{k+1}-\gamma^{k+1}}{\alpha-\gamma} E_{t-1-k}^X \right] \\ &= \frac{\beta}{\alpha-\gamma}\sum_{k=0}^\infty \alpha^{2k+2}-\alpha^{k+1}\gamma^{k+1} \\ &= \frac{\beta}{\alpha-\gamma} \left[ \frac{\alpha^2}{1-\alpha^2} - \frac{\alpha\gamma}{1-\alpha\gamma} \right] \\ &= \frac{\beta}{\alpha-\gamma} \left[ \frac{\alpha^2(1-\alpha\gamma) - \alpha\gamma(1-\alpha^2)}{(1-\alpha^2)(1-\alpha\gamma)} \right] \\ &= \frac{\alpha\beta}{(1-\alpha^2)(1-\alpha\gamma)} \end{aligned}$$ Approximate eigenvalues of functions of $S$ ------------------------------------------- Using \eqref{eq:almosteigen} we obtain $$\begin{aligned} &&\left\|\beta S (I-\gamma S)^{-1} g_{\nu,T} - \beta z_\nu (1-\gamma z_\nu)^{-1} g_{\nu,T}\right\|_1\\ &=& \left\| \frac{\beta}{\gamma} \sum_{j=1}^\infty (\gamma S)^j g_{\nu,T} - \frac{\beta}{\gamma} \sum_{j=1}^\infty (\gamma z_\nu)^j g_{\nu,T}\right\|_1\\ &\leq & \frac{|\beta|}{|\gamma|} \left|\sum_{j=1}^\infty \frac{2j \gamma^j}{\sqrt{2T+1}}\right| \leq \frac{2}{\sqrt{2T+1}} \frac{|\beta|}{|\gamma|} \left|\frac{d}{d\gamma} \sum_{j=0}^\infty \gamma^j\right| \\ &=& \frac{2}{\sqrt{2T+1}} \frac{|\beta|}{|\gamma|} \left|\frac{d}{d\gamma} (1-\gamma)^{-1}\right| = \frac{2}{\sqrt{2T+1}} \frac{|\beta|}{|\gamma|} |(1-\gamma)^{-2}|.\end{aligned}$$ Variance of $Y$ in the stationary distribution ---------------------------------------------- $$\begin{aligned} &\mathbb{V}\left[Y_{t+1} \right] \\ &= \mathbb{V}\left[\sum_{k=0}^\infty \gamma^k E_{t-k}^Y \right] + \mathbb{V}\left[ \beta \sum_{k=0}^\infty \frac{\alpha^{k+1}-\gamma^{k+1}}{\alpha-\gamma} E_{t-1-k}^X \right] \\ &= \sum_{k=0}^\infty 
\gamma^{2k}+ \frac{\beta^2}{(\alpha-\gamma)^2} \sum_{k=0}^\infty \alpha^{2k+2} -2\alpha^{k+1}\gamma^{k+1}+\gamma^{2k+2} \\ &= \frac{1}{1-\gamma^2} + \frac{\beta^2}{(\alpha-\gamma)^2} \left[\frac{\alpha^2}{1-\alpha^2} - \frac{2\alpha\gamma}{1-\alpha\gamma} + \frac{\gamma^2}{1-\gamma^2} \right] \\ & = \frac{1}{1-\gamma^2} + \frac{\beta^2}{(\alpha-\gamma)^2}\times\\ & \left[ \frac{\alpha^2(1-\alpha\gamma)(1-\gamma^2) -2\alpha\gamma(1-\alpha^2)(1-\gamma^2)}{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)} \right.\\ & \left.+\frac{\gamma^2(1-\alpha^2)(1-\alpha\gamma)}{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}\right] \\ & = \frac{1}{1-\gamma^2} + \frac{\beta^2}{(\alpha-\gamma)^2}\times \\ &\left[ \frac{\alpha^2 - \alpha^3\gamma - \alpha^2\gamma^2 + \alpha^3\gamma^3 - 2\alpha\gamma + 2\alpha^3\gamma }{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}\right.\\ &\left. +\frac{2\alpha\gamma^3 - 2\alpha^3\gamma^3 + \gamma^2 - \alpha^2\gamma^2 -\alpha\gamma^3 + \alpha^3\gamma^3}{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}\right] \\ & = \frac{1}{1-\gamma^2} + \frac{\beta^2}{(\alpha-\gamma)^2} \left[ \frac{\alpha^2 + \alpha^3\gamma - 2\alpha\gamma + \alpha\gamma^3 + \gamma^2 - 2\alpha^2\gamma^2 }{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}\right] \\ & = \frac{1}{1-\gamma^2} + \frac{\beta^2}{(\alpha-\gamma)^2} \left[ \frac{\alpha^2 - 2\alpha\gamma + \gamma^2 + \alpha\gamma(\alpha^2 - 2\alpha\gamma + \gamma^2) }{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}\right] \\ & = \frac{1}{1-\gamma^2} + \frac{\beta^2}{(\alpha-\gamma)^2} \left[ \frac{(\alpha-\gamma)^2(1 + \alpha\gamma)}{(1-\alpha^2)(1-\alpha\gamma)(1-\gamma^2)}\right] \end{aligned}$$ [^1]: Einstein’s special theory of relativity implies even stronger constraints: If the measurements are also localized in space and correspond to spatial coordinates $z_j$, then $(t_j -t_i)c \geq \|z_j-z_i\|$ 
where $c$ denotes the speed of light. That is, $X_j$ needs to be contained in the forward light cone of all $PA_j$. [^2]: Note that [@hansen2014] considers interventions in stochastic differential equations and provides conditions under which they can be seen as limits of interventions in the ‘discretized version’, such as autoregressive models. [^3]: Note that we could assume initial conditions $X_0$ and $Y_0$, in which case the joint distribution of $(X_t,Y_t)$ would not be independent of $t$, but would converge to the stationary distribution. [^4]: Since ${\mathbb{E}}[|X_t|]$ and ${\mathbb{E}}[|Y_t|]$ exist, the series converge in $L^1$ norm of the underlying probability space, hence they converge in probability by Markov’s inequality.
--- author: - 'G. Brunetti[^1], L. Rudnick, R. Cassano, P. Mazzotta, J.Donnert, K. Dolag' date: 'Received...; accepted...' title: 'Is the Sunyaev-Zeldovich effect responsible for the observed steepening in the spectrum of the Coma radio halo ?' --- Introduction ============ Giant radio halos are diffuse synchrotron radio sources of Mpc-scale in galaxy clusters. They are observed in about $1/3$ of the X–ray luminous galaxy clusters (e.g. Giovannini et al. 1999; Kempner & Sarazin 2001; Cassano et al. 2008; Venturi et al. 2008), in a clear connection with dynamically disturbed systems (Buote 2001, Govoni et al. 2004, Cassano et al. 2010, 2013). The connection between cluster mergers and radio halos suggests that these sources trace the hierarchical cluster assembly and probe the dissipation of gravitational energy during the dark-matter-driven mergers that lead to the formation of clusters. However, the details of the physical mechanisms responsible for the generation of synchrotron halos are still unclear. Two main scenarios are advanced for the origin of these sources. One, the “reacceleration” model, is based on the idea that seed relativistic electrons are re-accelerated by turbulence produced during merger events (Brunetti et al. 2001; Petrosian 2001; Fujita et al. 2003; Cassano & Brunetti 2005; Brunetti & Lazarian 2007; Beresnyak et al. 2013). The alternative is that cosmic ray electrons (CRe) are injected by inelastic collisions between long-lived relativistic protons (CRp) and thermal proton-targets in the intra-cluster-medium (ICM) (the “hadronic” model, Dennison 1980; Blasi & Colafrancesco 1999; Pfrommer & Enßlin 2004, PE04; Keshet & Loeb 2010). More general calculations attempt to combine the two mechanisms by modeling the reacceleration of relativistic protons and their secondary electrons (e.g. Brunetti & Blasi 2005; Brunetti & Lazarian 2011). 
Concerns with a purely hadronic origin of radio halos arise from the large energy content of CRp that is necessary to explain radio halos with very steep spectra (Brunetti 2004; PE04; Brunetti et al. 2008; Macario et al. 2010) and from the non-detection of galaxy clusters with radio halos in the $\gamma$–rays (Ackermann et al. 2010; Jeltema & Profumo 2011; Brunetti et al. 2012). Turbulent reacceleration of CRe and the production of secondary cosmic rays via hadronic collisions leave different imprints in the spectra of the cluster non-thermal emission. In re-acceleration models, the radio spectra are determined by the low efficiencies of the acceleration mechanism, allowing the acceleration of CRe only up to energies of several GeV, where radiative (synchrotron and inverse Compton, IC) losses become stronger and quench the acceleration process (Schlickeiser et al. 1987; Brunetti et al. 2001; Petrosian 2001). This effect leads to spectra that may steepen at high radio frequencies and a variety of spectral shapes and slopes. Pure hadronic models, by contrast, have spectra with fairly smooth power-laws extending to very high frequencies. Significant spectral breaks at radio frequencies would, in the hadronic model, imply an unnaturally strong break in the spectrum of the primary CRp at energies of 10–100 GeV (e.g. Blasi 2001). Although the spectra of radio halos are difficult to measure, with only a handful of good-quality data-sets available to date, studying them provides one of the most promising ways to shed light on the origin of these sources (e.g., Orru’ et al. 2007; Kale & Dwarakanath 2010; van Weeren et al. 2012; Venturi et al. 2013; Macario et al. 2013), especially in view of the new generation of low–frequency radio telescopes, such as the Low Frequency Array (LOFAR) and the Murchison Widefield Array (MWA). The radio halo in the Coma cluster is unique in that its spectrum has been measured over almost two decades in frequency (Fig. 1). 
The observed steepening at high frequencies was used to argue against a hadronic origin of the halo (Schlickeiser et al. 1987; Blasi 2001; Brunetti et al. 2001; Petrosian 2001). Enßlin (2002, E02) first proposed that such a steepening may be caused (at least in part) by the thermal Sunyaev-Zeldovich (SZ) decrement seen in the direction of the cluster. This possibility was further elaborated by PE04, who concluded that a simple synchrotron power-law spectrum, with spectral index $\alpha = 1.05-1.25$, provides a fair description of the observed radio spectrum after taking into account the negative flux bowl due to the SZ effect (see also Enßlin et al. 2011). Other efforts to model the SZ-effect in the Coma cluster, however, concluded that this effect is not significant (Reimer et al. 2004; Donnert et al. 2010). All these attempts used the $\beta$–model spatial distribution of the ICM from X–ray observations[^2], but the differences between the two lines of thought come essentially from the apertures used to estimate the SZ-decrement. The Planck satellite has recently obtained resolved and precise measurements of the SZ signal in the Coma cluster (Planck Collaboration X (2012), PIPX). These data strongly reduce the degrees of freedom and uncertainties in the modeling of the SZ contribution and allow a straightforward test of the spectral steepening hypothesis. This is the aim of the present paper. A $\Lambda$CDM cosmology ($H_{o}=70\,\rm km\,\rm s^{-1}\,\rm Mpc^{-1}$, $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$) is adopted. Thermal Sunyaev-Zeldovich effect ================================ The inverse Compton interaction of photons of the Cosmic Microwave Background (CMB) with hot electrons in the ICM modifies the CMB spectrum, with respect to a blackbody, by a quantity $\delta F_{SZ}$ that depends on frequency (see e.g., Carlstrom et al. 2002 for a review).
For low optical depths this quantity measured on an aperture $\Omega$ is $$\delta F_{SZ}(\nu,\Omega) = 2 {{(k_B T_{cmb})^3}\over{(h c)^{2}}} f(\nu) \int_{\Omega} y d\Omega^{\prime} \, ,$$ where the cluster Compton parameter is $$y = {{\sigma_T}\over{m_e c^2}} \int P_e(r) dl \, ,$$ where $l$ is the line of sight, and the spectral distortion at the frequency $\nu$ is $$f(\nu) = {{x_{\nu}^4 e^{x_{\nu}}}\over{(e^{x_{\nu}} -1)^2}} \left[ {x_{\nu} \over{\tanh(x_{\nu}/2)}} -4 \right] \stackrel{x_{\nu} \rightarrow 0}{\longrightarrow} -2 x_{\nu}^2 \, ,$$ where $x_{\nu}= {\rm h} \nu/k_B T_{cmb}$. At GHz frequencies $x_{\nu} \ll 1$, and the SZ effect from the ICM in galaxy clusters creates a negative flux bowl on the scale of the clusters. The flux from radio sources measured by single-dish radio telescopes is that in excess of the background “zero” level that is determined on larger scales. Once discrete sources are subtracted appropriately, the resulting flux observed from radio halos in an aperture $\Omega_H$ is $F_{obs}(\nu, \Omega_H) = F(\nu, \Omega_H) + \delta F_{SZ}(\nu, \Omega_H)$, where $F$ denotes the intrinsic emission. The (negative) SZ-decrement signal, $\delta F_{SZ}$, will become comparable to the flux from the halo at sufficiently high frequencies, leading to an apparent steepening of the observed spectrum (Liang et al. 2000; E02). ![Observed spectrum of the Coma radio halo (black data-points) and the power law ($\alpha = 1.22 \pm 0.04$) that best fits the spectrum at [*lower*]{} frequencies, $\nu \leq 1.4$ GHz (solid line). Empty points are the data with the SZ correction added; the SZ-decrement is calculated from Planck measurements by adopting an aperture radius $= 0.48 R_{500}$. The dotted line is a synchrotron model assuming a broken power-law energy distribution of the emitting electrons, $N(E) \propto E^{-\delta}$ and $\propto E^{-(\delta+\Delta \delta)}$, at lower and higher energies, with $\delta =2.4$ and $\Delta \delta = 1.6$.
The dashed line is a synchrotron model assuming a power-law ($\delta =2.5$) with a high energy cut-off, which occurs for instance in (homogeneous) reacceleration models (see e.g. Schlickeiser et al. 1987). The red point is the flux measured in the high-sensitivity observations at 330 MHz by Brown & Rudnick (2011) within an aperture radius =$0.48 R_{500}$.[]{data-label="Fig.Lr_RH"}](spectrum_SZ_0p48_ALLNEW2.ps){width="42.50000%"} Implications for the Coma radio halo ==================================== In this section we use Planck measurements to calculate the modification of the spectrum of the Coma radio halo at higher frequencies that is caused by the SZ-decrement. Because this correction depends on the aperture over which it is calculated, as well as on the varying radio sensitivities and observed halo sizes in the literature, we used three different approaches to ensure that our results are robust. - \(i) We used the halo spectrum from the data compilation of Thierbach et al. (2003, T03) and calculated the SZ decrement in a reference aperture radius that is consistent with those used at the various frequencies in the T03 compilation; - \(ii) we used the subset of radio observations for which the actual radial profiles are available to derive the halo spectrum in two fixed aperture radii and self-consistently calculated the SZ decrement within these radii; - \(iii) we used the correlation between radio brightness and y-parameter discovered by PIPX to derive the ratio of radio–halo flux and negative SZ flux as a function of frequency and almost independently of the aperture radius. ![image](combined2.ps){width="95.00000%"} Before using these different methods for evaluating the SZ contribution, we first discuss the main uncertainties in the radio halo spectrum, in particular the scale size of the halo, which is essential for the application of these methods. Fig.
1 shows the observed spectrum from the T03 compilation of data–points[^3] and the best fit to the data at lower ($\leq 1.4$ GHz) frequencies, a power law with $\alpha = 1.22 \pm 0.04$. We note that the scatter of the data is larger than expected from the quoted errors (the best fit has $\chi^2_{\rm red}=2.65$), which suggests systematics that are likely due to the different sensitivities and the variety of telescope systems used over the past 30 years. To investigate the effects of sensitivity on the observed halo sizes and fluxes we returned to the original papers used for Fig. 1. Wherever the information was available, we show in Fig. 2 (left) the sensitivities to diffuse emission of the observations at the different frequencies (sensitivities are all scaled to 0.3 GHz using $\alpha =1.22$). If we focus on the data at $\nu \leq 1.4$ GHz (points 1-11), the comparison between Fig. 1 and 2 (left) immediately shows that fluxes from higher-sensitivity observations (points 4, 6, 8, 10) are systematically biased high (about 50% higher) with respect to those derived from the less sensitive observations (points 1, 5, 9, and 11). This is because better sensitivities allow one to trace the diffuse emission of the halo to larger distances from the cluster center. This is clear from Fig. 2 (right), where we show the diameters of the radio halo that we measured from the 3$\sigma$ contours in the published radio maps as a function of the sensitivity. The measured 3$\sigma$ diameter ($=2 \sqrt{R_{min} R_{max}}$, where $R_{min}$ and $R_{max}$ are the smallest and largest radii in the map) increases with sensitivity in a way that depends on the brightness distribution of the radio halo: in Fig. 2 (right) we also show the behavior expected assuming a radio-brightness distribution of the form $I_R \propto (1 + (r/r_c)^2)^{-k}$, where $r_c$ is the X-ray core radius of Coma, $r_c = 10.5$ arcmin, and where $k \simeq 0.7\,(3\beta - 1/2)$, as inferred by Govoni et al. (2001) ($\beta = 0.75$).
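The sensitivity dependence of the measured halo size can be sketched numerically. The snippet below is an illustrative reconstruction (not the authors' fitting code): it inverts the brightness model quoted above, $I_R \propto (1+(r/r_c)^2)^{-k}$, at a given surface-brightness cut; the threshold values in the loop are hypothetical.

```python
import math

# Brightness model of the Coma halo quoted in the text:
# I_R(r) = I_0 * (1 + (r/r_c)^2)^(-k), with r_c = 10.5 arcmin and
# k ~ 0.7 * (3*beta - 1/2) for beta = 0.75 (Govoni et al. 2001).
r_c = 10.5                    # X-ray core radius [arcmin]
beta = 0.75
k = 0.7 * (3 * beta - 0.5)    # -> 1.225

def diameter_3sigma(threshold):
    """Diameter (arcmin) at which I_R/I_0 drops to `threshold`.

    Inverts threshold = (1 + (r/r_c)^2)^(-k) for r.
    """
    r = r_c * math.sqrt(threshold ** (-1.0 / k) - 1.0)
    return 2.0 * r

# Better sensitivity (lower 3-sigma threshold) -> larger measured halo size.
for thr in (0.30, 0.10, 0.03):   # hypothetical relative 3-sigma levels
    print(f"I/I0 = {thr:.2f}  ->  diameter = {diameter_3sigma(thr):5.1f} arcmin")
```

Deeper maps (lower thresholds) recover systematically larger diameters, which is the trend shown in Fig. 2 (right).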
This provides a good description of the data; note that points 5 and 11, with the poorest sensitivities, are both approximated well by the model, but come from very different frequencies, 151 MHz and 1.38 GHz, respectively. With a good estimate of the sensitivity dependence, we can now address a key concern, i.e., whether the observed spectral steepening is caused by a lack of sensitivity. For point 12, at 2.675 GHz, the sensitivity is the same as for a number of points from 139 MHz to 1.4 GHz; therefore, it is expected to appear on the power-law line (Fig. 1) if there is no spectral steepening. However, it is a factor of $\sim$2 below this line, supporting the case for actual spectral steepening, whether intrinsic or caused by the SZ effect. We also note that the flux of the Coma radio relic measured at 2.675 GHz by T03 is indeed consistent with the power-law shape derived from other observations in the range 151 MHz–4.75 GHz (T03, Fig. 8), adding confidence in the steepening measured by T03 for the halo. The effect of sensitivity on the highest-frequency point, 13, can be best assessed by comparison with points 1, 5 and 11, from 30 MHz, 151 MHz and 1380 MHz, respectively, which have a similar sensitivity. The flux of points 1, 5 and 11 is $\sim$20% lower than the power-law line in Fig. 1. A drop of 20% for point 13 does not explain its flux, which is more than a factor of $\sim$3 below the power-law line. We conclude that the spectral steepening at 2.675 GHz and 4.85 GHz is not an observational effect caused by the sensitivity of the observations at the different frequencies. Systematics in the spectrum might also come from the different procedures used to subtract discrete sources in the halo region. The subtraction becomes critical at higher frequencies where most of the flux in the halo region is associated with discrete sources.
The flux at 4.85 GHz of the two brightest sources in the halo region (NGC 4869 and 4874) is known from VLA observations (175 mJy in total, see T03 and Kim 1994), while an additional 55 mJy is attributed by T03 to discrete sources in the halo region by using the master source list of Kim (1994), which also reports spectral indices. Most of the spectral information in Kim (1994) was obtained at frequencies $\leq 1.6$ GHz; therefore, the flux of discrete sources may be overestimated if their spectral indices actually steepen at higher frequencies (T03); this would bias the resulting flux of the halo low. We investigated this effect by assuming the typical spectral steepenings between low and high (1.4 – 4.8 GHz) frequencies that are measured for samples of radio sources, $\Delta \alpha \leq 0.15$ (Kuehr et al. 1981; Helmboldt et al. 2008), and found that the increment of the radio halo flux with respect to that from T03 is $< 30$%, which is lower than the error reported in Fig. 1. Again we conclude that the spectral steepening observed at high frequencies is not driven by obvious observational biases. As a final step we studied the possible effect caused by the different adopted flux calibration scales, because spectral measurements were taken over a long time-span. The best modern values are presented by Perley & Butler (2013). Using their values for the calibrator 3C 286, we corrected the T03 2675 and 4850 MHz data. This resulted in a reduction of their fluxes by 3.5% and 2.1%, enhancing the steepening by a tiny amount, well within the errors. Most of the other papers summarized by T03 do not explicitly describe their flux scale, although the most common scale in use was that of Baars et al. (1977). Down to 326 MHz, which is the lowest frequency studied by Perley & Butler, the other literature values would decrease by an average of 1.6%, with the largest decrease being 2.5% for the 608.5 MHz point, producing no significant change in the spectrum[^4].
We proceed to examine the potential contribution of the SZ effect. Method (i) ---------- Our first method of evaluating the SZ contribution involves using the fluxes as reported in the literature and adopting an appropriate “reference” aperture radius for the correction. Fig. 3 (left) shows the “equivalent diameter” (defined below) of the regions used by the respective authors for flux measurements, wherever these are available. We deliberately biased our calculations toward higher SZ contributions by using the (larger) apertures at $\nu \leq 1.4$ GHz; at these low frequencies, the SZ decrement is negligible and cannot artificially make the halo look smaller. In some cases the fluxes were taken from boxes or from complex (non-circular) regions that encompass the scale where diffuse emission is detected; therefore, we define the “equivalent diameter” as $2 \sqrt{A / \pi}$, $A$ being the area from which fluxes in the literature were extracted. In Fig. 3 we see that for the high-sensitivity points 4, 6, and 10 (139, 330, and 1400 MHz), the aperture radii are consistent with a value of 23 arcmin ($=0.48 R_{500}$, where $R_{500}=47$ arcmin $=1.3$ Mpc, PIPX). The low-sensitivity point 1 at 30 MHz is consistent with this as well. These points with consistent aperture define the power law seen in Figure 1; the low-sensitivity point 5, at 151 MHz, has a smaller size and is indeed $\sim 20$% below the power law in Fig. 1. We therefore adopted 23 arcmin as the radius over which to calculate the SZ correction[^5]. With this 23 arcmin ($0.48 R_{500}$) radius, we corrected for the SZ decrement on scales significantly larger than the observed halo sizes at the high frequencies (points 12 and 13), and thus we eventually overestimated the actual effect. From Planck measurements of the Compton parameter $y$ (PIPX) and from Eqs. 1–3, we found $\delta F_{SZ}= -(1.08 \pm 0.05) (\nu / GHz)^2$ mJy. This is about four times lower than that used by PE04 (their eq.
74) and about 30% higher than that used in Donnert et al. (2010). The main reason for the discrepancy with PE04 is that they calculated the SZ-signal by integrating Eq.1 over an excessively large aperture, $R = 5 h_{50}^{-1}$Mpc (in Eq.1 $\Omega = 2\pi \int_0^{R/D_A} d\theta$, $D_A$ the angular distance). This corresponds to $\sim 2.75 R_{500}$, which indeed is 5-6 times higher than the radius of the radio halo in the observations of the T03 compilation (Fig.3 left) [^6]. Fig. 1 shows the high-frequency data-points with the SZ correction added (filled symbols). We conclude that the SZ-decrement is not important: an SZ-decrement about four times larger than that measured by Planck would be needed to reconcile the data-points at higher frequency with a power-law spectrum. With this result, we must conclude that the spectral break above 1.4 GHz is intrinsic to the source. We attempted to constrain the magnitude of the break by fitting the data-points in Fig. 1, corrected for the SZ-decrement (adding a flux $= 1.08 (\nu / GHz)^2$ mJy), with a broken power-law with slopes at lower and higher frequencies $\alpha$ and $\alpha + \Delta \alpha$. An F-test analysis, using the statistical errors from Fig.1, constrains $\Delta \alpha > 0.45$ (90% confidence level), implying a corresponding break in the spectrum of the emitting electrons $\Delta \delta > 0.9$. A strong break is also clear from Fig. 1, which shows synchrotron models assuming both a break $\Delta \delta = 1.6$ (dotted line) and a high-energy cut-off (dashed line). A strong break (or cut-off) in the spectrum of the emitting electrons is thus required by the current data even after correcting for the SZ effect. Method (ii) ----------- As a second approach, we directly used the brightness profiles of the radio halo at different frequencies wherever these were available. This avoided problems associated with measurements at better sensitivities, which can be integrated to larger radii. 
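As a numerical illustration of how small these corrections are, the sketch below evaluates the spectral distortion $f(\nu)$ of Eq. 3 (checking its Rayleigh-Jeans limit $-2x_\nu^2$) and applies the Planck-based decrement from method (i), $-1.08\,(\nu/{\rm GHz})^2$ mJy, to a power-law halo spectrum with $\alpha = 1.22$. The 1.4 GHz flux normalization (`f0_mjy`) is a hypothetical round value used only for illustration, not a measured flux.

```python
import math

h = 6.62607015e-34      # Planck constant [J s]
k_B = 1.380649e-23      # Boltzmann constant [J/K]
T_cmb = 2.725           # CMB temperature [K]

def f_sz(nu_hz):
    """Spectral distortion f(nu) of Eq. 3; negative at GHz frequencies."""
    x = h * nu_hz / (k_B * T_cmb)
    return x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * (x / math.tanh(x / 2.0) - 4.0)

# Rayleigh-Jeans limit: f(nu) -> -2 x^2 for x << 1
x_ghz = h * 1e9 / (k_B * T_cmb)
print(f_sz(1e9), -2.0 * x_ghz**2)   # nearly identical, both negative

def delta_f_sz(nu_ghz):
    """Planck-derived decrement within the 0.48 R_500 aperture [mJy]."""
    return -1.08 * nu_ghz**2

def halo_flux(nu_ghz, alpha=1.22, f0_mjy=700.0):
    """Power-law halo flux [mJy]; f0_mjy at 1.4 GHz is a hypothetical value."""
    return f0_mjy * (nu_ghz / 1.4) ** (-alpha)

for nu in (0.33, 1.4, 2.675, 4.85):
    frac = delta_f_sz(nu) / halo_flux(nu)
    print(f"{nu:5.3f} GHz: SZ correction = {100 * frac:6.2f}% of the halo flux")
```

The correction grows roughly as $\nu^{2+\alpha}$ relative to the halo flux, so it is negligible below 1.4 GHz and reaches only the ten-to-twenty percent level at 4.85 GHz for this assumed normalization, far from the factor $\sim$3 deficit of the highest-frequency data-point.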
In particular, we constructed new spectra by integrating the lower-frequency fluxes only out to radii of 17.5 and 13 arcmin (0.37 and 0.27 $R_{500}$), which correspond to the effective radii at 2.675 and 4.85 GHz, respectively (as in T03). The calculated SZ decrements from the Planck observations are $-(0.82 \pm 0.03)(\nu / GHz)^2$ and $-(0.50\pm 0.02)(\nu / GHz)^2$ mJy, respectively. In Fig. 4 we show the reconstructed spectrum (SZ-corrected) within these radii. As in method (i), we found that the SZ decrement is negligible and that a correction 4-5 times larger than that implied by Planck measurements would be needed to reconcile the data at 2.675 and 4.85 GHz with the best-fit spectrum obtained at lower frequencies. We furthermore note that Fig. 4 shows indications for a steepening of the halo spectrum with increasing aperture radius. The best-fit slope obtained using the smallest aperture $=13$ arcmin ($=0.27 R_{500}$), $\alpha = 1.10 \pm 0.01$, is smaller than that obtained with an aperture of 17.5 arcmin ($=0.37 R_{500}$), $\alpha = 1.17 \pm 0.02$, and than the best-fit slope to the T03 compilation of data, which refer to larger apertures (Sect. 3.1). This qualitatively agrees with the radial spectral steepening of the Coma radio halo that was reported by Giovannini et al. (1993) and Deiss et al. (1997). Method (iii) ------------ In a third approach we evaluated the importance of the SZ decrement by using the correlation found by PIPX between the y-signal and the radio flux in a beam area, $F(0.3, \Omega_b)$, from recent deep WSRT observations at 330 MHz (Brown & Rudnick 2011). This is $y = 10^{-5} \times 10^{(0.86\pm 0.02)} (F(0.3, \Omega_b)/Jy)^{0.92 \pm 0.04}$, using a 10-arcmin FWHM beam and measured to a maximum radial distance $\sim 0.8 \times$ $R_{500} \sim 38$ arcmin. From Eq.
1, this point-to-point radio-SZ correlation can be converted into a relation between the SZ-decrement integrated over a beam area $\Omega_b$, $\delta F_{SZ} (\nu, \Omega_b) = 2(k_B T)^3f(\nu) y \Omega_b/(hc)^2$, where $\Omega_b \simeq 9.6 \times 10^{-6}$rad$^2$ and the radio flux integrated in the same beam. Assuming that the radio halo has an intrinsic power-law spectrum, $F(\nu) \propto \nu^{-\alpha}$, it is $${{ \delta F_{SZ}(\nu, \Omega_b) }\over{ F(\nu, \Omega_b) }} \simeq -{{ 1.2 \times 10^{-4} }\over{ 0.33^{\alpha} }} ( {{\nu}\over{GHz}} )^{2 + \alpha} \left( {{F(0.3, \Omega_b)}\over{Jy}} \right)^{-0.08 \pm 0.04} \, .$$ Here the ratio $\delta F_{SZ} / F$ is calculated within a beam area ($\Omega_b = 9.6 \times 10^{-6}$rad$^2$) and depends on the distance from the cluster center because $F$, on the right-hand side of Eq. 4, decreases with distance. However, the quasi–linear scaling between $\delta F_{SZ}$ and $F$ makes this dependence very weak. Indeed, according to Fig. 9 in PIPX, $F(0.3, \Omega_b)$ ranges from $\sim 1$ Jy in the central halo regions to $\sim 0.06$ Jy in the periphery, implying a maximum variation of $<$25% in the ratio $\delta F_{SZ} / F$ as a function of distance. Such a weak dependence allows us to readily also derive a ratio $\delta F_{SZ} / F$ referred to a larger aperture. If we assume the average value of $F$ within the halo region, $F \sim 0.2$ Jy (PIPX), we obtain the ratio $\delta F_{SZ} / F$ on the aperture of the halo, $\Omega_H$ (with $R_H \sim 0.85 R_{500}$) $${{ \delta F_{SZ}(\nu, \Omega_H) }\over{ F(\nu, \Omega_H) }} \sim - 1.4 \times 10^{-4} {{(\nu/GHz)^{2 + \alpha} }\over{0.33^{\alpha} }} \big{\langle} ({{F(0.3, \Omega_b)}\over{ 0.2 Jy}})^a \big{\rangle}_{\Omega_H} \, ,$$ where $a=-0.08$ and $\langle .. \rangle$ is a flux-weighted average on the halo aperture. The ratio $\delta F_{SZ} / F$ from Eq. 5 at different frequencies is shown in Fig. 5 assuming different (intrinsic) spectral slopes. 
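Equation 5 can be evaluated directly. A minimal sketch, setting the flux-weighted average factor to unity (justified by the $<25$% variation noted above) and using the low-frequency slope $\alpha = 1.22$:

```python
# Ratio of the SZ decrement to the halo flux within the halo aperture (Eq. 5),
# with the flux-weighted average factor set to ~1 (it varies by < 25%).
def sz_to_halo_ratio(nu_ghz, alpha=1.22):
    return -1.4e-4 * nu_ghz ** (2 + alpha) / 0.33 ** alpha

for nu in (1.4, 2.675, 4.85, 8.45):
    r = sz_to_halo_ratio(nu)
    print(f"{nu:5.3f} GHz: delta F_SZ / F = {100 * r:6.2f}%")
```

At 4.85 GHz the ratio stays below the $\sim$10% level quoted below, while it grows quickly toward 8.45 GHz because of the $\nu^{2+\alpha}$ scaling.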
Assuming a power law with slope $\alpha \sim 1.22$, the spectrum of the halo at lower frequencies, we found that the negative signal caused by the SZ-decrement reduces the radio flux at 4.85 GHz by only $\leq$10%. That is much less than required to substantially affect the shape of the spectrum; for example, a reduction of about 75% would be required assuming the T03 data-set (Fig. 1). Therefore, as in the previous cases, we conclude that the steepening induced by the SZ-decrement is negligible. Radio halo and SZ-correction on larger scales --------------------------------------------- The recent WSRT observations (Brown & Rudnick 2011) allow the halo to be firmly traced out to unprecedentedly large scales, $\sim 0.8-0.9\, R_{500}$ radius (Fig. 3 (right)). On small apertures the flux of the halo derived from these observations is consistent with that from Venturi et al. (1990) (Fig. 4), but their higher sensitivity[^7] allows the detection of more flux. This effect can already be seen on apertures $0.4-0.5 R_{500}$ (Fig. 1) and is especially strong on larger scales (Fig. 3 (right)). If we had adopted the $\sim 0.8-0.9\, R_{500}$ scale for our calculations, the integrated SZ decrement would be about twice as large as that derived above; however, this effect is more than compensated for by the fact that the 330 MHz halo flux integrated on such a large scale is also almost three times higher than in Fig. 1 (Fig. 3, right). Consequently, in this way we would simply re-obtain a similar fractional decrement based on the y-radio correlation (Fig. 5). This is shown in Fig. 6 where the observed data-points at high frequency are compared with a bundle of power-law spectra normalized to the 330 MHz halo flux integrated on an aperture radius $=0.85\, R_{500}$ and corrected for the intervening (negative) SZ-decrement measured by Planck on the same aperture ($F_{obs}(\nu, \Omega_H) = F(\nu, \Omega_H) + \delta F_{SZ}(\nu, \Omega_H)$).
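The compensation argument is simple bookkeeping. The sketch below uses the approximate scale factors quoted above (decrement roughly doubled, 330 MHz flux almost tripled when moving to the larger aperture); both factors are taken from the text, not recomputed from the maps.

```python
# Going from the ~0.48 R_500 aperture to ~0.85 R_500 (Brown & Rudnick 2011):
# the integrated SZ decrement roughly doubles, but the 330 MHz halo flux
# nearly triples, so the *fractional* SZ correction does not grow.
sz_scale = 2.0      # decrement(larger aperture) / decrement(smaller aperture)
flux_scale = 3.0    # halo flux(larger aperture) / halo flux(smaller aperture)
ratio_change = sz_scale / flux_scale
print(f"fractional SZ correction changes by a factor {ratio_change:.2f}")
```

Since the factor is below unity, the fractional decrement on the larger aperture is, if anything, slightly smaller, consistent with re-obtaining the result of Fig. 5.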
We conclude that an intrinsic power-law spectrum with a slope $\alpha \sim 1.2-1.3$ would produce an observed flux at 4.8 GHz that is 7-8 times higher than that measured by current Effelsberg observations (T03); thus, very deep single-dish observations at high frequencies are expected to easily test for the presence of a spectral break in the spectrum of the Coma halo. As a final remark, we note that our approaches also assumed no influence from the SZ decrement on scales larger than the radio halo scale. To the extent that this is significant, it would eventually bias low the radio “zero” level at high frequencies, and consequently, our procedures would [*over-estimate*]{} the SZ-correction. We expect, however, that this may affect our conclusion only at the $\leq$ few percent level. ![image](combined3.ps){width="95.00000%"} ![ Spectrum of the radio halo extracted within an aperture of 17.5 arcmin (red, 0.37 $R_{500}$) and 13 arcmin (blue, 0.27 $R_{500}$). The high-frequency points are corrected for the SZ-decrement measured on the same scales. For comparison, the empty symbols mark fluxes measured in the same aperture radius using the Brown & Rudnick (2011) data. Best fits to the low-frequency data are reported as solid lines (same color-code), while the best fit to the T03 compilation (dashed line) and the synchrotron model with the cut-off of Fig. 1 (dot-dashed line) are reported for comparison.](spectrum_SZ_0p37e0p27R500NEW.ps){width="42.50000%"} ![ Ratio $\delta F_{SZ} / F$ as a function of the synchrotron spectral index of the emitted spectrum of the radio halo, assuming a power law $F(\nu) \propto \nu^{-\alpha}$ and an average halo flux in a beam area at 330 MHz $= 0.2$ Jy. Calculations are reported at 1.4, 2.65, 4.85, and 8.45 GHz.
Vertical dashed lines mark the synchrotron spectral index of the Coma radio halo derived from data at $\nu \leq 1.4$ GHz, $\alpha = 1.22 \pm 0.04$.[]{data-label="Fig.Lr_Lx"}](yF.ps){width="42.50000%"} ![Bundle of power-law spectra, $F(\nu) \propto \nu^{-\alpha}$, with $\alpha=1.1$, 1.22, 1.4, 1.55, 1.7 normalized to the flux of the radio halo derived from Brown & Rudnick (2011) WSRT data using an aperture radius $R_H =0.85 R_{500}$. Models are corrected for the SZ-decrement ($F_{obs}(\nu, \Omega_H) = F(\nu, \Omega_H) + \delta F_{SZ}(\nu, \Omega_H)$, Sect. 2) measured on the same aperture radius. The observed high-frequency points are taken from T03.](spectrum_SZ_PLANCK_nuFnu_ONLYDATA_RUDNICK_0p85R500NEW.ps){width="42.50000%"} Conclusions =========== The spectra of radio halos are important probes of the underlying mechanisms for the acceleration of the electrons responsible for the radio emission. The spectrum of the Coma radio halo shows a steepening at higher frequencies. This has triggered an on-going debate on the possibility that this steepening is not intrinsic to the emitted radiation, but it is caused by the intervening SZ effect with the thermal ICM. The recent Planck data (PIPX) allow for a correct evaluation of this effect. Using Planck results, we have shown that the negative signal caused by the SZ decrement does not produce a significant effect on the shape of the spectrum of the Coma radio halo. The spectral information of the Coma halo comes from heterogeneous observations in the past 30 years. For this reason, before evaluating the potential effect of the SZ-effect, we have discussed the main uncertainties on the halo spectrum that derive from the different sensitivities of the observations at different frequencies, from the different apertures used to measure the flux of the halo, and from subtracting discrete sources embedded in the halo region. 
We showed that the different sensitivities of the observation can explain the large $\pm 30$% scattering of the data-points observed in the global spectrum of the halo collected by T03. However, we also showed that neither the different sensitivity of the observations (and the aperture radius of the halo), nor the subtraction of discrete sources can naturally explain the steepening of the halo spectrum observed at higher frequencies. We examined the potential contribution of the SZ-effect to the observed steepening using three complementary approaches to ensure that our results are robust. With the first two methods we measured the SZ-decrement by self-consistently adopting the aperture radii used for flux measurements of the radio halo at the different frequencies. First we adopted the global compilation of data-points from T03 and a radius $=23$ arcmin which is consistent with the aperture used to measure the halo flux in the most sensitive observations, between 30 MHz and 1.4 GHz. We derived an SZ-decrement $= -1.08 (\nu/GHz)^2$mJy, which is about four times smaller than that required to explain the observed steepening. Second we used the available brightness profiles of the halo at 139, 330, and 1400 MHz to derive the spectrum of the halo within two fixed apertures, $=$17.5 and 13 arcmin, which correspond to the effective radius of the regions where the halo is detected at higher frequencies, 2.675 and 4.85 GHz, respectively. In this case the flux of the halo between 139 and 1400 MHz is lower than that in the T03 compilation, but the SZ-signal measured by Planck within these apertures also decreases significantly and is about 4-5 times weaker than that required to explain the steepening of the spectrum measured within the same apertures. As a third complementary approach we used the (almost) scale-independent correlation between $y$ and the 330 MHz halo’s flux within a beam–aperture discovered by PIPX. 
From this correlation we derived the ratio of the SZ-decrement and the radio flux of the halo, $\delta F_{SZ}/F$, and showed that this is very low. In particular, by assuming a spectral index of the halo $\alpha =1.2-1.3$ the ratio is $\delta F_{SZ}/F \leq 10$% at 4.85 GHz, whereas it should be $\geq 70$% to explain the steepening observed by T03. Consequently, based on our analysis of the current radio data, an intrinsic spectral break, or cut-off, is required in the energy distribution of the electrons that generate the radio halo. It is important to note, however, that the spectral analysis presented here does not tell the whole story of Coma radio spectrum. The recent very high sensitivity observations by Brown & Rudnick (2011) showed that most of the total flux at 330 MHz is emitted beyond a radius of 20-25 arcmin, implying that the halo flux is significantly higher than previously thought. The upcoming LOFAR observations at low frequencies and more sensitive single-dish measurements at high frequencies have the potential of detecting the halo on larger scales and will be essential for evaluating the global spectral shape of the halo and possible spectral variations with radius. For instance, we also showed that future deep observations with single dishes at 5 GHz are expected to measure a halo flux on a 40 arcmin aperture radius that is predicted to be $\sim$7-8 times higher than currently measured if a spectral steepening is absent, thus providing a complementary test to our present findings. We thank the referee for useful comments and R.Pizzo and T.Venturi for providing useful information on their observations. L. R. acknowledges support from the U.S. National Science Foundation, under grant AST-1211595 to the University of Minnesota. J.D. acknowledges support by FP7 Marie Curie programme “People” of the European Union. K.D. acknowledges the support by the DFG Cluster of Excellence “Origin and Structure of the Universe”. 
Ackermann, M., Ajello, M., Allafort, A., et al. 2010, ApJ, 717, L71 Baars, J., Genzel, G., Pauliny-Toth, I., Witzel, A. 1977, A&A 61, 99 Beresnyak, A., Xu, H., Li, H., Schlickeiser, R. 2013, ApJ 771, 131 Blasi P., Colafrancesco S., 1999, APh 12, 169 Blasi P., 2001, APh 15, 223 Brown S., Rudnick L., 2011, MNRAS, 412, 2 Brunetti, G. 2004, JKAS, 37, 493 Brunetti G., Setti G., Feretti L., Giovannini G., 2001, MNRAS 320, 365 Brunetti G., Blasi P., 2005, MNRAS 363, 1173 Brunetti G., Lazarian A., 2007, MNRAS 378, 245 Brunetti G., et al. 2008, Nature 455, 944 Brunetti, G., Lazarian, A., 2011, MNRAS, 410, 127 Brunetti, G., Blasi, P., Reimer, O., et al. 2012, MNRAS, 426, 956 Buote, D. A. 2001, ApJ, 553, L15 Carlstrom, J. E., Holder, G. P., & Reese, E. D. 2002, ARA&A, 40, 643 Cassano, R., Brunetti, G. 2005, MNRAS 357, 1313 Cassano, R., et al., 2008, A&A, 480, 687 Cassano, R., et al. 2010, ApJ 721, L82 Cassano, R., et al. 2013, arXiv:1306.4379 Deiss, B. M., Reich, W., Lesch, H., & Wielebinski, R. 1997, A&A, 321, 55 Dennison B., 1980, ApJ 239L Donnert J., Dolag K., Brunetti G., Cassano R., Bonafede A., 2010, MNRAS 401, 47 En[ß]{}lin, T. A. 2002, A&A, 396, L17 (E02) En[ß]{}lin, T., Pfrommer, C., Miniati, F., & Subramanian, K. 2011, A&A, 527, A99 Fujita, Y., Takizawa, M., Sarazin, C.L., 2003, ApJ 584, 190 Giovannini, G., Feretti, L., Venturi, T., Kim, K.-T., Kronberg, P.P., 1993, ApJ 406, 399 Giovannini, G., Tordi, M., Feretti, L., 1999, NewA 4, 141 Govoni F., Feretti L., Giovannini G. et al. 2001, A&A, 376, 803 Govoni, F., Markevitch, M., Vikhlinin, A., et al. 2004, ApJ, 605, 695 Hanisch, R. 1980, AJ 85, 1565 Helmboldt, J. F., Kassim, N. E., Cohen, A. S., Lane, W. M., Lazio, T. J.2008, ApJS, 174, 313 Liang H., Hunstead R.W., Birkinshaw M., Andreani P., 2000, ApJ 544, 686 Jeltema T.E., Profumo S., 2011, ApJ, 728, 53 Kale, R., Dwarakanath, K. S. 2010, ApJ, 718, 939 Kempner, J. C., Sarazin, C. 
L., 2001, ApJ, 548, 639 Keshet U., Loeb A., 2010, ApJ, 722, 737 Kuehr, H., Witzel, A., Pauliny-Toth, I. I. K., Nauber, U. 1981, A&AS, 45, 367 Macario, G., Venturi, T., Brunetti, G., et al. 2010, A&A, 517, A43 Macario, G., Venturi, T., Intema, H. T., et al. 2013, A&A, 551, A141 Orru’, E., Murgia, M., Feretti, L., et al. 2007, A&A, 467, 943 Perley, R., Butler, B. 2013, ApJS, 204, 19 Petrosian V., 2001, ApJ 557, 560 Pfrommer C., Enßlin T. A. 2004, A&A 413, 17 (PE04) Pizzo, R., 2010, PhD Thesis, Groningen University Planck Collaboration X., 2012, A&A sub., arXiv:1208.3611 (PIPX) Reimer A., Reimer O., Schlickeiser R., Iyudin A., 2004, A&A 424, 773 Schlickeiser R., Sievers A., Thiemann H.: 1987, A&A 182, 21 Thierbach M., Klein U., Wielebinski R., 2003, A&A 397, 53 (T03) van Weeren, R. J., R[ö]{}ttgering, H. J. A., Rafferty, D. A., et al. 2012, A&A, 543, A43 Venturi, T., Giovannini, G., & Feretti, L. 1990, AJ, 99, 1381 Venturi T., Giacintucci S., Dallacasa D. et al. 2008, A&A 484, 327 Venturi, T., Giacintucci, S., Dallacasa, D., et al. 2013, A&A, 551, A24 [^1]: [^2]: Donnert et al. used the Coma cluster from a constrained cosmological simulation of the local Universe. [^3]: We also add WSRT data at 139 MHz (Pizzo 2010). [^4]: When we evaluated the corrections to the halo spectrum below 300 MHz by extrapolating the Perley & Butler analytic fit to the spectrum of 3C 286, we found a marginal increment of the halo flux below 50 MHz. [^5]: Here we ignore point 7, at 430 MHz (Hanisch 1980), which shows a larger radius, to have a consistent aperture within which the fluxes are measured. However, including it has no effect on the spectral fit (Fig. 1). In addition, we note that although its larger radius should have led to a higher flux, oversubtraction of point sources in the central region of the halo, which led to a bowl in the Hanisch (1980) map (Fig. 1c in Hanisch 1980), resulted in fluxes consistent with the 23 arcmin power law.
[^6]: Within this aperture we found that the SZ-decrement measured by Planck is $\simeq -3.7 (\nu/{\rm GHz})^2$ mJy. This is similar to, although slightly smaller than, the PE04 value, which was calculated using X-ray data, assuming an isothermal (kT = 8.2 keV) and spherical cluster with the spatial distribution of the gas density given by the extrapolation of the beta-model to $R = 5 h_{50}^{-1}$ Mpc distances.

[^7]: These observations are $\sim$4-5 times deeper (brightness sensitivity) than those in Venturi et al. (1990).
--- abstract: | Hardness magnification reduces major complexity separations (such as $\mathsf{EXP} \nsubseteq \mathsf{NC}^1$) to proving lower bounds for some natural problem $Q$ against weak circuit models. Several recent works [@OS18_mag_first; @MMW_STOC_paper; @CT19_STOC; @OPS19_CCC; @CMMW_CCC_paper; @DBLP:conf/icalp/Oliveira19; @Magnification_FOCS19] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than $Q$, while $Q$ itself is susceptible to lower bounds but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon, and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program: – *Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich* [@DBLP:journals/jcss/RazborovR97]*?* – *Can we adapt known lower bound techniques to establish the desired lower bound for $Q$?* We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem [${\sf MCSP}$]{} imply the non-existence of natural proofs. As a corollary of our result, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification.
This is captured by a *locality barrier*: existing magnification theorems *unconditionally* show that the problems $Q$ considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification. author: - 'Lijie Chen[^1]\' - 'Shuichi Hirahara[^2]\' - 'Igor C. Oliveira[^3]\' - 'Ján Pich[^4]\' - 'Ninad Rajgopal[^5]\' - | Rahul Santhanam[^6]\  \ bibliography: - 'refs.bib' title: | Beyond Natural Proofs:\ Hardness Magnification and Locality --- Preliminaries {#s:preliminaries} ============= Magnification Frontiers {#a:improved_magnification_MCSP} ======================= Hardness Magnification and Natural Proofs {#s:non-natural} ========================================= The Locality Barrier {#s:difficulties} ==================== Acknowledgements {#acknowledgements .unnumbered} ================ Part of this work was completed while some of the authors were visiting the Simons Institute for the Theory of Computing. We are grateful to the Simons Institute for their support. This work was supported in part by the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2014)/ERC Grant Agreement no. 615075. Ján Pich was supported in part by Grant 19-05497S of GA ČR. Lijie Chen is supported by NSF CCF-1741615 and a Google Faculty Research Award. Igor C. 
Oliveira was supported in part by a Royal Society University Research Fellowship.[^7] Review of Hardness Magnification in Circuit Complexity {#a:review_magnification} ====================================================== [^1]: `lijieche@mit.edu` [^2]: `s_hirahara@nii.ac.jp` [^3]: `igor.oliveira@warwick.ac.uk` [^4]: `jan.pich@cs.ox.ac.uk` [^5]: `ninad.rajgopal@cs.ox.ac.uk` [^6]: `rahul.santhanam@cs.ox.ac.uk` [^7]: Most of this work was completed while Igor C. Oliveira was affiliated with the University of Oxford.
[**DECAY OF A SCALAR $\sigma$-MESON NEAR THE CRITICAL END-POINT IN THE PNJL MODEL**]{} [*A.V. Friesen, Yu.L. Kalinovsky and V.D. Toneev*]{}\ Joint Institute for Nuclear Research, Dubna [Properties of a scalar $\sigma$-meson are investigated in the two-flavor Nambu-Jona-Lasinio model with the Polyakov loop. A model analysis of the phase diagram of strongly interacting matter is performed. The temperature dependence of the $\sigma\rightarrow\pi\pi$ decay width is studied at zero chemical potential and near the critical end-point. The calculated strong coupling constant $g_{\sigma\pi\pi}$ and the decay width are compared with available experimental data and other model results. A nonthermal enhancement of the total decay width is noted for the $\sigma$ meson near the critical end-point when the condition $m_\sigma\geq2m_\pi$ is broken. ]{}\ PACS 13.25.Jx, 25.75.Nq

Introduction {#introduction .unnumbered}
============

The models of Nambu–Jona-Lasinio type [@volk2; @volk3; @Ebert; @VolkM] have a long history and are used to describe the dynamics and thermodynamics of light mesons. This type of model gives a simple and practical example of the basic mechanism of spontaneous breaking of chiral symmetry and of key features of QCD at finite temperature and chemical potential [@echaya; @njl2; @klev; @hatsuda; @Eb-kalin]. The behavior of a QCD system is governed by the symmetry properties of the Lagrangian, namely, the global $SU_L(N_f)\times SU_R(N_f)$ symmetry which is spontaneously broken to $SU_V(N_f)$ and the exact $SU_c(N_c)$ local color symmetry. On the other hand, in a non-Abelian pure gauge theory, the Polyakov loop serves as an order parameter of a transition from the low temperature confined phase ($Z_{N_c}$ symmetric) to the high temperature deconfined phase characterized by the spontaneously broken $Z_{N_c}$ symmetry (PNJL model).
In the PNJL model, quarks are coupled simultaneously to the chiral condensate and to the Polyakov loop, and the model includes the features of both the chiral and $Z_{N_c}$ symmetry breaking. The model reproduces rather successfully lattice data on QCD thermodynamics. The use of the PNJL model is therefore reasonable for investigating the in-medium properties of mesons and their decays [@pedro; @mesons]. The aim of this work is the investigation of the meson properties and of the $\sigma$ decay near the critical end-point (CEP). In this letter, we discuss the decay process $\sigma \to \pi\pi$ at finite temperature $T$ and chemical potential $\mu$ in the framework of the Nambu-Jona-Lasinio model with the Polyakov loop (PNJL), which is believed to describe well the chiral properties and simulates a deconfinement transition. Our motivation here is to elaborate these features in a large region of the temperature $T$ and quark chemical potential $\mu$, where a non-thermal enhancement of pions due to the $\sigma\to\pi\pi$ decay may take place.

The model and the phase diagram
===============================

We use the two-flavor PNJL model with the following Lagrangian [@pnjl1; @pnjl2; @pnjl3] $$\begin{aligned}
\label{pnjl}
\mathcal{L}_{\it PNJL}=\bar{q}\left(i\gamma_{\mu}D^{\mu}-\hat{m}_0 \right) q+ G \left[\left(\bar{q}q\right)^2+\left(\bar{q}i\gamma_5 \mathbf {\tau} q \right)^2\right] -\mathcal{U}\left(\Phi[A],\bar\Phi[A];T\right),\end{aligned}$$ where the covariant gauge derivative is $D_\mu \equiv \partial_\mu -iA_\mu$ with $A^\mu = \delta_0^\mu A^0$, $A^0 = -iA_4$ (the Polyakov gauge). The strong coupling constant is absorbed in the definition of $A_{\mu}$. At zero temperature the Polyakov loop field $\Phi$ and the quark field are decoupled. Here, $\bar{q} = (\bar{u},\bar{d})$ is the quark field, $\hat{m}_0 =\mbox{diag} (m_u, m_d)$ is the current mass matrix, the Pauli matrices $\mathbf {\tau}$ act in the two-flavor space and $G$ is the coupling constant.
The gauge sector of the Lagrangian density (\[pnjl\]) is described by an effective potential $\mathcal{U}\left(\Phi[A],\bar\Phi[A];T\right)\equiv \mathcal{U}\left(\Phi,\bar\Phi;T\right)$ $$\begin{aligned}
\label{effpot}
\frac{\mathcal{U}\left(\Phi,\bar\Phi;T\right)}{T^4} &=&-\frac{b_2\left(T\right)}{2}\bar\Phi \Phi- \frac{b_3}{6}\left(\Phi^3+ {\bar\Phi}^3\right)+ \frac{b_4}{4}\left(\bar\Phi \Phi\right)^2~,\end{aligned}$$ where $$\begin{aligned}
\label{Ueff}
b_2\left(T\right)&=&a_0+a_1\left(\frac{T_0}{T}\right)+a_2\left(\frac{T_0}{T} \right)^2+a_3\left(\frac{T_0}{T}\right)^3~.\end{aligned}$$ The parameter set is obtained by fitting the lattice results in the pure $SU(3)$ gauge theory at $T_0 = 0.27$ GeV [@pnjl2; @pnjl3] and is given in Table \[table1\].

  $a_0$   $a_1$   $a_2$   $a_3$   $b_3$   $b_4$
  ------- ------- ------- ------- ------- -------
  6.75    -1.95   2.625   -7.44   0.75    7.5

  : The parameter set of the effective potential $\mathcal{U}(\Phi, \overline{\Phi}; T)$. []{data-label="table1"}

Before discussing the meson properties, the gap equation for the constituent quark mass should be introduced. For describing the system properties at finite temperature and density, the grand canonical potential in the Hartree approximation is considered [@pnjl2; @pnjl3] $$\begin{aligned}
\label{grandcan}
\Omega (\Phi, \bar{\Phi}, m, T, \mu) &=& \mathcal{U}\left(\Phi,\bar\Phi;T\right) + N_f \frac{(m-m_0)^2}{4G}- 2N_c N_f \int_\Lambda \dfrac{d^3p}{(2\pi)^3} E_p \nonumber \\ && - 2N_f T \int \dfrac{d^3p}{(2\pi)^3} \left[ \ln N_\Phi^+(E_p)+ \ln N_\Phi^-(E_p) \right]~,\end{aligned}$$ where $E_p$ is the quark energy, $E_p=\sqrt{{\bf p}^2+m^2}$, $E_p^\pm = E_p\mp \mu$, and $$\begin{aligned}
&& N_\Phi^+(E_p) = \left[ 1+3\left( \Phi +\bar{\Phi} e^{-\beta E_p^+}\right) e^{-\beta E_p^+} + e^{-3\beta E_p^+} \right]^{-1}, \\ && N_\Phi^-(E_p) = \left[ 1+3\left( \bar{\Phi} + {\Phi} e^{-\beta E_p^-}\right) e^{-\beta E_p^-} + e^{-3\beta E_p^-} \right]^{-1}.\end{aligned}$$ Integrals in Eq.
(\[grandcan\]) contain the three-momentum cutoff $\Lambda$. From the grand canonical potential $\Omega$ the equations of motion can be obtained $$\label{set}
\dfrac{\partial \Omega}{\partial m} = 0, \,\, \dfrac{\partial \Omega}{\partial \Phi} = 0, \,\, \dfrac{\partial \Omega}{\partial \bar{\Phi}} = 0,$$ and the gap equation for the constituent quark mass can be written as follows: $$\begin{aligned}
\label{gap-eq}
m = m_0-N_f G <{\bar q}q> =m_0 + 8 G N_c N_f \int_{\Lambda} \dfrac{d^3p}{(2\pi)^3} \dfrac{m}{E_p} \left[ 1 - f^+_{\Phi} - f^- _{\Phi}\right]~,\end{aligned}$$ where $f_\Phi^+$, $f_\Phi^-$ are the modified Fermi functions $$\begin{aligned}
f_\Phi^+ = ((\Phi + 2\bar{\Phi}e^{-\beta E^+})e^{-\beta E^+} +e^{-3\beta E^+})N_\Phi^+, \nonumber\\ f_\Phi^- = ((\bar{\Phi} + 2{\Phi}e^{-\beta E^-})e^{-\beta E^-} +e^{-3\beta E^-})N_\Phi^-, \label{fermi}\end{aligned}$$ with $E^\pm = E\mp\mu$. The regularization parameter $\Lambda$, the quark current mass $m_0$, the coupling strength $G$ and the physical quantities used to fix these parameters are presented in Table \[param\].

  $m_0$ \[MeV\]   $\Lambda$ \[GeV\]   $G$ \[GeV\]$^{-2}$   $F_\pi$ \[GeV\]   $m_\pi$ \[GeV\]
  --------------- ------------------- -------------------- ----------------- -----------------
  5.5             0.639               5.227                0.092             0.139

  : The model parameters and quantities used for their tuning.[]{data-label="param"}

The $\sigma$ and $\pi$ meson masses are the solutions of the equation $$1 - 2G \ \Pi_{{ps}/{s}}(k^2) = 0,$$ where $k^2 = m^2_\pi$ and $k^2 = m^2_\sigma$ in the pseudoscalar and scalar sectors, respectively, and $\Pi_{{ps}/{s}}$ are the standard mesonic correlation functions [@klev] $$\begin{aligned}
&& i\Pi_\pi (k^2) = \int \frac{d^4p}{(2\pi)^4} \ \mbox{Tr}\, \left[ i \gamma_5 \tau^a S(p+k) i \gamma_5 \tau^b S(p) \right], \label{Polpi} \\ && i\Pi_\sigma (k^2)= \int \frac{d^4p}{(2\pi)^4} \ \mbox{Tr}\, \left[ i S(p+k)i S(p) \right].
\label{Polsig}\end{aligned}$$ Both the pion-quark $g_{\pi}(T,\mu)$ and the sigma-quark $g_{\sigma}(T,\mu)$ coupling strengths can be obtained from $\Pi_{{ps}/{s}}$: $$\begin{aligned}
g_{\pi /\sigma}^{-2}(T, \mu) = \frac{\partial\Pi_{{\pi}/{\sigma}}(k^2)} {\partial k^2}\vert_{^{k^2 = m_\pi^2}_{k^2 =m_\sigma^2} }. \label{couple}\end{aligned}$$ As is seen from the gap equation (\[gap-eq\]), the quark condensate $<\bar{q}q>$ completely defines the quark mass in a hot and dense matter. This correlation is clearly demonstrated in Fig. \[cond\], where the temperature dependence of the order parameters of the chiral condensate and the Polyakov loop, as well as the sigma and pion masses, are shown for $\mu=0$. The pion hardly changes its mass and starts to become heavier only near $T_c \sim$ 200 MeV, while the sigma mass $m_\sigma(T)$ decreases as the chiral symmetry gets restored; eventually $m_\pi(T)$ becomes larger than twice the quark mass $m_q$ at the temperature $T_{Mott} \sim$ 190 MeV. Since it is still difficult to extract definite information from lattice simulations at nonzero baryon density, QCD models are needed for investigating the phase transitions at finite baryon density. The calculated phase diagram of the physical states of matter within the PNJL model is given in Fig. \[phasediag\]. As is seen, in the real world with nonzero pion mass we have a first-order phase transition at moderate temperature and large baryon chemical potential $\mu_B=3\mu$ that, with increasing $T$, terminates at the critical end-point $(T_{CEP},\mu_{CEP})$ where the second-order phase transition occurs. At higher temperatures $T>T_{CEP}$ we have a smooth crossover. In the chiral limit with massless pions there is a tricritical point that separates the second-order phase transition at high temperature $T$ from the first-order transition at lower $T$ and high $\mu$.
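As a quick consistency check of the parameter set in Table \[param\], the vacuum limit of the gap equation (\[gap-eq\]) can be solved numerically. The sketch below is not from the original paper: it takes $T=\mu=0$, where the thermal factors $f_\Phi^\pm$ vanish, and assumes the conventional two-flavor normalization $m = m_0 + 8 G N_c \int_\Lambda \frac{d^3p}{(2\pi)^3}\, m/E_p$ (i.e. the flavor factor is taken to be absorbed into the condensate definition). A simple fixed-point iteration then settles at a constituent mass of roughly 0.33 GeV, the standard value for this classic NJL parametrization.

```python
import math

# Model parameters from Table [param] (GeV units)
m0 = 0.0055   # current quark mass
Lam = 0.639   # three-momentum cutoff
G = 5.227     # four-quark coupling, GeV^-2
Nc = 3        # number of colors

def gap_rhs(m, n=4000):
    """Right-hand side of the vacuum gap equation
    m = m0 + 8 G Nc * integral_0^Lam d^3p/(2 pi)^3 * m/E_p,
    evaluated with a midpoint rule."""
    dp = Lam / n
    s = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        s += p * p * m / math.sqrt(p * p + m * m) * dp
    return m0 + 8.0 * G * Nc * s / (2.0 * math.pi ** 2)

# Fixed-point iteration for the constituent quark mass
m = 0.3
for _ in range(200):
    m = gap_rhs(m)
print(f"constituent quark mass m = {m*1e3:.0f} MeV")
```

The iteration is contracting for these parameters, so the starting value is not critical.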
For the model parameters chosen (see Table \[param\]) we obtain $T_{CEP} \simeq $0.095 GeV and $\mu_{CEP} \simeq $0.32 GeV (cf. with [@FKT11], where the critical temperature and chemical potential are calculated with the same parameters).

Decay $\sigma\rightarrow\pi\pi$
===============================

It is in order here to mention the significance of the scalar meson $\sigma$ (a chiral partner of the pion) in QCD. A model-independent consequence of dynamical breaking of chiral symmetry is the existence of the pion and of its chiral partner, the $\sigma$-meson: the former is the phase fluctuation of the order parameter ${\bar q}q$ while the latter is its amplitude fluctuation. During the expansion of the system, the in-medium $\sigma$ mass increases toward its vacuum value and eventually exceeds the $2m_\pi$ threshold. As the $\sigma\to\pi\pi$ coupling is large, the decay proceeds rapidly. Since this process occurs after freeze-out, the pions generated by it do not have a chance to thermalize. Thus, one may expect that the resulting pion spectrum should have a nonthermal enhancement at low transverse momentum. To the lowest order in a $1/N_c$ expansion, the diagram for the process $\sigma\rightarrow\pi\pi$ is shown in Fig. \[fdiag\].

![The Feynman diagram of the $\sigma\rightarrow\pi\pi$ decay.](sigpipi.eps) \[fdiag\]

The amplitude of the triangle vertex $\sigma\rightarrow\pi\pi$ can be obtained analytically as $$\begin{aligned}
A_{\sigma\pi\pi} = \int \frac{d^4q}{(2\pi)^4} \ Tr \{S(q) \ \Gamma_\pi \ S(q+P) \ \Gamma_\pi \ S(q)\},\end{aligned}$$ where $\Gamma_\pi = i\gamma_5\tau$ is the pion vertex function and $S(q) = ({\hat{q} + m})/({\hat{q}^2 - m^2})$ is the quark propagator; the trace is taken over color, flavor and spinor indices.
After tracing and evaluation of the Matsubara sum one obtains [@hadron; @zhuang] $$\begin{aligned}
A_{\sigma\pi\pi} &=& 2mN_cN_f\int\frac{d^3q}{(2\pi)^3}\frac{(1 - f^+_{\Phi} - f^-_{\Phi})}{2E_q} \nonumber \\ &\times& \frac{({\bf q}\cdot {\bf p})^2 - (2m_{\sigma}^2 + 4m_{\pi}^2) ({\bf q}\cdot {\bf p}) + m_{\sigma}^2/2 - 2m_{\sigma}^2E_q^2}{(m_{\sigma}^2 - 4E_q^2)((m_\pi^2 - 2 ({\bf q}\cdot {\bf p}))^2 - m_{\sigma}^2E_q^2)}~,\end{aligned}$$ where $f_\Phi^+$, $f_\Phi^-$ are the modified Fermi functions (\[fermi\]). The coupling strength is $g_{\sigma\pi\pi}(T, \mu) = 2g_\sigma g^2_\pi A_{\sigma\pi\pi}(T, \mu)$, where $g_\sigma$ and $g_\pi$ are the coupling constants defined from Eq. (\[couple\]). The decay width is defined by the cut of the Feynman diagram in Fig. \[fdiag\], treating the sigma meson as a quark-antiquark system $$\begin{aligned}
\label{Gam0}
\Gamma_{\sigma\rightarrow\pi\pi} = \frac{3}{2} \ \frac{g^2_{\sigma\pi\pi}} {16\pi \ m_{\sigma}} \sqrt{1 - \frac{4m_{\pi}^2}{m_{\sigma}^2}}~.\end{aligned}$$ The scalar meson $\sigma$ can decay either into two neutral or into two charged pions. All these channels are taken into account: the factor 3/2 in Eq. (\[Gam0\]) accounts for isospin conservation. For the decay $\sigma\rightarrow\pi\pi$ to be allowed, the kinematic factor $\sqrt{1-{4m_{\pi}^2}/{m_{\sigma}^2}}$ in (\[Gam0\]) leads to the constraint $m_{\sigma} \geq 2m_{\pi}$, which is $T$- and $\mu$-dependent. One may expect that in the temperature region where this condition is broken the values of $g_{\sigma\pi\pi}$ and $\Gamma_{\sigma\rightarrow\pi\pi}$ will drop to zero (Fig. \[fdec\]). In the case $\mu=0$ considered here, the kinematic condition is broken at the $\sigma\to\pi\pi$ dissociation temperature $T_d^\sigma\approx$ 210 MeV. The coupling constant $g_{\sigma \pi \pi}$ is about $2.1$ GeV (note the scaling factor 1/10 in Fig. \[fdec\]) in vacuum and stays almost constant up to $T\leqslant 0.22$ GeV (at $\mu = 0$), and then it drops to zero at $m_{\sigma} = 2m_{\pi}$.
The experimental value, extracted from the J/$\psi$ decays, $g_{\sigma\pi\pi}=2.0^{+0.3}_{-0.9}$ GeV [@BES01], is in reasonable agreement with our result. It is of interest to note that the quark-meson models predict $g_{\sigma\pi\pi}=$1.8 GeV [@FGILW03] and $1.8^{+0.5}_{-0.3}$ GeV [@HZH02], and the linear sigma model gives $g_{\sigma\pi\pi}=2.54\pm$ 0.01 GeV [@DR01]. The total $\sigma$ decay width was measured recently in two experiments [@BES01; @HZH02; @E791]. The Beijing Spectrometer (BES) Collaboration at the Beijing Electron-Positron Collider reported evidence of the existence of the $\sigma$ particle in J/$\psi$ decays. In the $\pi^+\pi^-$ invariant mass spectrum of the process J/$\psi \to \sigma\omega\to \pi^+\pi^-\omega$ they found a low mass enhancement, and the detailed analysis strongly favors $0^{++}$ spin parity with high statistical significance for the existence of the $\sigma$ particle. The BES measured values of the $\sigma$ mass and total width are [@BES01; @HZH02] $$\begin{aligned}
\label{exp1}
m_\sigma=390^{+60}_{-36} \ MeV, \hspace*{1cm} \Gamma_{\sigma\to\pi\pi}= 282_{-50}^{+77} \ MeV.\end{aligned}$$ The E791 Collaboration at Fermilab reported on evidence of a light and broad scalar resonance in the nonleptonic cascade decays of heavy mesons [@E791]. It was found in the Fermilab experiment that the $\sigma$ meson is rather important in the $D$ meson decay $D\to3\pi$, generated by the intermediate $\sigma$-resonance channel $$\begin{aligned}
\label{exp2}
m_\sigma=478^{+24}_{-23}\pm17 \ MeV, \hspace*{1cm} \Gamma_{\sigma\to\pi\pi}= 324_{-42}^{+40}\pm 21 \ MeV.\end{aligned}$$ These experimental values should be compared with $\Gamma_{\sigma\to\pi\pi}\simeq$190 MeV at $T=\mu=$0 (see Fig. \[fdec\]).
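The vacuum figure quoted above follows directly from Eq. (\[Gam0\]); a short numerical check, using the values quoted in the text ($g_{\sigma\pi\pi}=2.1$ GeV, $m_\sigma=0.620$ GeV, $m_\pi=0.139$ GeV):

```python
import math

def sigma_width(g, m_sigma, m_pi):
    """Eq. (Gam0): Gamma = (3/2) g^2 / (16 pi m_sigma) * sqrt(1 - 4 m_pi^2/m_sigma^2)."""
    phase_space = math.sqrt(1.0 - 4.0 * m_pi**2 / m_sigma**2)
    return 1.5 * g**2 / (16.0 * math.pi * m_sigma) * phase_space

# Model values quoted in the text (GeV)
gamma = sigma_width(g=2.1, m_sigma=0.620, m_pi=0.139)
print(f"Gamma(sigma -> pi pi) = {gamma*1e3:.0f} MeV")  # ~190 MeV
```

which reproduces the quoted $\Gamma_{\sigma\to\pi\pi}\simeq$190 MeV; the width is suppressed as $m_\sigma$ approaches the $2m_\pi$ threshold, as stated above.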
One can additionally take into account the Bose-Einstein statistics of the final pion states by introducing the factor $F_{\pi\pi} = (1+f_B(\frac{m_\sigma}{2}))^2$ [@zhuang] into the total decay width (\[Gam0\]), where the boson distribution function is $f_B(x) = (e^{x/T} - 1)^{-1}$. In contrast to the kinematic factor $\sqrt{1-{4m_{\pi}^2}/{m_{\sigma}^2}}$, this pion distribution function tends to increase the width. Near the Mott temperature, $\Gamma_{\sigma\to\pi\pi}\approx$250 MeV at $\mu=$0. In fact, the numerical calculation shows that $\Gamma_{\sigma\to\pi\pi}(T)$ decreases as $T$ goes up, and eventually vanishes at a high temperature. The measured widths (\[exp1\]),(\[exp2\]) are somewhat higher than that in our model at $T=$0. It is noteworthy that the measured $\sigma$ masses $390^{+60}_{-36}$ [@BES01] and $478^{+24}_{-23}$ [@E791] are noticeably smaller than in our model, $m_\sigma\approx$620 MeV (fixed by the model parameters), while the total decay width depends strongly on $m_\sigma$, see Eq. (\[Gam0\]). The quark-meson models give decay widths that do not differ essentially from our estimate: $\Gamma_{\sigma\to\pi\pi}=$173 [@FGILW03] and 149.9 [@MA10] MeV, although the sigma meson masses used are close to the experimental ones, being $m_\sigma=$ 485.5 and 478 MeV, respectively. The decay width has not really been studied at nonzero baryon density, in particular in the region near the critical end-point. For finite $\mu$ the coupling strength and the meson masses behave in a nontrivial way. Both $\sigma$ and $\pi$ mesons suffer a jump in the region of the first-order phase transition (see the curve for $\mu_{CEP}+10$ MeV in Fig. \[massB\]), which ends at the critical end-point $\mu_{CEP}$; for $\mu<\mu_{CEP}$, where the crossover-type phase transition occurs, they change continuously. According to (\[Gam0\]), this mass behavior defines the total decay rate of a $\sigma$ meson at $\mu \neq 0$, as shown in the right panel of Fig. \[deccep\].
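The enhancement factor $F_{\pi\pi}$ introduced above is elementary to evaluate; a small sketch (the temperatures below are chosen for illustration only and are not values computed in the paper):

```python
import math

def f_bose(x, T):
    """Bose distribution f_B(x) = 1/(e^{x/T} - 1); expm1 keeps small x/T accurate."""
    return 1.0 / math.expm1(x / T)

def F_pipi(m_sigma, T):
    """Final-state enhancement F = (1 + f_B(m_sigma/2))^2 applied to Eq. (Gam0)."""
    return (1.0 + f_bose(m_sigma / 2.0, T)) ** 2

# Near the Mott temperature m_sigma approaches 2 m_pi, so m_sigma/2 ~ m_pi:
print(F_pipi(0.278, 0.19))   # sizeable enhancement at T ~ 190 MeV
print(F_pipi(0.278, 0.001))  # -> 1 as T -> 0 (no thermal pions)
```

This makes explicit why $F_{\pi\pi}$ acts opposite to the kinematic factor: it grows with the thermal pion occupation and reduces to unity in vacuum.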
As can be seen, the $T$-region of the decay enhancement becomes narrower with increasing $\mu$. The total widths $\Gamma_{\sigma\to\pi\pi}$ and $\Gamma_{\sigma\to\pi\pi} F_{\pi\pi}$ tend to grow with temperature, exhibiting a narrow maximum due to particularities of the $g_{\sigma\pi\pi}^2$ term in (\[Gam0\]). The inclusion of the $F_{\pi\pi}$ factor enhances this maximum, but the effect becomes smaller when one moves by $\Delta \mu$ toward larger $\mu$ from $\mu_{CEP}$, because the pion mass increases above $\mu_{CEP}$ (see Fig. \[massB\]). If the chemical potential is $\mu=\mu_{CEP}$, the temperature at which the coupling strength $g_{\sigma\pi\pi}$ and the total decay width $\Gamma_{\sigma\to\pi\pi}$ drop to zero is just $T=T_{CEP}$. If $\mu$ increases (decreases) with respect to $\mu_{CEP}$ by $\Delta \mu\sim$10 MeV, this temperature decreases (increases) by about 30 MeV from $T_{CEP}$. Maximal values of $\Gamma_{\sigma\to\pi\pi}$ near the critical end-point are larger than those for $\mu=$0 by a factor of about 3. This means that a $\sigma$ meson lives shorter in dense baryonic matter. The shape of $g_{\sigma\pi\pi}(T)$ is insensitive to $\mu$ around $\mu_{CEP}$. The full decay width $\Gamma_{\sigma\to\pi\pi}$ exhibits a wide maximum near the temperature $T_{CEP}$. This maximum is enhanced due to the $F_{\pi\pi}$ factor and gets more pronounced for smaller $\mu$.

Concluding remarks {#concluding-remarks .unnumbered}
==================

The two-flavor PNJL model, which reasonably describes quark-meson thermodynamics at finite temperature and chemical potential, is applied to calculate the $\sigma\to \pi\pi$ decay in this medium. The emphasis here is made on the behavior near the critical end-point. At $\mu=$0 and $T\to$0 the coupling constant $g_{\sigma\pi\pi}=$2.1 GeV and the total $\sigma$ decay width $\Gamma_{\sigma\to\pi\pi}\approx$190 MeV are in reasonable agreement with both available experimental data and quark-meson model estimates.
At finite quark chemical potential near the critical end-point, the $\Gamma_{\sigma\to\pi\pi}$ width shows a sharp maximum coming from the particular behavior of the coupling strength $g_{\sigma\pi\pi}$. The sigma mesons live here a shorter time than in baryonless matter. The rapid decrease of $\Gamma_{\sigma\to\pi\pi}$ at high temperature is due to the phase space factor (see Eq. (\[Gam0\])). Accounting for the Bose-Einstein statistics of the final-state pions (the factor $F_{\pi\pi}$) results in the appearance of a nonthermal maximum of the decay width near the values of $T$ and $\mu$ at which the kinematic condition $m_{\sigma} \geq 2m_{\pi}$ is broken. This width enhancement is about $\sim 20\%$ at $\mu_{CEP}$ and is negligible if one moves to $\mu=\mu_{CEP}+\Delta \mu$. The presented results are obtained in the first order of the $1/N_c$ expansion. In a more realistic case the $\sigma-\omega$ and $\sigma-A_1$ mixing can affect noticeably the considered quantities, especially for $\mu\ne $0 [@STT98]. However, this corresponds to higher orders in $1/N_c$. The measurement of a nonthermal enhancement of pions might be considered as a signature of the chiral phase transition. However, it is a difficult experimental problem, since the $\sigma$ lifetime is very short and the pion contribution from the resonance decay should be separated carefully. So a more elaborate analysis is needed.

Acknowledgments {#acknowledgments .unnumbered}
===============

We are grateful to P. Costa, E.A. Kuraev, V.V. Skokov, and M.K. Volkov for useful comments. V.T. acknowledges the financial support within the “HIC for FAIR” center of the “LOEWE” program and a Heisenberg-Landau grant. The work of Yu. K. was supported by the RFFI grant 09-01-00770a.

[99]{}
Composite-meson model with vector dominance based on U(2) invariant four-quark interactions // Z. Phys. C. 1983. V. 16, No 3. P. 205-210.
Meson Lagrangians in a superconductor quark model // Ann. Phys. 1984. V. 157, No 1. P. 282.
Effective chiral hadron lagrangian with anomalies and skyrme terms from quark flavour dynamics // Nucl. Phys. B. 1986. V. 271, No 1. P. 188-226.
Low-Energy Meson Physics in the Quark Model of Superconductivity type // Sov. J. Part. and Nuclei. 1986. V. 17. P. 186; [*Ebert D., Reinhardt H., Volkov M. K.*]{}, Effective Hadron Theory of QCD // Prog. Part. Nucl. Phys. 1994. V. 33. P. 1-120.
The Nambu-Jona-Lasinio model and its development // Phys. Usp. 2006. V. 49. P. 551-561.
Effective hadron theory of QCD // Prog. Part. Nucl. Phys. 1991. V. 27. P. 195.
The Nambu-Jona-Lasinio model of quantum chromodynamics // Rev. Mod. Phys. 1992. V. 64. P. 649-708.
QCD phenomenology based on a chiral effective Lagrangian // Phys. Rep. 1984. V. 247. P. 221-367.
Mesons and diquarks in a NJL model at finite temperature and chemical potential // Int. J. Mod. Phys. A. 1993. V. 8. P. 1295-1312.
Pseudoscalar neutral mesons in hot and dense matter // Phys. Lett. B. 2003. V. 560. P. 171-177.
Pseudoscalar mesons in hot, dense matter // Phys. Rev. C. 2004. V. 70. 025204.
Quark-gluon plasma as a condensate of Z(3) Wilson lines // Phys. Rev. D. 2000. V. 62. 111501.
Mesonic correlations at finite temperature and density in the Nambu-Jona-Lasinio model with a Polyakov loop // Phys. Rev. D. 2007. V. 75. P. 065004.
Phases of QCD: Lattice thermodynamics and a field theoretical model // Phys. Rev. D. 2006. V. 73. P. 014019.
Thermodynamics in NJL-like models // arXiv:1102.1813.
Hadronisation cross-sections at the chiral phase transition of a quark plasma // Phys. Lett. B. 1994. V. 337. P. 30-36.
Sigma Decay at Finite Temperature and Density // Chin. Phys. Lett. 2001. V. 18. P. 344-346; arXiv:nucl-th/0008041.
BES R measurements and $J/\psi$ decays // hep-ex/0104050.
Pion and sigma meson properties in a relativistic quark model // Phys. Rev. D. 2003. V. 68. P. 014011 \[arXiv:hep-ph/0304031\] // 2002. Phys. Rev. D. V. 65. 097505 \[arXiv:hep-ph/0112025\].
Estimating sigma meson couplings from $D\to3\pi$ decays // Phys. Rev. D. 2001. V. 63. P. 11750.
Experimental evidence for light and broad scalar resonance in $D\to\pi^-\pi^+\pi^+$ decay // Phys. Rev. Lett. 2001. V. 86. P. 770-774.
Statistical analysis of the sigma meson considered as two-pion resonance // arXiv:1003.3493.
Saito K., Tsushima K., Thomas A. W., Spectral change of sigma and omega mesons in a dense nuclear medium // arXiv:nucl-th/9811031.
--- abstract: 'The transformation of monolayer islands into bilayer islands as a first step of the overall two-dimensional to three-dimensional (2D-3D) transformation in the coherent Stranski-Krastanov mode of growth is studied for the cases of expanded and compressed overlayers. Compressed overlayers display a nucleation-like behavior: the energy accompanying the transformation process displays a maximum at some critical number of atoms, which is small for large enough values of the misfit, and then decreases gradually down to the completion of the transformation, non-monotonically due to the atomistics of the process. On the contrary, the energy change in expanded overlayers increases up to close to the completion of the transformation and then abruptly collapses with the disappearance of the monoatomic steps to produce low-energy facets. This kind of transformation takes place only in materials with strong interatomic bonding. Softer materials under tensile stress are expected to grow predominantly with a planar morphology until misfit dislocations are introduced, or to transform into 3D islands by a different mechanism. It is concluded that the coherent Stranski-Krastanov growth in expanded overlayers is much less probable than in compressed ones for kinetic reasons.' author: - José Emilio Prieto - Ivan Markov title: 'Quantum dot nucleation in strained-layer epitaxy: minimum-energy pathway in the stress-driven 2D-3D transformation' --- INTRODUCTION ============ Understanding the mechanism of transition from a planar two-dimensional (2D) thin film to a three-dimensional (3D) island morphology in the heteroepitaxy of highly mismatched materials is of crucial importance for the growth of self-assembled quantum dots in nanoscale technology. 
In the [*coherent*]{} Stranski-Krastanov (SK) mode of growth, dislocation-free 3D islands develop on top of a 2D wetting layer in order to relieve the misfit strain at the expense of an increase in surface energy.[@Eag] This mechanism of strain relaxation is established in a multitude of systems of technological importance for the manufacturing of optoelectronic devices.[@Bruce] Despite the huge number of studies devoted to the evolution of the cluster shape, many aspects, in particular the very beginning of the 2D-3D transition, still remain unclear. The first theoretical concept for the transition from a 2D layer to faceted 3D islands included a nucleation mechanism as a result of the interplay between the surface energy and the relaxation of the strain energy relative to the values of the wetting layer.[@Jerr] Irreversible 3D growth was predicted to begin above a critical volume, overcoming an energetic barrier whose height is inversely proportional to the fourth power of the lattice misfit. Mo [*et al.*]{} observed Ge islands representing elongated pyramids (“huts”) bounded by {105} facets inclined by 11.3$^{\circ}$ to the substrate.[@Mo] These clusters were thought to be a step in the pathway to the formation of larger islands with steeper side walls (“domes” or “barns”).[@Sut] Chen [*et al.*]{} studied the earliest stages of Ge islanding and found that Ge islands smaller than the hut clusters do not involve discrete {105} facets.[@Chen] This result was later confirmed by Vailionis [*et al.*]{} who observed the formation of 3-4 monolayers-high “prepyramids” with rounded bases which existed over a narrow range of Ge coverages in the beginning of the 2D-3D transformation.[@Vai] Sutter and Lagally[@Sut2] assumed that faceted, low-misfit alloy 3D islands can result from morphological instabilities (ripples) that are inherent to strained films,[@AT; @G; @S] thus suggesting that 3D islands can be formed without the necessity to overcome a nucleation barrier.
Similar views were simultaneously expressed by Tromp [*et al.*]{}[@Tromp] Tersoff [*et al.*]{} developed further this idea suggesting that the transition from the initial smooth “prepyramids” to faceted pyramids can be explained by assuming that the polar diagram of SiGe alloy islands allows the existence of all orientations vicinal to (001) with the first facet being {105}.[@Jerry1] In order to explain the experimental observation that SiGe alloy films roughen only under compressive stresses larger than a critical value of 1.4 %, Xie [*et al.*]{} assumed that the smallest 3D islands have stepped rather than faceted side surfaces.[@Xie1] They noted that the steps on the SiGe(001) vicinals are under tensile stress and their energy of formation is lowered by the compressive misfit but increased by the tensile strain \[see also the discussion in Refs. () and ()\]. As a result, the step formation and in turn the roughening are favored by the compressive misfit. It is worth noting that a barrierless evolution of stepped islands was predicted by Sutter and Lagally under the assumption that the slope of the side walls of the stepped islands increases continuously from zero to 11.3$^{\circ}$.[@Sut2] Priester and Lannoo suggested that 2D islands of monolayer height appear as precursors of the 3D islands.[@Pri] In addition, it was established that the minimum-energy pathway of the 2D-3D transition has to consist of a series of intermediate states with thicknesses increasing in monolayer steps and which are stable in separate intervals of volume. 
The first step in this transformation should be the rearrangement of monolayer islands into bilayer islands.[@Kor; @Prieto] Khor and Das Sarma found by Monte Carlo simulations that during the rearrangement, the material for the bilayer island comes almost completely from the original monolayer island, the bulk of the material for the three-layer island comes from the original two-layer island, etc.[@Khor] Moison [*et al.*]{} reported that the coverage suddenly decreases from about 1.75 ML to 1.2 ML when 3D InAs islands begin to form on GaAs.[@Moi] The same phenomenon has been observed by Shklyaev [*et al.*]{} in the case of Ge/Si(111).[@Shkl] These observations suggest a process of rearrangement as mentioned above. Voigtländer and Zinner noted that 3D Ge islands have been observed in the same locations on the Si(111) surface where 2D islands locally exceeded the critical thickness of the wetting layer of two bilayers.[@Voi] One-monolayer-thick InAs islands were suggested to act as precursors for the formation of thicker structures on GaAs.[@Polimeni] The simultaneous presence of stable one-, two-, three- or four-monolayer-thick islands has been observed in heteroepitaxy of InAs on InP and GaAs.[@Rud1; @Rud2; @Col] In this paper we study the earliest stages of growth of thin films in the coherent (dislocation-free) Stranski-Krastanov mode. We consider the instability of the planar growth against clustering by focussing on the conservative (i.e., without further deposition) mono- to bilayer transformation as a first step of the overall 2D-3D transition, or the beginning of the formation of the “prepyramids” mentioned above. We find that this transformation is a true nucleation process in compressed overlayers, in the sense that a critical nucleus of the second layer is initially formed and then grows further up to the complete mono-bilayer transformation.
The energy associated with the transformation thus reaches a maximum and then follows an overall decreasing trend.[@CGB] This is not the case in expanded overlayers, where the energy tends to increase up to very close to the completion of the transformation and then steeply decreases at the very end. The main result of this study is that coherent Stranski-Krastanov growth in expanded overlayers is much less probable than in compressed ones. Model ===== We consider an atomistic model in $2+1$ dimensions. The 3D crystallites have [*fcc*]{} structure and (100) surface orientation, thus possessing the shape of truncated square pyramids. We found that monolayer-high elongated islands always have higher energy than square islands with the same number of atoms, as expected from the symmetry of the lattice and the isotropy of the interaction potentials (see below). The lattice misfit is the same in both orthogonal directions. We consider interactions only in the first coordination sphere; inclusion of further coordination spheres does not qualitatively alter the results. The choice of crystal lattice and interaction potential is more appropriate for the heteroepitaxy of (close-packed) metals on metals than for that of semiconductor materials. As a consequence, properties that depend crucially on the strong directional bonding characteristic of semiconductors cannot be addressed by our model. Some examples are: the dependence of the shape of GeSi/Si dots on volume [@Mo; @Sut; @Chen] as discussed above; the observation of lens-shaped [@Don; @Wal] and pyramidal [@Heyn] dots and even of coexistence of both types [@Bhatti] in InAs/GaAs, the other well-studied system (for a recent review see Ref.
) or the cases where the accommodation of the lattice misfit of a given material on different crystallographic faces of the same substrate takes place by other mechanisms (also found for InAs/GaAs), [@Joyce] where additional aspects such as the presence of different surface reconstructions affect the thermodynamical balance of surface energies as well as the diffusion kinetics and, as a consequence, the nucleation behaviour and the growth mode. As our aim is to study the “reversible" minimum-energy pathway of the transition from metastable states to the ground state of a given system, the exact particularities of the model are not likely to play a crucial role and we expect the same qualitative behavior for any crystal lattice, crystal shape and interatomic potential. We have performed atomistic calculations making use of a simple minimization procedure. The atoms interact through a pair potential whose anharmonicity can be varied by adjusting two constants $\mu $ and $\nu $ ($\mu > \nu $) that govern separately the repulsive and the attractive branches, respectively,[@Mar1; @Mar2] $$\begin{aligned} \label{potent} V(r) = V_{o}\Biggl[\frac{\nu }{\mu - \nu }e^{-\mu (r-b)} - \frac{\mu }{\mu - \nu }e^{-\nu (r-b)}\Biggr],\end{aligned}$$ where $b$ is the equilibrium atom separation. For $\mu = 2\nu $ the potential (\[potent\]) turns into the familiar Morse form. A static relaxation of the system is performed by allowing each atom to displace in the direction of the force, i.e., the negative gradient of the energy with respect to the atomic coordinates, in an iterative procedure until all the forces fall below some negligible cutoff value. As we were only interested in the 2D-3D transformation of isolated islands, the calculations were performed under the assumption that the substrate (the wetting layer) is rigid; the atoms there are separated by a distance $a$. The lattice misfit is thus given by $\varepsilon=(b-a)/a$.
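As a quick numerical illustration, the potential of eq. (\[potent\]) can be evaluated directly; the sketch below (Morse case $\mu = 2\nu$, with arbitrarily chosen parameter values) checks the depth $-V_0$ at $r = b$ and the steeper repulsive branch felt by compressed bonds:

```python
import math

def pair_potential(r, b=1.0, mu=12.0, nu=6.0, V0=1.0):
    """Anharmonic pair potential of eq. (1): mu and nu govern the
    repulsive and attractive branches; mu = 2*nu is the Morse case."""
    return V0 * (nu / (mu - nu) * math.exp(-mu * (r - b))
                 - mu / (mu - nu) * math.exp(-nu * (r - b)))

# Minimum of depth -V0 at the equilibrium separation r = b:
assert abs(pair_potential(1.0) + 1.0) < 1e-12
# Compressed bonds (r < b) climb the steeper repulsive branch:
assert (pair_potential(0.95) - pair_potential(1.0)
        > pair_potential(1.05) - pair_potential(1.0))
```

Static relaxation then amounts to iterating small atomic displacements along the negative energy gradient until all forces drop below the cutoff.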
![\[mech\] Schematic process for the evaluation of the activation energy of the monolayer-bilayer transformation. The initial state is a square monolayer island. The intermediate state is a monolayer island short of some number of atoms which are detached from the edges and placed in the second level. The final state is a truncated bilayer pyramid.](./fig1.eps){width="7.5cm"} For the study of the mechanism of mono-bilayer transformation, we assume the following imaginary model process:[@Stmar] atoms detach from the edges of monolayer islands, which are larger than the critical size for the mono- to bilayer transformation $N_{12}$ and thus unstable against bilayer islands, diffuse on top of them, aggregate and give rise to second-layer nuclei. These grow at the expense of the atoms detached from the edges of the lower islands. The process continues up to the moment when the upper island completely covers the lower-level island. To simulate this process, we assume an initial square monolayer island, detach atoms one by one from its edges and locate them on top and at the center of the ML island, building there structures as compact as possible (Fig. \[mech\]). The energy change associated with the process of transformation at a particular stage is given by the difference between the energy of the incomplete bilayer island and that of the initial monolayer island. 
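The imaginary transfer process lends itself to a rough numerical caricature. The toy energy below (an edge-creation cost from the square-island geometry minus a linear strain-relief term per transferred atom; both coefficients are invented for illustration and are not taken from the atomistic model) reproduces the nucleation-like shape of the transformation curve in compressed overlayers:

```python
import math

def toy_dG(n2, n0=841, gamma_edge=1.0, strain_relief=0.5):
    """Toy transformation energy after transferring n2 atoms to the
    second level: edge cost minus strain relief (illustrative only)."""
    d_edge = 4 * (math.sqrt(n0 - n2) + math.sqrt(n2) - math.sqrt(n0))
    return gamma_edge * d_edge - strain_relief * n2

curve = [toy_dG(n2) for n2 in range(1, 401)]
n_max = 1 + curve.index(max(curve))   # barrier at a small cluster size
assert max(curve) > 0 and n_max < 30
assert curve[-1] < 0                  # overall fall once past the barrier
```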
This is in fact a conservative version of the mechanism observed by Khor and Das Sarma in 1+1 dimensions.[@Khor] Results ======= Stability of Monolayer Islands ------------------------------ In our previous work in (1+1)D models, it was established that monolayer-high islands are stable against bilayer islands up to some critical volume or number of atoms $N_{12}$; in turn, bilayer islands are stable against trilayer islands up to another critical number $N_{23} > N_{12}$; etc.[@Kor; @Prieto] The mono-bilayer transformation was considered as the first step of the overall 2D-3D transformation and a critical misfit was determined from the misfit dependence of $N_{12}$. The latter was found to increase with decreasing misfit, diverging at a critical value $\varepsilon _{12}$. Above the critical misfit, the coherent Stranski-Krastanov mode is favored against the layer-by-layer growth followed by introduction of misfit dislocations. The opposite is true below the critical value. Whereas this critical behavior is clearly pronounced under compressive strain, it is much smoother in expanded overlayers. It is worth noting that the existence of a critical misfit was observed in a series of heteroepitaxial systems.[@Xie1; @Leo; @Wal; @Pinc] ![\[e12\] Total energy per atom of mono- and bilayer islands at (a) positive (+4.0 %) and (b) negative (-11.0 %) values of the misfit as a function of the total number of atoms. The potential is of the form given by eq. (\[potent\]) with $\mu=16$ and $\nu=14$.](./fig2.eps){width="8.5cm"} In the present work, using more realistic (2+1)D models, we found a larger difference in the behavior of expanded and compressed overlayers. Figure \[e12\] shows that, under tensile stress, the total energies of mono- and bilayer islands containing the same total number of atoms are much closer to each other than in compressed overlayers.
As will be shown below, this leads to the conclusion that even for $N \gg N_{12}$, determined as the crossing point of the curves corresponding to monolayer and bilayer islands in Fig. \[e12\], the probability of the 2D-3D transformation remains nearly equal to the probability of the reverse 3D-2D transformation. It turns out that the misfit dependence of $N_{12}$ is very sensitive to the value of the force constant $\gamma = \mu \nu V_0$ of the interatomic bonds, particularly in expanded overlayers (Fig. \[gamma\]). Decreasing $\mu$ and $\nu$ ($V_0$ is assumed equal to unity) in such a way that the ratio $\mu / \nu$ is kept constant (in this case equal to 8/7), shifts the intersection points $N_{12}$ to larger absolute values of the misfit. As a result, a critical size $N_{12}$ exists in compressed overlayers for practically all values of $\gamma $, whereas in overlayers under tensile stress, $N_{12}$ shifts to such large values of the misfit that it effectively disappears below some critical value of $\gamma $. ![\[gamma\] Critical island size $N_{12}$ (number of atoms) as a function of the lattice mismatch at different values of the force constant $\gamma = \mu \nu V_0$. Potentials of the form given by eq. (\[potent\]) were used, with $\mu / \nu$ = 8/7 ($V_0$ is taken equal to unity). As seen, coherent 3D islanding is favored in expanded overlayers only in “stiffer” materials.](./fig3.eps){width="8.5cm"} It was established that in the case of a force constant of an intermediate value ($\mu = 2\nu = 12$), $N_{12}$ disappears (the monolayer islands are always stable against bilayer islands), but $N_{13}, N_{14}, N_{23}...$ still exist. This points to a novel mechanism of 2D-3D transformation which differs from the consecutive formation of bilayer, trilayer, etc. islands, each from the previous one.
The new mechanism obviously consists of the formation and 2D growth of bilayer islands on top of the monolayer island, thus transforming the initial monolayer island directly into a trilayer island. At even smaller values of $\gamma$, the critical values $N_{13}, N_{14}$ etc. consecutively disappear, suggesting a generalized mechanism of 2D-3D transformation in which the monolayer islands transform into thicker islands. This multilayer 2D mechanism will be a subject of a separate study. In this paper we focus on the layer-by-layer 2D-3D transformation. ![\[N12\_2d\] Misfit dependence of the critical sizes $N_{12 }$ of mono- and bilayer islands with different shapes, given by different angles of the side walls: 60$^{\circ}$ and 30$^{\circ}$ ($\mu= 2 \nu = 12$). The critical misfit $\varepsilon_{12}$ is shown by the vertical dashed line. ](./fig4.eps){width="8.5cm"} We considered also the stability of monolayer islands against bilayer islands with a different slope of the side walls. It was found that $N_{12}$ is smaller if the slope of the side walls is the steepest one (60$^{\circ}$) for this lattice in comparison with flatter islands (Fig. \[N12\_2d\]). This is due to the fact that in crystals with steeper side walls, the strain relaxation is more efficient than in flatter islands. This is in contradiction with the experiments in semiconductor growth in which islands with side walls of a smaller slope than that of the first facet are initially observed. [@Chen; @Vai; @Xie1] In any case this means that we can exclude the flatter islands from our consideration. ![\[DeltaE\] Transformation curves representing the energy change in units of the bond energy $V_0$ as a function of the number of atoms in the upper level for (a) positive (+2.5 %) and (b) negative (-7.0 %) values of the misfit. 
The number of atoms in the initial monolayer island ($N_0 = 841 = 29\times29$) is chosen in such a way that the resulting truncated bilayer pyramid is complete ($21\times21 = 441$ atoms in the lower and $20\times20 = 400$ atoms in the upper level); $\mu = 2\nu = 36$.](./fig5.eps){width="8.5cm"} We conclude that for some reasonable degree of anharmonicity (e.g. $\mu = 2\nu$ in our model), monolayer islands become unstable against bilayer islands, making the 2D-3D transformation by the layer-by-layer mechanism possible only at strong enough interatomic bonding. Soft materials are expected either to grow with a planar morphology until misfit dislocations are introduced or to transform into 3D islands by a different, multilayer 2D mechanism. Mechanism of 2D-3D transformation --------------------------------- Figure \[DeltaE\] shows typical transformation curves of the energy change as a function of the number of atoms in the upper island for (a) positive and (b) negative misfits. It is immediately seen that in a compressed overlayer, the transformation curve for $\Delta G$ has the typical shape of a nucleus formation curve: it displays a maximum $\Delta G_{max}$ for a cluster consisting of a small number of atoms $n_{max}$ and then decreases beyond this size up to the completion of the transformation. The atomistics of the transfer process (i.e. the completion of rows in the upper level and their depletion in the lower one) is responsible for the jumps (the non-monotonic behaviour) in the curve. ![\[height\] Height of the energetic barriers in units of $V_0$ as a function of the absolute value of the lattice misfit. The figures at each point show the number of atoms in the critical nucleus $n_{max}$. The initial island size was 29$\times$29 = 841 atoms.
The round symbols were calculated for $\mu=2\nu=36$, the squares for $\mu=2\nu=12$.](./fig6.eps){width="8.5cm"} In the case of expanded overlayers, the energy change increases up to a large number of atoms (again non-monotonically due to the atomistics) and then abruptly decreases at the end of the transformation. No true maximum is displayed. The energy change becomes negative only after the transfer of the last several atoms. Comparing the largest value of the energy with the energy at the transformation completion leads to the conclusion that the probabilities of the direct and reverse transformations are nearly equal. ![\[n12dg\] Misfit dependence of the critical island size $N_{12}$, the critical nucleus size $n_{max}$, (both expressed in number of atoms), and the nucleation barrier height $\Delta G_{max}$ (in units of $V_0$) for compressed overlayers and $\mu=2\nu=12$. The last two magnitudes were computed for islands of an initial size of 20$\times$20 = 400 atoms.](./fig7.eps){width="8.5cm"} Figure \[height\] depicts the evolution of the height of the barrier $\Delta G_{max}$ as a function of misfit (in expanded overlayers, this is the highest value reached before the collapse of the energy). The figures at each point show the number of atoms in the cluster at the maximum of the transformation curve. As seen, in the case of compressed overlayers, $\Delta G_{max}$ decreases steeply with increasing misfit in a way similar to the decrease of the work required for nucleus formation with increasing supersaturation in the classical theory of nucleation.[@CGB] Assuming a dependence of the kind $\Delta G = K\varepsilon ^{-n}$ where $K$ is a constant proportional to the Young modulus (or the force constant $\gamma $) and $\varepsilon $ is the lattice misfit, we found $n = 4.29$ for $\mu = 12, \nu = 6$, and $n = 4.75$ for $\mu = 36, \nu = 18$. 
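The exponent $n$ quoted above can be extracted from tabulated $(\varepsilon, \Delta G_{max})$ pairs by a least-squares fit in log-log coordinates; a minimal sketch with invented sample data that roughly follows an $\varepsilon^{-4.25}$ law (the real input would come from the calculations behind Fig. \[height\]):

```python
import math

# Hypothetical (misfit, barrier) pairs, for illustration only:
data = [(0.020, 1.80), (0.025, 0.69), (0.030, 0.32), (0.040, 0.095)]

# Fit Delta G = K * eps**(-n)  <=>  ln(DG) = ln(K) - n * ln(eps)
xs = [math.log(e) for e, _ in data]
ys = [math.log(g) for _, g in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
n_exp = -slope
print(f"fitted exponent n = {n_exp:.2f}")
assert 4.0 < n_exp < 4.5
```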
It is worth noting that assuming 3D nucleation on top of the wetting layer, Grabow and Gilmer predicted a value $n = 4$ for small misfits (large nuclei) on the assumption that $\Delta G_{max}$ is inversely proportional to the square of the supersaturation, which in turn is proportional to the square of the lattice misfit.[@GG] Note that the same exponent of four was obtained also by Tersoff and LeGoues.[@Jerr] Obviously, in our case the exponent $n$ is a complicated function of the force constant in the interatomic bonds but the value of the exponent is of the same order. ![\[strains\] Variation of the in-plane strain energy and out-of-plane interaction energies during the mono-bilayer transformation process in (a) compressed ($\varepsilon = +2.5~\%$) and (b) expanded ($\varepsilon = -7.0~\%$) overlayers with $\mu=2\nu=36$. The initial sizes of the islands were 29$\times$29=841 atoms. The insets show the curves at the beginning of the transformation with enlarged scales.](./fig8.eps){width="8.5cm"} The misfit behavior of the critical nucleus size $n_{max}$ is also similar to that found in the theory of nucleation. The nucleus size decreases with increasing misfit, eventually reaching a single atom, just as the nucleus size does with increasing supersaturation in nucleation theory.[@CGB] It is interesting to note that the critical nuclei always contain a number of atoms which [*exceeds*]{} the size of a compact cluster by one atom, and are [*not*]{} one atom short of being a compact cluster, as might be expected from overly simplistic energetic considerations on the basis of bond-counting arguments. The reason is that the highest departure from compactness giving the maximum of the $\Delta G$ curve is achieved when the additional atom creates two kink positions for the attachment of the next row of atoms. On the contrary, in expanded overlayers, the number of atoms at which the transformation curve reaches its highest value does not depend on the misfit (Fig.
\[height\]), demonstrating the non-nucleation behavior of the process. This number is roughly equal to the number of atoms which completes the upper level minus the number of atoms required to build the last four edge rows of the upper level in order to produce four facets. A special feature of the above results is that the barriers for expanded overlayers are in general larger than those for compressed overlayers. Bearing in mind that the typical time needed for the transformation to occur is inversely proportional to $\exp(- \Delta G_{max}/kT)$, we have to expect much longer times for the occurrence of the 2D-3D transformation in expanded overlayers as compared with compressed ones, particularly at larger values of the misfit. Thus limitations of kinetic origin (astronomically long times for second-layer nucleation) are expected in expanded overlayers and at small misfits in compressed overlayers. Figure \[n12dg\] compares the misfit dependences of the critical island size $N_{12}$, the critical 2nd-layer nucleus size $n_{max}$ and the height of the nucleation barrier $\Delta G_{max}$ for compressed overlayers. The three curves display a similar behavior, increasing steeply with decreasing misfit, with $N_{12}$ showing a critical behavior at the critical misfit $\varepsilon _{12}$. As will be discussed below, the 2D-3D transformation will be inhibited for kinetic reasons at values of the misfit not sufficiently larger than $\varepsilon _{12}$. Discussion ========== In compressed overlayers the atoms interact with their in-plane neighbours through the steeper repulsive branch of the interatomic potential. This means that compressed overlayers are effectively “stiffer” than expanded overlayers. Then compressed pseudomorphic overlayers will contain a larger amount of elastic energy than expanded pseudomorphic ones of the same thickness and bonding strength.
Another consequence is that the accumulation of strain energy with thickness in compressed overlayers will be steeper than in expanded overlayers. In other words, on both sides of the critical size $N_{12}$, the strain energy of a bilayer island differs considerably more from that of a monolayer island than it does in the expanded case, as seen in Fig. \[e12\]. This is the reason why expanded overlayers require greater force constants in order for the monolayer islands to become unstable against bilayer islands. This is also the reason why the critical sizes $N_{12}$ are larger in expanded than in compressed overlayers of the same force constant as seen in Fig. \[gamma\]. Figure \[DeltaE\] illustrates the central result of our study. It shows the energy of transformation of compressed and expanded monolayer islands. In compressed overlayers the overall fall of the energy begins when a small cluster (three atoms in this particular case) is formed in the second level. This is a typical phase transition of first order: nucleation followed by irreversible growth.[@Stmar] In expanded overlayers the energy increases up to nearly the end of the transformation and then abruptly collapses. This collapse of the energy is connected with the transfer of the remaining last edge rows of atoms, which leads to the coalescence of the lower and upper steps to produce four side facets. In the particular transfer process considered in the calculation, there are two regions, one before the beginning of the collapse, the second between the two sub-collapses, where the total energy rises slightly; this is due to the energetic cost of repulsion between steps that are very close, separated only by a single atomic row. The reason for this “non-nucleation" behavior is the effectively weaker expanded bonds, which result in relatively close energies of the monolayer and bilayer islands.
With increasing size of the second-level cluster, the misfit strain is not as effectively relaxed as in the case of compressed islands and the collapse of the energy is due to the replacement of isolated repelling steps by low-energy facets. The different transformation behavior can be understood on the basis of the above considerations, accounting in addition for the finite size of the islands. During the transformation of the monolayer islands we should expect a relaxation of the in-plane strain, which is the driving force for the transformation, and an increase of the total energy of interaction between the lower level of the island and the substrate and between both levels of the island.[@Kor; @Prieto] The physics behind the process of mono-bilayer transformation is the same as behind the formation of 3D islands on the wetting layer. [@Jerr; @Duport] Formation and growth of new steps and relaxation of the strain stored in the monolayer island compete in the process. In addition, the steps in the first and second levels repel each other and the associated repulsion energy increases towards the completion of the transformation to disappear when the steps coalesce to give rise to microfacets. Epitaxial strain is relaxed at the island edges.[@Vill] Edge atoms are displaced from the bottoms of their respective potential troughs giving rise to relaxation of the bonds parallel to the surface (in-plane stress relaxation). The displaced atoms lose contact with the substrate atoms, which leads to an increase of the out-of-plane energy of interaction with the underlying substrate.[@Rolf; @JH] During the transformation, new edges on top of the initial monolayer island are formed and the total length of the edges increases as $\Delta L(N_2) = 4(\sqrt{N_0 - N_2}+\sqrt{N_2}-\sqrt{N_0})$ where $N_0$ and $N_2$ are the number of atoms in the initial monolayer island and the current number of transferred atoms in the second level, respectively.
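The edge-length balance entering this competition is simple to evaluate; a minimal sketch for the square-island geometry, using the $29\times29$ island of Fig. \[DeltaE\]:

```python
import math

def edge_length_change(n2, n0=841, a=1.0):
    """Change of the total edge length (in units of the lattice
    parameter a) after transferring n2 of the n0 atoms to the second
    level; square lower and upper islands are assumed."""
    return 4 * a * (math.sqrt(n0 - n2) + math.sqrt(n2) - math.sqrt(n0))

# The edge length grows from the very first transferred atom ...
assert edge_length_change(1) > 0
# ... and for the complete 441 + 400 truncated pyramid equals
# 4 * (21 + 20 - 29) = 48 lattice parameters:
assert abs(edge_length_change(400) - 48.0) < 1e-9
```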
Note that the energy of repulsion between the edges bounding the lower and upper islands is implicitly accounted for by both the in-plane and out-of-plane energies. Then a larger number of atoms are displaced from the bottom of their respective potential troughs during the transformation process and the total in-plane strain-relaxation energy decreases. Simultaneously, the out-of-plane interaction energy increases. Owing to the weaker attractive forces in expanded overlayers, only a small number of bonds close to the edges are relaxed. [@Prieto] Most of the bonds at the center of the islands are strained to fit the underlying wetting layer. As a result, the average relaxation in expanded islands is smaller than in compressed islands, where even bonds at the center of medium-sized islands are partly relaxed. In compressed overlayers, the decrease of the in-plane strain energy rapidly overcompensates the increase of the out-of-plane interaction energy which results in a nucleation-like transformation curve (Fig. \[strains\]). In expanded overlayers, the absolute value of the total in-plane strain energy is smaller than the out-of-plane interaction energy with the exception of the final stage when the monolayer-high steps disappear to produce facets of small surface energy. The typical time required for the appearance of a second-layer nucleus is inversely proportional to the nucleation frequency $\omega = S_{12}K\exp(- \Delta G_{max}/kT)$, where $S_{12} = a^2N_{12}$ is the area of the critical monolayer island and $K$ is the pre-exponential of the nucleation rate. As seen in Fig. \[height\], in the case of $\mu = 2\nu = 12$, the barrier height increases approximately 5 times in an interval of $\varepsilon$ of 2.5 % whereas the number of atoms in the critical nucleus $n_{max}$ increases nearly 70 times. 
For a greater force constant ($\mu = 2\nu = 36$), the increase of both $\Delta G_{max}$ and $n_{max}$ is even larger: 20 and 110 times, respectively, in a smaller misfit interval of about 1.5 %. The energy to break a first-neighbor bond, $V_0$, for most semiconductor materials is of the order of 2 to 2.5 eV (the enthalpy of evaporation is of the order of 4 to 5 eV). Assuming $N_{12}$ is of the order of 100 - 120 atoms, we could expect a mono-bilayer transformation to take place at misfits for which $\Delta G_{max}/kT < 15 - 20$ ($n_{max} \le 3$). The reason is that the pre-exponential $K$ in the 2D nucleation rate from vapor is usually of the order of $10^{20}$ cm$^{-2}$s$^{-1}$.[@CGB] Otherwise, due to the exponential dependence, times of the order of centuries would be required for second-layer nucleation.[@Dash] Thus, although in compressed overlayers second-layer nucleation can be expected for thermodynamic reasons at misfits above $\varepsilon _{12}$, a real 2D-3D transition can only take place at even larger misfits or higher temperatures for kinetic reasons. As the height of the transformation barriers in expanded overlayers is always greater than several times $V_0$, the mono-bilayer transformation should be strongly inhibited for kinetic reasons. We conclude that a layer-by-layer mechanism for the 2D-3D transformation is expected only in compressed overlayers at misfits sufficiently larger than $\varepsilon _{12}$. The reason is that the mechanism of the mono-bilayer transformation is nucleation-like due to the interplay between the relaxation of the in-plane strain, which is proportional to the total edge length, and the increase of the total edge energy and of the repulsion between the edges. The transformation curve in expanded overlayers shows a “non-nucleation" behavior characterized by an overall increase of the energy up to the stage when the single steps coalesce to produce low-energy facets. The latter is accompanied by a collapse of the energy.
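The kinetic estimate above can be checked by order-of-magnitude arithmetic; in the sketch below the island size, interatomic spacing and barrier values are illustrative assumptions, not results of the model:

```python
import math

def nucleation_time(dG_over_kT, n12=100, a_cm=4e-8, K=1e20):
    """Mean waiting time (s) for a second-layer nucleus,
    1 / (S12 * K * exp(-dG/kT)), with S12 = a**2 * N12 the area of the
    critical island and an assumed spacing a of ~4 Angstrom."""
    s12 = n12 * a_cm ** 2                      # island area in cm^2
    return 1.0 / (s12 * K * math.exp(-dG_over_kT))

# dG/kT = 15 gives sub-second waiting times ...
assert nucleation_time(15) < 1.0
# ... while a barrier of several V0 (say dG/kT = 60) exceeds centuries:
assert nucleation_time(60) > 100 * 365 * 24 * 3600
```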
The maximum energy is large and 2D-3D transformation of expanded overlayers is not expected for kinetic reasons even for materials with strong interatomic bonds. Softer materials are expected to grow with a planar morphology until misfit dislocations are introduced, or to transform into 3D islands by a different mechanism. J.E.P. gratefully acknowledges funding by the programme “Ramón y Cajal” of the Spanish Ministerio de Educación y Ciencia. D.J. Eaglesham and M. Cerullo, Phys. Rev. Lett. [**64**]{}, 1943 (1990). B.A. Joyce, P.C. Kelires, A.G. Naumovets, and D.D. Vvedensky, eds. [*Quantum Dots: Fundamentals, Applications and Frontiers*]{}, NATO Science Series, II. Mathematics, Physics and Chemistry - Vol. 190, (Springer, 2005). J. Tersoff and F.K. LeGoues, Phys. Rev. Lett. [**72**]{}, 3570 (1994). Y.-W. Mo, D.E. Savage, B.S. Swartzentruber, and M.G. Lagally, Phys. Rev. Lett. [**65**]{}, 1020 (1990). E. Sutter, P. Sutter, and J.E. Bernard, Appl. Phys. Lett. [**84**]{}, 2262 (2004). K.M. Chen, D.E. Jesson, S.J. Pennycook, T. Thundat, and R.J. Warmack, Phys. Rev. B [**56**]{}, R1700 (1997). A. Vailionis, B. Cho, G. Glass, P. Desjardins, D.G. Cahill, and J.E. Greene, Phys. Rev. Lett. [**85**]{}, 3672 (2000). P. Sutter and M.G. Lagally, Phys. Rev. Lett. [**84**]{}, 4637 (2000). R.J. Asaro and W.A. Tiller, Metal. Trans. [**3**]{}, 1789 (1972). M. Ya. Grinfeld, Sov. Phys. Dokl. [**31**]{}, 831 (1986). D. J. Srolovitz, Acta Metall. [**37**]{}, 621 (1989). R.M. Tromp, F.M. Ross, and M.C. Reuter, Phys. Rev. Lett. [**84**]{}, 4641 (2000). J. Tersoff, B.J. Spencer, A. Rastelli, and H. von Känel, Phys. Rev. Lett. [**89**]{}, 196104 (2002). Y.H. Xie, G.H. Gilmer, C. Roland, P.J. Silverman, S.K. Buratto, J.Y. Cheng, E.A. Fitzgerald, A.R. Kortan, S. Schuppler, M.A. Marcus, and P.H. Citrin, Phys. Rev. Lett. [**73**]{}, 3006 (1994). J. Tersoff, Phys. Rev. Lett. [**74**]{}, 4962 (1995). Y.H. Xie [*et al*]{}. Phys. Rev. Lett. [**74**]{}, 4963 (1995). C. Priester and M.
Lannoo, Phys. Rev. Lett. [**75**]{}, 93 (1995). E. Korutcheva, A.M. Turiel and I. Markov, Phys. Rev. B [**61**]{}, 16890 (2000). J.E. Prieto and I. Markov, Phys. Rev. B [**66**]{}, 073408 (2002). K.E. Khor and S. Das Sarma, Phys. Rev. B [**62**]{}, 16657 (2000). J.M. Moison, F. Houzay, F. Barthe, L. Leprince, E. André, and O. Vatel, Appl. Phys. Lett. [**64**]{}, 196 (1994). A. Shklyaev, M. Shibata, and M. Ichikawa, Surf. Sci. [**416**]{}, 192 (1998). B. Voigtländer and A. Zinner, Appl. Phys. Lett. [**63**]{}, 3055 (1993). A. Polimeni, A. Patane, M. Capizzi, F. Martelli, L. Nasi, and G. Salviati, Phys. Rev. B [**53**]{}, R4213 (1996). A. Rudra, R. Houdré, J.F. Carlin, and M. Ilegems, J. Cryst. Growth [**136**]{}, 278 (1994). R. Houdré, J.F. Carlin, A. Rudra, J. Ling, and M. Ilegems, Superlattices and Microstructures [**13**]{}, 67 (1993). M. Colocci, F. Bogani, L. Carraresi, R. Mattolini, A. Bosacchi, S. Franchi, P. Frigeri, M. Rosa-Clot, and S. Taddei, Appl. Phys. Lett. [**70**]{}, 3140 (1997). I. Markov, [*Crystal Growth for Beginners*]{}, 2nd edition, (World Scientific, 2003). D.W. Pashley, J.H. Neave, and B.A. Joyce, Surf. Sci. [**476**]{}, 35 (2001). T. Walther, A.G. Cullis, D.J. Norris, and M. Hopkinson, Phys. Rev. Lett. [**86**]{}, 2381 (2001). K. Zhang, Ch. Heyn, W. Hansen, Th. Schmidt, and J. Falta, Appl. Phys. Lett. [**76**]{}, 2229 (2000). A.S. Bhatti, M. Grassi Alessi, M. Capizzi, P. Frigeri and S. Franchi, Phys. Rev. B [**60**]{}, 2592 (1999). B.A. Joyce and D.D. Vvedensky, in [*Atomistic Aspects of Epitaxial Growth*]{}, ed. by M. Kotrla, N. I. Papanicolaou, D. D. Vvedensky and L. T. Wille, NATO Science Series II. Mathematics, Physics and Chemistry, Vol. 65, (Kluwer, 2002) p. 301. B.A. Joyce, J.L. Sudijono, J.G. Belk, H. Yamaguchi, X.M. Zhang, H.T. Dobbs, A. Zangwill, D.D. Vvedensky, and T.S. Jones, Jpn. J. Appl. Phys. [**36**]{}, 4111 (1997). I. Markov and A. Trayanov, J. Phys. C [**21**]{}, 2475 (1988). I. Markov, Phys. Rev.
B [**48**]{}, R14016 (1993). S. Stoyanov and I. Markov, Surf. Sci. [**116**]{}, 313 (1982). D. Leonard, M. Krishnamurthy, C.M. Reaves, S.P. Denbaars, and P.M. Petroff, Appl. Phys. Lett. [**63**]{}, 3203 (1993). M. Pinczolits, G. Springholz and G. Bauer, Appl. Phys. Lett. [**73**]{}, 250 (1998). M. Grabow and G. Gilmer, Surf. Sci. [**194**]{}, 333 (1988). C. Duport, C. Priester, and J. Villain, in [*Morphological Organization in Epitaxial Growth and Removal*]{}, Vol. 14 of [*Directions in Condensed Matter Physics*]{}, ed. by Z. Zhang and M. Lagally (World Scientific, Singapore, 1998). J. Villain, J. Crystal Growth [**275**]{}, e2307 (2005). R. Niedermayer, Thin Films [**1**]{}, 25 (1968). W.A. Jesser and J. H. van der Merwe, Surf. Sci. [**31**]{}, 229 (1972). J.G. Dash, Phys. Rev. B [**15**]{}, 3136 (1977).
{ "pile_set_name": "ArXiv" }
--- abstract: | Muons produced in atmospheric cosmic ray showers account for the by far dominant part of the event yield in large-volume underground particle detectors. The IceCube detector, with an instrumented volume of about a cubic kilometer, has the potential to conduct unique investigations on atmospheric muons by exploiting the large collection area and the possibility to track particles over a long distance. Through detailed reconstruction of energy deposition along the tracks, the characteristics of muon bundles can be quantified, and individual particles of exceptionally high energy identified. The data can then be used to constrain the cosmic ray primary flux and the contribution to atmospheric lepton fluxes from prompt decays of short-lived hadrons. In this paper, techniques for the extraction of physical measurements from atmospheric muon events are described and first results are presented. The multiplicity spectrum of TeV muons in cosmic ray air showers for primaries in the energy range from the knee to the ankle is derived and found to be consistent with recent results from surface detectors. The single muon energy spectrum is determined up to PeV energies and shows a clear indication for the emergence of a distinct spectral component from prompt decays of short-lived hadrons. The magnitude of the prompt flux, which should include a substantial contribution from light vector meson di-muon decays, is consistent with current theoretical predictions. The variety of measurements and high event statistics can also be exploited for the evaluation of systematic effects. In the course of this study, internal inconsistencies in the zenith angle distribution of events were found which indicate the presence of an unexplained effect outside the currently applied range of detector systematics. The underlying cause could be related to the hadronic interaction models used to describe muon production in air showers. address: - 'III. 
Physikalisches Institut, RWTH Aachen University, D-52056 Aachen, Germany' - 'School of Chemistry & Physics, University of Adelaide, Adelaide SA, 5005 Australia' - 'Dept. of Physics and Astronomy, University of Alaska Anchorage, 3211 Providence Dr., Anchorage, AK 99508, USA' - 'CTSPS, Clark-Atlanta University, Atlanta, GA 30314, USA' - 'School of Physics and Center for Relativistic Astrophysics, Georgia Institute of Technology, Atlanta, GA 30332, USA' - 'Dept. of Physics, Southern University, Baton Rouge, LA 70813, USA' - 'Dept. of Physics, University of California, Berkeley, CA 94720, USA' - 'Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA' - 'Institut für Physik, Humboldt-Universität zu Berlin, D-12489 Berlin, Germany' - 'Fakultät für Physik & Astronomie, Ruhr-Universität Bochum, D-44780 Bochum, Germany' - 'Physikalisches Institut, Universität Bonn, Nussallee 12, D-53115 Bonn, Germany' - 'Université Libre de Bruxelles, Science Faculty CP230, B-1050 Brussels, Belgium' - 'Vrije Universiteit Brussel, Dienst ELEM, B-1050 Brussels, Belgium' - 'Dept. of Physics, Chiba University, Chiba 263-8522, Japan' - 'Dept. of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch, New Zealand' - 'Dept. of Physics, University of Maryland, College Park, MD 20742, USA' - 'Dept. of Physics and Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210, USA' - 'Dept. of Astronomy, Ohio State University, Columbus, OH 43210, USA' - 'Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen, Denmark' - 'Dept. of Physics, TU Dortmund University, D-44221 Dortmund, Germany' - 'Dept. of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA' - 'Dept. 
of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2E1' - 'Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, D-91058 Erlangen, Germany' - 'Département de Physique Nucléaire et Corpusculaire, Université de Genève, CH-1211 Genève, Switzerland' - 'Dept. of Physics and Astronomy, University of Gent, B-9000 Gent, Belgium' - 'Dept. of Physics and Astronomy, University of California, Irvine, CA 92697, USA' - 'Dept. of Physics and Astronomy, University of Kansas, Lawrence, KS 66045, USA' - 'Dept. of Astronomy, University of Wisconsin, Madison, WI 53706, USA' - 'Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin, Madison, WI 53706, USA' - 'Institute of Physics, University of Mainz, Staudinger Weg 7, D-55099 Mainz, Germany' - 'Université de Mons, 7000 Mons, Belgium' - 'Technische Universität München, D-85748 Garching, Germany' - 'Bartol Research Institute and Dept. of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA' - 'Department of Physics, Yale University, New Haven, CT 06520, USA' - 'Dept. of Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP, UK' - 'Dept. of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, USA' - 'Physics Department, South Dakota School of Mines and Technology, Rapid City, SD 57701, USA' - 'Dept. of Physics, University of Wisconsin, River Falls, WI 54022, USA' - 'Oskar Klein Centre and Dept. of Physics, Stockholm University, SE-10691 Stockholm, Sweden' - 'Dept. of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800, USA' - 'Dept. of Physics, Sungkyunkwan University, Suwon 440-746, Korea' - 'Dept. of Physics, University of Toronto, Toronto, Ontario, Canada, M5S 1A7' - 'Dept. of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA' - 'Dept. of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802, USA' - 'Dept. 
of Physics, Pennsylvania State University, University Park, PA 16802, USA' - 'Dept. of Physics and Astronomy, Uppsala University, Box 516, S-75120 Uppsala, Sweden' - 'Dept. of Physics, University of Wuppertal, D-42119 Wuppertal, Germany' - 'DESY, D-15735 Zeuthen, Germany' author: - 'M. G. Aartsen' - 'K. Abraham' - 'M. Ackermann' - 'J. Adams' - 'J. A. Aguilar' - 'M. Ahlers' - 'M. Ahrens' - 'D. Altmann' - 'T. Anderson' - 'M. Archinger' - 'C. Argüelles' - 'T. C. Arlen' - 'J. Auffenberg' - 'X. Bai' - 'S. W. Barwick' - 'V. Baum' - 'R. Bay' - 'J. J. Beatty' - 'J. Becker Tjus' - 'K.-H. Becker' - 'E. Beiser' - 'S. BenZvi' - 'P. Berghaus' - 'D. Berley' - 'E. Bernardini' - 'A. Bernhard' - 'D. Z. Besson' - 'G. Binder' - 'D. Bindig' - 'M. Bissok' - 'E. Blaufuss' - 'J. Blumenthal' - 'D. J. Boersma' - 'C. Bohm' - 'M. Börner' - 'F. Bos' - 'D. Bose' - 'S. Böser' - 'O. Botner' - 'J. Braun' - 'L. Brayeur' - 'H.-P. Bretz' - 'A. M. Brown' - 'N. Buzinsky' - 'J. Casey' - 'M. Casier' - 'E. Cheung' - 'D. Chirkin' - 'A. Christov' - 'B. Christy' - 'K. Clark' - 'L. Classen' - 'S. Coenders' - 'D. F. Cowen' - 'A. H. Cruz Silva' - 'J. Daughhetee' - 'J. C. Davis' - 'M. Day' - 'J. P. A. M. de André' - 'C. De Clercq' - 'H. Dembinski' - 'S. De Ridder' - 'P. Desiati' - 'K. D. de Vries' - 'G. de Wasseige' - 'M. de With' - 'T. DeYoung' - 'J. C. D[í]{}az-Vélez' - 'J. P. Dumm' - 'M. Dunkman' - 'R. Eagan' - 'B. Eberhardt' - 'T. Ehrhardt' - 'B. Eichmann' - 'S. Euler' - 'P. A. Evenson' - 'O. Fadiran' - 'S. Fahey' - 'A. R. Fazely' - 'A. Fedynitch' - 'J. Feintzeig' - 'J. Felde' - 'K. Filimonov' - 'C. Finley' - 'T. Fischer-Wasels' - 'S. Flis' - 'T. Fuchs' - 'M. Glagla' - 'T. K. Gaisser' - 'R. Gaior' - 'J. Gallagher' - 'L. Gerhardt' - 'K. Ghorbani' - 'D. Gier' - 'L. Gladstone' - 'T. Glüsenkamp' - 'A. Goldschmidt' - 'G. Golup' - 'J. G. Gonzalez' - 'D. Góra' - 'D. Grant' - 'P. Gretskov' - 'J. C. Groh' - 'A. Gro[ß]{}' - 'C. Ha' - 'C. Haack' - 'A. Haj Ismail' - 'A. Hallgren' - 'F. Halzen' - 'B. Hansmann' - 'K. 
Hanson' - 'D. Hebecker' - 'D. Heereman' - 'K. Helbing' - 'R. Hellauer' - 'D. Hellwig' - 'S. Hickford' - 'J. Hignight' - 'G. C. Hill' - 'K. D. Hoffman' - 'R. Hoffmann' - 'K. Holzapfel' - 'A. Homeier' - 'K. Hoshina' - 'F. Huang' - 'M. Huber' - 'W. Huelsnitz' - 'P. O. Hulth' - 'K. Hultqvist' - 'S. In' - 'A. Ishihara' - 'E. Jacobi' - 'G. S. Japaridze' - 'K. Jero' - 'M. Jurkovic' - 'B. Kaminsky' - 'A. Kappes' - 'T. Karg' - 'A. Karle' - 'M. Kauer' - 'A. Keivani' - 'J. L. Kelley' - 'J. Kemp' - 'A. Kheirandish' - 'J. Kiryluk' - 'J. Kläs' - 'S. R. Klein' - 'G. Kohnen' - 'R. Koirala' - 'H. Kolanoski' - 'R. Konietz' - 'A. Koob' - 'L. Köpke' - 'C. Kopper' - 'S. Kopper' - 'D. J. Koskinen' - 'M. Kowalski' - 'K. Krings' - 'G. Kroll' - 'M. Kroll' - 'J. Kunnen' - 'N. Kurahashi' - 'T. Kuwabara' - 'M. Labare' - 'J. L. Lanfranchi' - 'M. J. Larson' - 'M. Lesiak-Bzdak' - 'M. Leuermann' - 'J. Leuner' - 'J. Lünemann' - 'J. Madsen' - 'G. Maggi' - 'K. B. M. Mahn' - 'R. Maruyama' - 'K. Mase' - 'H. S. Matis' - 'R. Maunu' - 'F. McNally' - 'K. Meagher' - 'M. Medici' - 'A. Meli' - 'T. Menne' - 'G. Merino' - 'T. Meures' - 'S. Miarecki' - 'E. Middell' - 'E. Middlemas' - 'J. Miller' - 'L. Mohrmann' - 'T. Montaruli' - 'R. Morse' - 'R. Nahnhauer' - 'U. Naumann' - 'H. Niederhausen' - 'S. C. Nowicki' - 'D. R. Nygren' - 'A. Obertacke' - 'A. Olivas' - 'A. Omairat' - 'A. O’Murchadha' - 'T. Palczewski' - 'H. Pandya' - 'L. Paul' - 'J. A. Pepper' - 'C. Pérez de los Heros' - 'C. Pfendner' - 'D. Pieloth' - 'E. Pinat' - 'J. Posselt' - 'P. B. Price' - 'G. T. Przybylski' - 'J. Pütz' - 'M. Quinnan' - 'L. Rädel' - 'M. Rameez' - 'K. Rawlins' - 'P. Redl' - 'R. Reimann' - 'M. Relich' - 'E. Resconi' - 'W. Rhode' - 'M. Richman' - 'S. Richter' - 'B. Riedel' - 'S. Robertson' - 'M. Rongen' - 'C. Rott' - 'T. Ruhe' - 'D. Ryckbosch' - 'S. M. Saba' - 'L. Sabbatini' - 'H.-G. Sander' - 'A. Sandrock' - 'J. Sandroos' - 'S. Sarkar' - 'K. Schatto' - 'F. Scheriau' - 'M. Schimp' - 'T. Schmidt' - 'M. Schmitz' - 'S. Schoenen' - 'S. 
Schöneberg' - 'A. Schönwald' - 'A. Schukraft' - 'L. Schulte' - 'D. Seckel' - 'S. Seunarine' - 'R. Shanidze' - 'M. W. E. Smith' - 'D. Soldin' - 'G. M. Spiczak' - 'C. Spiering' - 'M. Stahlberg' - 'M. Stamatikos' - 'T. Stanev' - 'N. A. Stanisha' - 'A. Stasik' - 'T. Stezelberger' - 'R. G. Stokstad' - 'A. Stö[ß]{}l' - 'E. A. Strahler' - 'R. Ström' - 'N. L. Strotjohann' - 'G. W. Sullivan' - 'M. Sutherland' - 'H. Taavola' - 'I. Taboada' - 'S. Ter-Antonyan' - 'A. Terliuk' - 'G. Te[š]{}ić' - 'S. Tilav' - 'P. A. Toale' - 'M. N. Tobin' - 'D. Tosi' - 'M. Tselengidou' - 'A. Turcati' - 'E. Unger' - 'M. Usner' - 'S. Vallecorsa' - 'N. van Eijndhoven' - 'J. Vandenbroucke' - 'J. van Santen' - 'S. Vanheule' - 'J. Veenkamp' - 'M. Vehring' - 'M. Voge' - 'M. Vraeghe' - 'C. Walck' - 'M. Wallraff' - 'N. Wandkowsky' - 'Ch. Weaver' - 'C. Wendt' - 'S. Westerhoff' - 'B. J. Whelan' - 'N. Whitehorn' - 'C. Wichary' - 'K. Wiebe' - 'C. H. Wiebusch' - 'L. Wille' - 'D. R. Williams' - 'H. Wissing' - 'M. Wolf' - 'T. R. Wood' - 'K. Woschnagg' - 'D. L. Xu' - 'X. W. Xu' - 'Y. Xu' - 'J. P. Yáñez' - 'G. Yodh' - 'S. Yoshida' - 'P. Zarzhitsky' - 'M. Zoll' title: Characterization of the Atmospheric Muon Flux in IceCube --- atmospheric muons ,cosmic rays ,prompt leptons Introduction ============ IceCube is a particle detector with an instrumented volume of about one cubic kilometer, located at the geographic South Pole [@Karle:2014bta]. The experimental setup consists of 86 cables (“strings”), each supporting 60 digital optical modules (“DOMs”). Every DOM contains a photomultiplier tube and the electronics required to handle data acquisition, digitization and transmission. The main active part of the detector is deployed at a depth of 1450 to 2450 meters below the surface of the ice, which in turn lies at an altitude of approximately 2830 meters above sea level. 
The volume detector is supplemented by the surface array IceTop, formed by 81 pairs of tanks filled with water that has frozen solid under the ambient conditions. The main scientific target of IceCube is the search for astrophysical neutrinos. At the time of design, the most likely path to discovery was expected to be the detection of upward-going tracks caused by Earth-penetrating muon neutrinos interacting shortly before reaching the detector volume. All DOMs were consequently oriented in the downward direction, such that Cherenkov light emission from charged particles along muon tracks can be registered after minimal scattering in the surrounding ice. The first indication for a neutrino signal exceeding the expected background from cosmic ray-induced atmospheric fluxes came in the form of two particle showers, each with a visible energy of approximately 1 PeV [@Aartsen:2013bka]. Detailed analysis of their directionality strongly indicated an origin from above the horizon. The result strengthened the case for the astrophysical nature of the events, since no accompanying muons were seen, as would be expected for neutrinos produced in air showers. This serendipitous detection motivated a dedicated search for high-energy neutrinos interacting within the detector volume, which led first to a strong indication [@Aartsen:2013jdh] and later, after evaluating data taken during three full years of detector operation, to the first discovery of an astrophysical neutrino flux [@Aartsen:2014gkd]. In each case, the decisive contribution to the event sample came from particle showers pointing downward. Despite the large amount of overhead material, the deep IceCube detector is triggered at a rate of approximately 3000 $\textrm{s}^{-1}$ by muons produced in cosmic ray-induced air showers. Formerly regarded simply as an irksome form of background, these have since proved to be an indispensable tool to tag and exclude atmospheric neutrino events in the astrophysical discovery region [@Gaisser:2014bja].
Apart from their application in neutrino searches, muons can be used for detector verification and a wide range of physics analyses. Examples are the measurement of cosmic ray composition and flux in coincidence with IceTop [@IceCube:2012vv], the first detection of an anisotropy in the cosmic ray arrival direction in the southern hemisphere [@Abbasi:2010mf; @Abbasi:2011ai; @Abbasi:2011zka], the investigation of QCD processes producing high-$p_{\rm{t}}$ muons [@Abbasi:2012kza] and the evaluation of track reconstruction accuracy by taking advantage of the shadowing of cosmic rays by the moon [@Aartsen:2013zka]. Remaining to be demonstrated is the possibility to develop a comprehensive and consistent picture of atmospheric muon physics in IceCube. The goal of this paper is to outline how this could be accomplished, illustrate the scientific potential and discuss consequences of the actual measurement for the understanding of detector systematics.

Physics
=======

Cosmic Rays in the IceCube Energy Range
---------------------------------------

![Atmospheric muon event yield in IceCube as a function of primary type, simulated with CORSIKA [@corsika]. The cosmic ray flux was weighted according to the H3a model [@Gaisser:2013bla].[]{data-label="fig-allev"}](allev_eprim.eps){width="220pt"}

The energy range of cosmic ray primaries producing atmospheric muons in IceCube is limited by the minimum muon energy required to penetrate the ice at the low end, and by the steeply falling cosmic ray flux at the high end. Predicted event yields are shown in Fig. \[fig-allev\]. Since the muon energy is related to the energy per nucleon $E_{\rm{prim}}/A$, threshold energies increase in proportion to the mass of the primary nucleus. The energy range of atmospheric muon events in IceCube covers more than six orders of magnitude. Neutrinos, not attenuated by the material surrounding the detector, can reach even lower.
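The low-energy threshold mentioned above can be estimated from the standard average-loss approximation $\langle dE/dX \rangle = a + bE$. The sketch below is purely illustrative: the loss parameters $a$ and $b$ and the depth-to-water-equivalent conversion are generic textbook values for ice, not numbers taken from this paper.

```python
import math

# Average muon energy loss: <dE/dX> = a + b*E, with X the slant depth in
# meters of water equivalent (mwe). a and b are textbook-level values for
# ice and are assumptions, not parameters from this analysis.
A_ION = 0.26    # GeV/mwe, quasi-constant ionization loss
B_RAD = 3.6e-4  # 1/mwe, radiative (stochastic) loss coefficient

def min_surface_energy(depth_mwe):
    """Minimum surface energy for a muon to survive to depth_mwe.
    Integrating dE/dX = -(a + b*E) gives E_min = (a/b) * (exp(b*X) - 1)."""
    return (A_ION / B_RAD) * math.expm1(B_RAD * depth_mwe)

# Top of the instrumented volume: ~1450 m of ice (~0.92 mwe per meter).
print(f"{min_surface_energy(1450 * 0.92):.0f} GeV")  # several hundred GeV
```

With these assumed values, a vertical muon needs several hundred GeV at the surface to reach the top of the detector, which is why primaries below roughly a TeV per nucleon contribute little to the muon event yield.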
With a ratio between lepton and parent nucleon energy of about one order of magnitude [@tomsbook], the lowest primary energies relevant for neutrinos in IceCube fall in the region around 100 GeV. Coverage of this vast range of energies by specialized detectors varies considerably, and overlapping measurements are not always consistent. At energies well below 1 TeV, important for the production of atmospheric neutrinos in oscillation measurements [@Aartsen:2013jza], both PAMELA [@Adriani:2011cu] and AMS-02 [@Aguilar:2015ooa] find a clear break in the proton spectrum at about 200 GeV. The exact behavior of the primary spectrum should be an important factor in upcoming precision measurements of oscillation parameters by the planned IceCube sub-array PINGU [@Aartsen:2013aaa]. In the energy region where the bulk of atmospheric muons triggering the IceCube detector are produced, the most recent measurement was performed by the balloon-borne CREAM detector [@Ahn:2010gv]. In the range from 3 to 200 TeV, proton and helium spectra are found to be consistent with power laws of the form $E^{-\gamma}$. The proton spectrum with $\gamma_{\rm{p}}=2.66\pm0.02$ is somewhat softer than that of helium with $\gamma_{\rm{He}}=2.58\pm0.02$. The cross-over between the two fluxes lies at approximately 10 TeV. Between a few hundred GeV and 3 TeV, and again from 100 TeV to 1 PeV, there are large gaps where experimental measurements of individual primary fluxes are sparse and contain substantial uncertainties [@Kochanov:2008pt]. The second of these regions is especially important for IceCube physics, because it corresponds to neutrino energies of tens of TeV where indications for astrophysical fluxes start to become visible. The situation improves around the “knee” located at about 4 PeV, which has long been a major focus of cosmic ray physics.
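The CREAM indices quoted above imply that the helium-to-proton flux ratio grows slowly but steadily with energy. A minimal sketch, assuming pure power laws normalized to cross at 10 TeV (the common normalization is an assumption made for illustration, not a fitted value):

```python
# CREAM spectral indices quoted in the text; the normalization of the two
# power laws to be equal at the cross-over energy is an assumption.
GAMMA_P = 2.66
GAMMA_HE = 2.58
E_CROSS_GEV = 10e3  # ~10 TeV cross-over (from the text)

def he_to_p_ratio(energy_gev):
    """He/p differential flux ratio for two pure power laws E^-gamma
    normalized to be equal at E_CROSS_GEV."""
    return (energy_gev / E_CROSS_GEV) ** (GAMMA_P - GAMMA_HE)

# Helium takes over slowly above the cross-over:
for e in (1e3, 10e3, 1e6):
    print(f"E = {e:>9.0f} GeV  He/p = {he_to_p_ratio(e):.2f}")
```

Because the index difference is only 0.08, the helium excess grows by less than a factor of two over three decades in energy; composition effects at the knee are therefore driven by spectral cutoffs rather than by this slow drift.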
The well constrained overall primary flux has been resolved into its individual components by the KASCADE array [@Antoni:2005wq], although the result depends strongly on the model used to describe nuclear interactions within the air shower. There is a general consensus that the primary composition changes towards heavier elements in the range between the knee and 100 PeV, confirmed by various measurements [@Bluemer:2009zf], including IceCube [@IceCube:2012vv]. An exact characterization of the all-nucleon spectrum around the knee is necessary to constrain the contribution to atmospheric lepton fluxes from prompt hadron decays and accurately describe backgrounds in diffuse astrophysical neutrino searches. Between 100 PeV and approximately 1 EeV lies another region with sparse coverage, which has only recently begun to be filled. In the past, data taken near the threshold of very large surface arrays indicated a “second knee” at about 300 PeV [@Bergman:2007kn]. Approaching from the other side, KASCADE-Grande found evidence for a knee-like structure closer to 100 PeV, along with a hardening of the all-primary spectrum around 15 PeV [@Apel:2012rm]. This result confirms earlier tentative indications from the Tien-Shan detector using data taken before 2001, but published only in 2009 [@Shaulov:2009zzd] and is supported by subsequent measurements using the TUNKA-133 [@Prosin:2014dxa] detector. The currently most precise spectrum in terms of statistical accuracy and hadronic model dependence was derived from data taken by the IceTop surface array [@Aartsen:2013wda]. KASCADE-Grande later extended the original result by indications for a light element ankle [@Apel:2013ura], a heavy element knee [@Apel:2011mi] and separate spectra for elemental groups [@Apel:2013dga]. The emergent picture has yet to be theoretically interpreted in a comprehensive manner. 
The data indicate that several discrete components are present in the cosmic ray flux, and that the behavior of individual nuclei closely corresponds to a power law followed by a spectral cutoff at an energy proportional to their magnetic rigidity $R=E_{\rm{prim}}/Z$. This explanation was first proposed by Peters in 1961 [@Peters:1961] and later elaborated by, among others, Ter-Antonyan and Haroyan [@TerAntonyan:2000hh] as well as Hörandel [@Hoerandel:2002yg]. Exactly how many components there are, where they originate, and the precise values and functional dependence of their transition energies are still open questions. A well-known proposal by Hillas postulates two galactic sources, one accounting for the knee, the other for the presumptive knee-like feature at 300 PeV [@Hillas:2005cs]. Another model, by Zatsepin and Sokolskaya, identifies three distinct types of galactic sources to account for the flux up to 100 PeV [@Zatsepin:2006ci]. The hardening of the spectrum around the “ankle” at several EeV can be described elegantly by a pure protonic flux and its interaction with CMB radiation [@Berezinsky:2005cq] or, more in line with recent experimental results, in terms of separate light and heavy components [@Aloisio:2009sj]. The consensus is in either case that the origin of the highest-energy cosmic rays is extragalactic. This paper, like other IceCube analyses, relies for purposes of model testing mainly on the parametrizations by Gaisser, Stanev and Tilav [@Gaisser:2013bla]. These incorporate various basic features of the models described above, while updating numerical values to conform with the latest available measurements. Specifically, the “Global Fit” (GF) parametrization introduces a second distinct population of cosmic rays before the knee with a transition energy of 120 TeV. 
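The Peters-cycle picture reduces to a few lines of arithmetic: a source population characterized by a cutoff rigidity $R_c$ accelerates nuclei of charge $Z$ up to $E_{\rm{cut}} = Z \cdot R_c$. In the sketch below, the knee rigidity of 4 PV is an assumed illustrative value (consistent with a proton knee near 4 PeV), not a parameter quoted from the fits discussed here.

```python
# Rigidity-dependent cutoff (Peters cycle): E_cut = Z * R_c.
# The 4 PV knee rigidity is an illustrative assumption.
R_KNEE_PV = 4.0  # cutoff rigidity in petavolts

CHARGES = {"p": 1, "He": 2, "N": 7, "Fe": 26}

def cutoff_energy_pev(z, rigidity_pv=R_KNEE_PV):
    """Rigidity-dependent cutoff energy in PeV for a nucleus of charge Z."""
    return z * rigidity_pv

for name, z in CHARGES.items():
    print(f"{name:>2}: cutoff near {cutoff_energy_pev(z):5.0f} PeV")
```

With this choice the iron cutoff falls near 100 PeV, in line with the interpretation of the knee-like feature at that energy as the end of the galactic iron component.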
The knee itself, and the feature at 100 PeV, are interpreted as helium and iron components with a common rigidity-dependent cutoff, eliminating the need for an intermediate galactic flux component as in the H(illas) 3a and 4a parametrizations. The difference between H3a and H4a lies in the composition of the highest-energy component, which becomes dominant beyond 1 EeV: it is mixed in the former and purely protonic in the latter case. In the region around the knee, the two models are for practical purposes indistinguishable.

Muons vs. Neutrinos
-------------------

The flux of atmospheric neutrinos in IceCube is modeled using extrapolated parametrizations based on a Monte Carlo simulation for energies up to 10 TeV [@Honda:2006qj]. To account for the influence of uncertainties of the cosmic ray nucleon flux, the energy spectrum is adjusted by a correction factor [@Gaisser:2013ira]. The result can be demonstrated to agree reasonably well with full air shower simulations [@Fedynitch:2012fs], but necessarily contains inaccuracies, for example by neglecting variations in the atmospheric density profile at the site and time of production. Atmospheric muon events, on the other hand, are simulated through detailed modeling of individual cosmic ray-induced air showers. In standard simulation packages such as CORSIKA [@corsika], specific local conditions like the direction of the magnetic field and the profile of the atmosphere, including seasonal variations, can be fully taken into account. Energy spectra for each type of primary nucleus are separately adjustable. Hadronic interaction models can be varied and their influence quantified in terms of a systematic uncertainty. In contrast to neutrinos, astrophysical fluxes, flavor-changing effects and hypothetical exotic phenomena do not affect muons. All observations can be directly related to the primary cosmic ray flux and the detailed mechanisms of hadron collisions.
Due to the close relation between neutrino and charged lepton production, high-statistics measurements using muon data are therefore invaluable to constrain atmospheric neutrino fluxes. Perhaps most importantly, atmospheric muons represent a high-quality test beam for the verification of detector performance, because the variety of possible measurements along with high event statistics permit detailed consistency checks. A particular advantage in the case of IceCube is that muons probe the region above the horizon, for which the down-looking detector configuration is not ideal, but where, contrary to original expectation, the bulk of astrophysical detections has taken place.

Primary Flux and Atmospheric Muon Characteristics {#sec:prim-muchar}
-------------------------------------------------

![Contribution of individual elemental components to overall flux spectra relevant for atmospheric muon measurements, here shown for the Gaisser/Hillas model with mixed-composition extragalactic component (H3a) [@Gaisser:2013bla]. For definition of $E_{\rm{mult}}$, see Section \[sec:prim-muchar\].[]{data-label="fig-h3aspecs"}](eprim_5c_h3a.eps){width="220pt"} ![](emult_5c_h3a.eps){width="220pt"} ![](enuc_5c_h3a.eps){width="220pt"}

The connection between the measurable quantities of atmospheric muon events and the properties of the primary cosmic ray flux is illustrated in Fig. \[fig-h3aspecs\]. The relation of muon multiplicity to primary type and energy is expressed in terms of the parameter $E_{\rm{mult}}$, defined such that $E_{\rm{mult}}=E_{\rm{prim}}$ for iron primaries. The average number of muons in a bundle can then be expressed as $\langle N_{\mu} \rangle = \kappa \cdot E_{\rm{mult}}$, where the proportionality factor $\kappa$ depends on the specific experimental circumstances. Due to fluctuations in the atmospheric depth of shower development and the total amount of hadrons produced in nuclear collisions, the variation in the number of muons is slightly wider than a Poissonian distribution [@Boziev:1991xw]. Since the muon multiplicity itself is a function of zenith angle, atmospheric conditions, detector depth and surrounding material, it is convenient to re-scale it such that the derived quantity is directly related to primary mass and energy. This study uses the parameter $$\label{emultdef} E_{\rm{mult}}\equiv E_{\rm{prim}}\cdot (A/56)^{\frac{1-\alpha}{\alpha}}.$$ The definition was chosen such that $E_{\rm{mult}}$ is equal to $E_{\rm{prim}}$ in the case of iron primaries with atomic mass number $A=56$, which will in practice dominate the multiplicity spectrum above a few PeV, as shown in Fig. \[fig-h3aspecs\] (b). Exact definition and construction of $E_{\rm{mult}}$ are discussed in Section \[sec:emult\]. As the ratio of muons to electromagnetic particles in an air shower increases with the primary mass, the contribution of light elements to the multiplicity spectrum is suppressed.
For a power law spectrum of the form $E^{-\gamma}$, the contribution of individual elements to the muon multiplicity, here expressed in terms of a flux $\Phi_{\rm{mult}}$, scales as: $$\label{emult_eprimrel} \frac{\Phi_{\rm{mult}}(A)}{\Phi_{\rm{mult}}(1)}\cdot\frac{\Phi_{\rm{prim}}(1)}{\Phi_{\rm{prim}}(A)}\simeq A^{\frac{1-\alpha}{\alpha}\cdot(\gamma-1)},$$ where $\alpha \approx 0.79$ is an empirical parameter derived from simulation [@tomsbook]. Single-particle atmospheric lepton fluxes, on the other hand, are related to the nucleon spectrum. Under the same assumptions as above, the relation between all-nucleon and primary flux is: $$\label{enuc_eprimrel} \frac{\Phi_{\rm{nuc}}(A)}{\Phi_{\rm{nuc}}(1)}\cdot\frac{\Phi_{\rm{prim}}(1)}{\Phi_{\rm{prim}}(A)} = A^{2-\gamma}.$$ For a power law with an index of approximately -2.6 to -2.7, such as the cosmic ray spectrum before the knee, the nucleon spectrum therefore becomes strongly dominated by light elements. Prompt Muon Production ---------------------- A particular difficulty in the description of atmospheric lepton fluxes is the emergence at high energies of a component originating from prompt hadron decays. The reason is the harder spectrum compared to the light meson contribution, which is the consequence of the lack of re-interactions implicit in the definition. An important source of prompt atmospheric lepton fluxes is the decay of charmed hadrons. While it is possible to estimate their production cross section using theoretical calculations based on perturbative QCD, substantial contributions from non-perturbative mechanisms cannot be excluded. The problem can therefore at the moment only be resolved experimentally [@Lipari:2013taa]. One major open question currently under investigation [@Bednyakov:2014pqa] is whether nucleons contain “Intrinsic Charm” quarks, which might considerably increase charmed hadron production [@Brodsky:1980pb]. 
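The mass scalings introduced in Section \[sec:prim-muchar\] can be made concrete with a short numerical sketch. The functions below implement the $E_{\rm{mult}}$ re-scaling and the two spectral weights from the text; the pre-knee slope $\gamma = 2.7$ is an assumed illustrative value, and the code is not part of the published analysis.

```python
ALPHA = 0.79  # empirical muon-multiplicity exponent quoted in the text

def e_mult(e_prim, a):
    """E_mult = E_prim * (A/56)^((1-alpha)/alpha); equals E_prim for iron."""
    return e_prim * (a / 56.0) ** ((1.0 - ALPHA) / ALPHA)

def mult_weight(a, gamma):
    """Relative weight of mass A in the muon multiplicity spectrum:
    A^{(1-alpha)/alpha * (gamma-1)} for a primary power law E^-gamma."""
    return a ** ((1.0 - ALPHA) / ALPHA * (gamma - 1.0))

def nucleon_weight(a, gamma):
    """Relative weight of mass A in the all-nucleon spectrum: A^(2-gamma)."""
    return a ** (2.0 - gamma)

GAMMA = 2.7  # assumed pre-knee slope (illustrative)
print(mult_weight(56, GAMMA))     # iron enhanced relative to protons
print(nucleon_weight(56, GAMMA))  # iron suppressed to a few percent
```

For iron ($A=56$) this yields an enhancement of roughly a factor of six in the multiplicity spectrum and a suppression to about six percent in the nucleon spectrum, which is why the former is iron-dominated above a few PeV while the latter is dominated by light elements.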
Inclusive charm production cross sections were measured during recent LHC runs by the collider detectors LHCb [@Britsch:2013lca], ATLAS [@Mountricha:2011zz], and ALICE [@ALICE:2011aa; @Abelev:2012vra], and previously by the RHIC collaborations PHENIX [@Adare:2006hc] and STAR [@Adams:2004fc]. Data points are consistently located at the upper end of the theoretical uncertainty, which covers about an order of magnitude [@Cacciari:2012ny]. On a qualitative level, the new results suggest that charm-induced atmospheric neutrino fluxes could be somewhat stronger than previously assumed. A straightforward translation is, however, difficult. Although collider measurements probe similar center-of-mass energies, they are for technical reasons restricted to central rapidities of approximately $|y| \leq 1$. For lepton production in cosmic ray interactions, forward production is much more important. A variety of descriptions for the flux of atmospheric leptons from charm have been proposed in the past [@Costa:2000jw]. In recent years, the model by Enberg, Reno and Sarcevic [@Enberg:2008te] has become the standard, especially within the IceCube collaboration, which usually expresses prompt fluxes in “ERS units”. For muons, electromagnetic decays of unflavored vector mesons make a significant additional contribution not present in neutrinos [@Illana:2010gh], and at the very highest energies di-muon pairs are produced by Drell-Yan processes [@Illana:2009qv]. The first process in particular should lead to a substantial enhancement of the prompt muon flux compared to neutrinos [@Fedynitch:2015zma]. A detailed discussion can be found in \[sec:simple-prompt\]. It has long been suggested to use large-volume neutrino detectors to constrain the prompt component of the atmospheric muon flux directly [@Gelmini:2002sw].
Apart from the aspect of particle physics, the approximate equivalence between prompt muon and neutrino fluxes would help to constrain atmospheric background in the energy region critical for astrophysical searches. Past measurements of the muon energy spectrum in volume detectors were not able to identify the prompt component. Usually based on the zenith angle distribution alone, the upper end of their energy range fell one order of magnitude or more below the region where the prompt flux is expected to become measurable [@Kochanov:2009rn]. The LVD collaboration, by exploiting azimuthal variations in the density of the surrounding material, was able to set a weak limit [@Aglietta:1999ic]. The Baksan Underground Scintillation Telescope reported a significant excess above even the most optimistic predictions [@Bogdanov:2009ny], but the result has not yet been confirmed independently.

Data Samples
============

Experimental Data
-----------------

The data used in this study were taken during two years of detector operation from 2010 to 2012. Originally the analysis was developed for the first year only, but problems related to simulation methods as discussed in Section \[sec:simulation\] made it necessary to base the high-energy muon measurement on the subsequent year instead.

  Time Period               Detector Configuration   Livetime
  ------------------------- ------------------------ ------------
  05-31-2010 - 05-13-2011   79 Strings (IC79)        313.3 days
  05-13-2011 - 05-15-2012   86 Strings (IC86)        332.1 days

  : Experimental Data Sets.[]{data-label="dataset_livetime"}

The main IceCube trigger requires four or more pairs of neighboring or next-to-neighboring DOMs to register a signal within a time of 5 $\mu$s. Full event information is read out for a window extending from 10 $\mu$s before to 22 $\mu$s after the moment at which the condition was fulfilled.
Including events triggered by the surface array IceTop and the low-energy extension DeepCore, for which special conditions are implemented, this results in a total event rate of approximately 3000 $\textrm{s}^{-1}$ for the full 86-string detector configuration. As data transfer from the South Pole is constrained by bandwidth limitations, only specific subsets are available for offline analyses. The main requirement in the studies presented here was an unbiased base sample. The physics analyses therefore use the filter stream containing all events with a total of more than 1,000 photo-electrons. Additionally, minimum bias data corresponding to every 600th trigger were used to evaluate detector systematics. Reconstruction of track direction and quality parameters followed the standard IceCube procedure for muon candidate events [@Aartsen:2014cva], based on multiple photo-electron information and including isolated DOMs registering a signal. In addition, various specific energy reconstruction algorithms were applied. For all data, the differential energy deposition was calculated using the deterministic method discussed in \[sec:ddddr\], and the track energy was estimated by a truncation method [@Abbasi:2012wht]. Likelihood-based energy reconstructions [@Aartsen:2013vja] were applied to the first year of data only, primarily for evaluation purposes.

Simulation {#sec:simulation}
----------

The standard method used for simulation of cosmic ray-induced air showers in IceCube is the CORSIKA software package [@corsika], in which the physics of hadronic interactions is implemented via externally developed and freely interchangeable modules. In this study, as in all IceCube analyses, the mass production of simulated air showers was based on SIBYLL 2.1 [@Ahn:2009wx]. To investigate systematic variations, smaller sets of simulated data were produced using the QGSJET-II [@Ostapchenko:2010vb] and EPOS 1.99 [@Werner:2009zzc] models.
In the current version of CORSIKA (7.4), the contribution from prompt decays of charmed hadrons and short-lived vector mesons to the muon flux is usually neglected. An accurate simulation would in any case be difficult due to strong uncertainties on production and re-interaction cross sections. For this study, the prompt component of the atmospheric muon flux was modeled by re-weighting events produced in decays of light mesons. The exact procedure is described in \[sec:simple-prompt\]. ![Energy spectra of discrete stochastic energy losses along muon tracks simulated using the mmc code [@Chirkin:2004hz]. The data sample corresponds to events with more than 1,000 registered photo-electrons in the IceCube detector. For demonstration purposes, the primary cosmic ray spectrum is modeled as an unbroken $E^{-2.7}$ power law.[]{data-label="fig-losspec"}](losspec.eps){width="220pt"} High-energy muons passing through matter lose their energy through a variety of specific processes [@Koehne:2013gpa], which in IceCube are modeled by a dedicated simulation code [@Chirkin:2004hz]. The energy spectra of discrete catastrophic losses along atmospheric muon tracks predicted to occur within the IceCube detector volume are shown in Fig. \[fig-losspec\]. For all energy loss processes, the corresponding Cherenkov photon emission is calculated. Every photon is then tracked through the detector medium until it is either lost due to absorption or intersects with an optical module [@Chirkin:2013tma]. This detailed procedure is necessary to account for geometrically complex variations in the optical properties of the ice, but has the disadvantage of being computationally intensive, limiting the amount of simulated data especially for bright events. The differences between direct photon propagation and the tabular method previously used in IceCube simulations were evaluated for each of the studies presented in this paper.
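The average loss rate of a high-energy muon is commonly parametrized as $\langle dE/dX \rangle = a + bE$, with a quasi-continuous ionization term $a$ and a stochastic radiative term $bE$. The following is a minimal sketch using ballpark literature values for ice/water; the constants and helper names are illustrative assumptions, not the detailed treatment of the dedicated propagation code [@Chirkin:2004hz].

```python
import math

# Representative continuous-loss parameters for ice/water
# (ballpark literature values, assumed here for illustration):
A_ION = 0.26    # GeV per meter water equivalent (ionization)
B_RAD = 3.6e-4  # 1/mwe (bremsstrahlung, pair production, photonuclear)

def mean_energy_after(e0, depth):
    """Mean muon energy (GeV) after `depth` mwe, from <dE/dX> = a + b*E."""
    eps = A_ION / B_RAD  # critical energy a/b where both terms are equal
    return max((e0 + eps) * math.exp(-B_RAD * depth) - eps, 0.0)

def surface_threshold(depth):
    """Minimum surface energy (GeV) for a muon to reach `depth` mwe."""
    eps = A_ION / B_RAD
    return eps * (math.exp(B_RAD * depth) - 1.0)
```

Integrating the loss rate in this way yields the approximately exponential dependence of the surface threshold energy on slant depth that underlies the zenith-angle behavior discussed below.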
It was found that in the case of high-multiplicity bundles the difference can be accounted for by a simple correction factor, while for high-energy tracks the distortion was so severe that simulations produced with the obsolete method were unusable. Simulation mass production based on direct photon propagation is only available for the 86-string detector configuration, requiring the use of a corresponding experimental data set. In order to reduce computational requirements, the measurement of bundle multiplicity was not duplicated and instead relies solely on data from the 79-string configuration. The low cosmic ray flux at the highest primary energies means that even relatively few events correspond to large amounts of equivalent livetime. Accordingly, for the measurement of the bundle multiplicity spectrum, simulation statistics are not a limiting factor. In the region before and at the knee, where the dominant part of the high-energy muon flux is produced, far more showers need to be simulated. For this reason, the statistical accuracy of the single muon energy spectrum measurement is limited by the amount of simulated livetime, generally corresponding to substantially less than one year. The calculation of detector acceptance and conversion of muon fluxes from South Pole to standard conditions for high energy muons as described in Section \[sec-hemu-espec\] made use of an external simulated data set produced for a dedicated study on the effect of hadronic interaction models on atmospheric lepton fluxes [@Fedynitch:2012fs].

Low-Energy Muons {#sec:le-muons}
================

Observables {#sec:lolev-obs}
-----------

A comprehensive verification of detector performance requires the demonstration that atmospheric muon data are understood at a basic level. Sufficient statistics for this purpose are provided in IceCube by the minimum bias sample, consisting of every 600th event triggering the detector. Two simple parameters were used in the evaluation.
These are the zenith angle $\theta_{\rm{zen}}$ of the reconstructed track, with $\theta_{\rm{zen}} = 0$ for vertically down-going muons, and the total number of photo-electrons $Q_{\rm{tot}}$ registered in the event. [220pt]{} ![Relation between reconstructed zenith angle and energy for simulated muon showers triggering the IceCube detector. The distributions correspond to minimum bias data after the track quality selection described in Sec. \[sec:lolev-result\]. Superimposed are mean and spread of the distribution.[]{data-label="fig-zenang-emu"}](zenang_emu.eps "fig:"){width="220pt"} [220pt]{} ![Relation between reconstructed zenith angle and energy for simulated muon showers triggering the IceCube detector. The distributions correspond to minimum bias data after the track quality selection described in Sec. \[sec:lolev-result\]. Superimposed are mean and spread of the distribution.[]{data-label="fig-zenang-emu"}](zenang_enuc.eps "fig:"){width="220pt"} The angular dependence of the muon flux can be directly related to the energy spectrum in the TeV range, because the threshold increases as a function of the amount of matter that a muon has to traverse before reaching the detector. The limiting factors near the horizon are the rapid increase of the mean surface energy, approximately proportional to $\exp(\sec\theta_{\rm{zen}})$, the corresponding decrease in flux, and eventually the irreducible background from atmospheric muon neutrinos. Purely angular-based muon energy spectra therefore only reach up to energies of 20-30 TeV, depending on the depth of the detector and the type of surrounding material. For the specific case of IceCube, the relation of zenith angle to muon and primary nucleon energy is shown in Fig. \[fig-zenang-emu\]. ![Top: Simulated distributions of total number of photo-electrons in event, separated by the number of muons in the bundle at closest approach to the center of the IceCube detector.
The functional dependence of the fit is described in the text. Bottom: Change of data/simulation ratio for different assumptions about the light yield, effectively corresponding to the relation between energy deposition and number of registered photo-electrons. The simulation was weighted according to the H3a primary flux model.[]{data-label="fig-12many"}](onetwomany.eps "fig:"){width="220pt"} ![Top: Simulated distributions of total number of photo-electrons in event, separated by the number of muons in the bundle at closest approach to the center of the IceCube detector. The functional dependence of the fit is described in the text. Bottom: Change of data/simulation ratio for different assumptions about the light yield, effectively corresponding to the relation between energy deposition and number of registered photo-electrons. The simulation was weighted according to the H3a primary flux model.[]{data-label="fig-12many"}](qtot_lightyield.eps "fig:"){width="220pt"} The total number of photo-electrons (“brightness”) of atmospheric muon events is closely related to the muon multiplicity, as demonstrated in Fig. \[fig-12many\], where events with photons registered by the DeepCore array were excluded to avoid minor biases at the very low end of the distribution. In the experimental measurements described below, all events were included. The emitted Cherenkov light is in good approximation proportional to the total energy loss, and the multiplicity spectrum can therefore be measured even at low energies, although its interpretation is difficult because of the varying threshold for the individual components of the cosmic ray flux.
The distribution for a fixed number of muons can be described by a transition from a Gaussian distribution to an exponential in terms of the parameter $q\equiv \log_{\rm{10}}(Q_{\rm{tot}}/\textrm{p.e.})$:

$$\label{singmufit_param} \frac{\Delta n_{\rm{event}}}{\Delta q} = N\cdot\exp\left(\frac{-\frac{1}{2\sigma^{2}}(q-q_{\rm{peak}})^2}{1+e^{a(q-q_{\rm{peak}})}}+\frac{\beta_{\mu}(q-q_{\rm{peak}})}{1+e^{-a(q-q_{\rm{peak}})}}\right)$$

  Fit Parameter     Value             Interpretation
  ----------------- ----------------- -----------------------
  $q_{\rm{peak}}$   $1.615\pm0.002$   42.2 p.e.
  $a$               $5.35\pm0.34$     Transition smoothness
  $\sigma$          $0.160\pm0.004$   Width of Gaussian
  $\beta_{\mu}$     $-6.23\pm0.07$    Power law index
  $N$               arbitrary         Normalization

  : Parameters and values for the fit to the single muon distribution shown in Fig. \[fig-12many\]. The $\chi^{2}/$dof of the fit is 26.75/16, where the main deviation from the fit is found in the first three bins of the histogram.[]{data-label="onemuon_table"}

The free fit parameters for the case of single muon events are described in Table \[onemuon\_table\]. While all values depend on the exact detector setup and event sample and have no profound physical meaning, the description nevertheless provides valuable insights. For example, the peak position corresponds to the average number of photo-electrons detected from a minimum ionizing track crossing the full length of the detector, and represents an approximate calorimetric scale from which the response to a given energy deposition can be estimated. The lower, Gaussian half of the one-muon distribution only depends on the experimental setup and shows minimal sensitivity to physics effects in simulations. In particular, the peak value $q_{\rm{peak}}$ varies as a function of the optical efficiency, a scalar parameter which expresses the effects of a wide variety of underlying phenomena [@Aartsen:2013eka]. As shown in the lower panel of Fig.
\[fig-12many\], above a certain threshold only the flux level, not the shape of the distribution, is affected by detector systematics. This is a common observation for energy-related observables and a simple consequence of the effect of a slight offset on a power law function. Note that the measured distribution is fully consistent with expectation within the 10% light yield variation usually assumed as systematic uncertainty in IceCube.

Connection to Primary Flux {#sec:primflux-conn}
--------------------------

![Low-level observables for IceCube atmospheric muon events at trigger level, separated by cosmic ray primary type. The simulated data were generated with CORSIKA [@corsika] and weighted according to the H3a model [@Gaisser:2013bla].[]{data-label="fig-lolev-primtype"}](allev_coszen.eps "fig:"){width="220pt"} ![Low-level observables for IceCube atmospheric muon events at trigger level, separated by cosmic ray primary type. The simulated data were generated with CORSIKA [@corsika] and weighted according to the H3a model [@Gaisser:2013bla].[]{data-label="fig-lolev-primtype"}](allev_qtot.eps "fig:"){width="220pt"} The consistency of measurements on separate observables can be checked by relating them to the primary cosmic ray flux. Assuming that the current understanding of muon production in air showers is correct, there should be a model which describes both energy and multiplicity spectra of atmospheric muons. Figure \[fig-lolev-primtype\] shows the two proxy variables described in the previous section, separated by elemental type of the cosmic ray primary. At all angles, the muon flux is strongly dominated by proton primaries. This is a simple consequence of the connection between muon energy and energy per nucleon of the primary particle, and does not depend strongly on the specific cosmic ray flux model [@Gaisser:2012zz].
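As an aside, the Gaussian-to-power-law transition of Eq. \[singmufit\_param\], with the fitted parameters of Table \[onemuon\_table\], can be transcribed directly; the function name and normalization handling below are illustrative.

```python
import math

def single_muon_pdf(q, q_peak=1.615, sigma=0.160, a=5.35,
                    beta=-6.23, norm=1.0):
    """Eq. [singmufit_param] with the fitted values of
    Table [onemuon_table]; q = log10(Q_tot / p.e.)."""
    d = q - q_peak
    gauss = -d * d / (2.0 * sigma ** 2)   # Gaussian core exponent
    tail = beta * d                        # power-law tail in Q_tot
    # sigmoid weight switching smoothly from core (q < q_peak)
    # to tail (q > q_peak)
    w_tail = 1.0 / (1.0 + math.exp(-a * d))
    return norm * math.exp(gauss * (1.0 - w_tail) + tail * w_tail)
```

The function peaks at $q_{\rm{peak}}$ and, well above the peak, falls like $Q_{\rm{tot}}^{\beta_{\mu}}$, i.e. a power law in the total charge.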
Likewise, the multiplicity-related brightness distribution is for low values dominated by light primaries, a consequence of the varying threshold energies shown in Fig. \[fig-allev\]. The cosmic ray flux models best reproducing the latest direct measurement in the relevant energy region from 10 to 100 TeV [@Ahn:2010gv] are GST-GF and H3a [@Gaisser:2013bla]. For the comparisons between data and simulation in the following section, they are used as benchmark models representing the best prediction at the current time. In addition, toy models corresponding to straight power law spectra are discussed to illustrate the effect of variations in the primary nucleon index. In these, elemental composition and absolute flux levels at 10 TeV primary energy correspond to the rigidity-dependent poly-gonato model [@Hoerandel:2002yg], used as default setting for the production of IceCube atmospheric muon systematics data sets.

Experimental Result {#sec:lolev-result}
-------------------

![Angular distribution of true and reconstructed atmospheric muon tracks in simulation compared to experimental data. Top: Trigger Level, Bottom: High-Quality Selection. The event sample corresponds to minimum bias data encompassing all trigger types. The ratio of experimental data to simulation is shown in Figs. \[fig-minbias-paramrat\] (a) and (c).[]{data-label="fig-angdest-true-reco"}](coszen_minbias_triglev.eps "fig:"){width="220pt"} ![Angular distribution of true and reconstructed atmospheric muon tracks in simulation compared to experimental data. Top: Trigger Level, Bottom: High-Quality Selection. The event sample corresponds to minimum bias data encompassing all trigger types. The ratio of experimental data to simulation is shown in Figs.
\[fig-minbias-paramrat\] (a) and (c).[]{data-label="fig-angdest-true-reco"}](coszen_minbias_aftercut.eps "fig:"){width="220pt"} For the study presented in this section, minimum bias data and simulation were compared at trigger level and for a sample of high-quality tracks requiring:

- Reconstructed track length within the detector exceeding 600 meters.

- $llh_{\rm{reco}}/(N_{\rm{DOM}}-2.5) < 7.5$, where $llh_{\rm{reco}}$ corresponds to the likelihood value of the track reconstruction and $N_{\rm{DOM}}$ to the number of optical modules registering a signal.

The quality selection is somewhat less stringent than in typical neutrino analyses. For tracks reconstructed as originating from below the horizon, the contribution from mis-reconstructed atmospheric muon events amounts to about 50%. Simulated and experimental zenith angle distributions are shown in Fig. \[fig-angdest-true-reco\]. Even at trigger level, the influence of mis-reconstructed tracks can be neglected in the region above 30 degrees from the horizon ($\cos\textrm{ }\theta_{\rm{zen}} = 0.5$). For the high-quality data set, true and reconstructed distributions are approximately equal down to angles of $\cos\textrm{ }\theta_{\rm{zen}} = 0.15$, or 80 degrees from zenith.
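The two selection criteria above can be sketched as a simple predicate (hypothetical helper, not the actual analysis code):

```python
def passes_quality(track_length_m, llh_reco, n_dom):
    """High-quality track selection: reconstructed in-detector track
    length > 600 m and reduced log-likelihood llh/(N_DOM - 2.5) < 7.5."""
    if n_dom <= 3:
        # reduced likelihood is ill-defined for very small events
        return False
    reduced_llh = llh_reco / (n_dom - 2.5)
    return track_length_m > 600.0 and reduced_llh < 7.5
```

A well-reconstructed long track with a small reduced likelihood passes; short or poorly fitted tracks are rejected.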
[220pt]{} ![image](minbias_coszen_triglev.eps){width="220pt"} [220pt]{} ![image](minbias_qtot_triglev.eps){width="220pt"} [220pt]{} ![image](minbias_coszen_qcut.eps){width="220pt"} [220pt]{} ![image](minbias_qtot_qcut.eps){width="220pt"}

  Type                     Variation             $\gamma_{\rm{CR,Trigger}}$   $\gamma_{\rm{CR,High-Q}}$   $\Delta\gamma_{\rm{CR}}$
  ------------------------ --------------------- ---------------------------- --------------------------- --------------------------
  Hole Ice Scattering      30cm/100cm            $\pm 0.03$                   $+0.03/-0.05$               $+0.01/-0.02$
  Bulk Ice Absorption      $\pm 10\%$            $\pm 0.03$                   $\pm 0.02$                  $\pm 0.05$
  Bulk Ice Scattering      $\pm 10\%$            $<0.01$                      $\pm 0.01$                  $<0.015$
  Primary Composition      p/He                  $<0.01$                      $+0.03/-0.10$               $-0.03/+0.10$
  Hadronic Model           QGSJET-II/EPOS 1.99   $+0.02/<0.01$                $+0.03/<0.02$               $<0.02$
  DOM Efficiency           $\pm 10\%$            $<0.02$                      $+<0.02/-0.04$              $+0.02/-<0.02$
  **Experimental Value**   Statistical Error     $2.715 \pm 0.003$            $2.855 \pm 0.007$           $0.140 \pm 0.008$

  : Results and systematic uncertainties of the fit to the angular distribution.[]{data-label="angsyst_table"}

Figure \[fig-minbias-paramrat\] shows comparisons between data and simulation weighted according to several primary flux predictions. The total number of photo-electrons is described reasonably well by the simulation weighted according to the H3a model. Application of quality criteria does not lead to any visible distortion. The angular distribution, on the other hand, shows substantial inconsistencies. At trigger level, the spectrum is clearly harder than for the high-quality sample. The discrepancy does not depend on the particular track quality parameters used in the selection. It is important to note that the absolute level of the ratio is not a relevant quantity for the evaluation. Consistency between measurement and expectation within the range of systematic uncertainties on the photon yield was demonstrated for the brightness distribution in Section \[sec:lolev-obs\]. Also, absolute primary flux levels derived from direct measurements are typically constrained to no better than several tens of percent.
For the toy models, the normalization was deliberately chosen to produce a clear separation from the realistic curves. The trigger-level angular distribution in the region near the horizon becomes dominated by mis-reconstructed events consisting of two separate showers crossing the detector in close succession. The frequency of these “coincident” events scales with the square of the overall shower rate, leading to a spurious distortion of the ratio between data and simulation in cases where the absolute normalization is not exactly equal. This effect is visible in Fig. \[fig-minbias-paramrat\] (a) at values below 0.3. To quantify the discrepancy between trigger and high-quality level and investigate the influence of systematic uncertainties, the toy model simulation was fitted to data for $1>\cos\textrm{ }\theta_{\rm{zen}}>0.5$. In this region, influences of mis-reconstructed tracks are negligible even at trigger level, as demonstrated in Fig. \[fig-angdest-true-reco\]. From Fig. \[fig-zenang-emu\] it can be seen that this corresponds to a relatively small energy range for muons and parent nuclei, over which the power law index of the cosmic ray all-nucleon spectrum can be assumed to be approximately constant and used as sole fit parameter. As the normalization was left free, the best result simply corresponds to a flat curve for the ratio between data and simulation. Possible effects of variations in the primary elemental composition can be taken into account as a systematic error. The numerical results of the fit to the angular distribution are shown in Table \[angsyst\_table\]. Note that for cases where the statistical error due to limited simulated data exceeds the absolute value of the variation, only an upper limit is given. The best fit results for the spectral index at trigger and high-quality level, 2.715 and 2.855, are illustrated by the toy model curves in Fig. \[fig-minbias-paramrat\].
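The fit just described, a single spectral-index parameter with the absolute normalization left free, can be sketched generically as a chi-square scan in which the optimal normalization is profiled out analytically for each trial index. The helper below is a hypothetical sketch, not the analysis code.

```python
def fit_index(data, model_fn, gammas):
    """Scan power-law index candidates; for each, profile out the free
    normalization analytically and return (best_gamma, best_chi2).

    data     : list of (x, count, sigma) tuples
    model_fn : model_fn(x, gamma) -> unnormalized expectation
    gammas   : iterable of trial indices
    """
    best = (None, float("inf"))
    for g in gammas:
        m = [model_fn(x, g) for x, _, _ in data]
        # least-squares optimal normalization for this trial index
        num = sum(d * mi / s ** 2 for (_, d, s), mi in zip(data, m))
        den = sum(mi * mi / s ** 2 for (_, _, s), mi in zip(data, m))
        n_opt = num / den
        chi2 = sum((d - n_opt * mi) ** 2 / s ** 2
                   for (_, d, s), mi in zip(data, m))
        if chi2 < best[1]:
            best = (g, chi2)
    return best
```

At the best-fit index the normalized model reproduces the data, corresponding to the flat data/simulation ratio mentioned above.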
Both fitted spectra are softer than those of the realistic models, for which $\gamma_{\rm{nucleon}}\approx2.64$.

Interpretation
--------------

For the strong discrepancy between the measurements at trigger and high-quality level of $\Delta\gamma_{\rm{CR}} = 0.140\pm0.008 \textrm{(stat.)}$, the following explanations can be proposed:

- A global adjustment to the bulk ice absorption length of more than 20%. This explanation would imply a major flaw in the method used to derive the optical ice properties [@Aartsen:2013rt], and is strongly disfavored by the good agreement between the effective attenuation length in data and simulation demonstrated in \[sec:ddddr\].

- A substantial change of the primary cosmic ray composition towards heavier elements. In an event sample entirely excluding proton primaries, the observed effect can be approximately reproduced. However, the increased threshold energy would require the overall primary flux to be more than three times higher than in the default assumption to produce the observed event rate. An explanation based purely on a heavier cosmic ray composition therefore appears highly unlikely.

- A major inaccuracy of hadronic interaction simulations common to SIBYLL, QGSJET-II and EPOS. While this explanation seems improbable, especially given the almost perfect agreement between SIBYLL and EPOS, it should be noted that the models used in the IceCube CORSIKA simulation were developed before LHC data became available. Improved models are in preparation [@Pierog:2013ria] and it should be possible to evaluate them in the near future.

- An unsimulated detector effect with a significant influence on the behavior of track quality parameters. Detectors using naturally grown ice are inherently difficult to model in simulations. The optical properties of the medium are inhomogeneous and photon scattering has a substantial influence on the data.
The situation is complicated further by the placement of the active elements in re-frozen “hole ice” columns containing sizable amounts of air bubbles. Studies on possible error sources are ongoing at the time of writing, but currently there is no indication of a major oversight. While the presence of an inconsistency is clear, from IceCube data alone there is no strict way to conclude whether the brightness or the angular measurement is more reliable. However, the evidence strongly points to an unrecognized angular-dependent effect introduced by track quality-related observables. The reasons are:

- The brightness distributions are consistent both between the two data samples and with direct measurements of the cosmic ray flux.

- At trigger level, there are no major contradictions between brightness and zenith angle distributions.

- The angular spectrum for the high-quality data set is significantly steeper than both the neutrino-derived result [@Aartsen:2013eka] and direct measurements. In comparisons to the latter, the error from the variation in primary composition does not apply, as proton and helium fluxes are constrained individually. The total systematic uncertainty on the all-nucleon power law index would in this case be reduced to about $\pm 0.06$, whereas the difference in measurement is larger than 0.2.

On the other hand, it is interesting to note that the LVD detector found a value of $\gamma_{\rm{cr}} = 2.78 \pm 0.05$ [@Aglietta:1998nx], closer to the IceCube high-quality sample result. Even though angular distributions of atmospheric muons have been published by practically all large-volume neutrino detectors and prototypes [@Babson:1989yy; @Bakatanov:1992gp; @Ambrosio:1995cx; @Belolaptikov:1997ry; @Andres:1999hm; @Aggouras:2005bg; @Aiello:2009uh; @Aguilar:2010kg], none of the measurements is accurate enough to provide a strict external constraint.
For the time being, there is no other choice than to note the effect and continue to investigate possible explanations. In the main physics analyses described in the subsequent sections, the possible presence of an angular distortion was taken into account as a systematic error on the result.

Physics Analyses {#sec:phys-mu}
================

![Distribution of muon energies in individual air showers at the IceCube detector depth simulated with CORSIKA/SIBYLL [@corsika; @Ahn:2009wx], averaged over all angles. Top: $E_{\rm{prim}}$ = 3 PeV. Bottom: $E_{\rm{prim}}$ = 100 PeV. The threshold effect visible at high muon energies in the top plot is due to the lower energy per nucleon in iron primaries. As the total energy increases, this effect becomes less and less visible and the spectra are identical except for a scaling factor.[]{data-label="fig-emupdf"}](iimu_muspec_3e6.eps "fig:"){width="220pt"} ![Distribution of muon energies in individual air showers at the IceCube detector depth simulated with CORSIKA/SIBYLL [@corsika; @Ahn:2009wx], averaged over all angles. Top: $E_{\rm{prim}}$ = 3 PeV. Bottom: $E_{\rm{prim}}$ = 100 PeV. The threshold effect visible at high muon energies in the top plot is due to the lower energy per nucleon in iron primaries. As the total energy increases, this effect becomes less and less visible and the spectra are identical except for a scaling factor.[]{data-label="fig-emupdf"}](iimu_muspec_1e8.eps "fig:"){width="220pt"} While the study of low-energy atmospheric muons is instructive for detector verification and the evaluation of systematic uncertainties, the main physics potential lies in the measurement of events at higher energies. Here it is necessary to distinguish two main categories:

- **High-Multiplicity Bundles**, in which muons conform to typical energy distributions as shown in Fig. \[fig-emupdf\].
The total energy $\sum{E_{\mu}}$ contained in the bundle is approximately proportional to the number of muons $N_{\mu}$, and related to primary mass $A$ and energy $E_{\rm{prim}}$ as $$\sum{E_{\mu}}\propto N_{\mu}\propto E_{\rm{prim}}^{\alpha}\cdot A^{1-\alpha},$$ with $\alpha\approx0.79$. The dependence of the muon multiplicity on the mass of the cosmic ray primary is the main principle underlying composition analyses using deep detector and surface array in coincidence [@IceCube:2012vv]. Low-energy muons lose their energy smoothly, and fluctuations in the energy deposition are usually negligible.

- **High-Energy Muons** with energies significantly exceeding the main bundle distribution. Their production is dominated by decays of pions and kaons at an early stage in the development of the air shower. Figure \[fig-leadmuons\] shows that showers with more than one muon with an energy above several tens of TeV are very rare. Any muon with an energy of 30 TeV or more will therefore very likely be the leading one in the shower, although this does not exclude the presence of other muons at lower energies. The primary nucleus can in this case be approximated as a superposition of individual nucleons, each carrying an energy of $E_{\rm{nucleon}}=E_{\rm{prim}}/A$. High-energy lepton spectra are therefore a function of the primary nucleon flux.

Hadronic models, cosmic ray spectrum and composition all have a significant influence on TeV muons [@Lipari:1993ty]. In addition, at muon energies approaching 1 PeV prompt decays of short-lived hadrons play a significant role. The result is a complex picture with substantial uncertainties, as neither the exact behavior of the nucleon spectrum at the knee nor the production of heavy quarks in air showers is fully understood. A schematic illustration of the muon flux above 100 TeV is given in Fig. \[fig-promptsketch\]. Charged leptons and neutrinos are usually produced in the same hadron decay.
The energy spectrum of single muons is therefore the quantity most relevant for the constraint of atmospheric neutrino fluxes. Since the stochasticity of energy losses in matter increases with the muon energy, the signal registered in the detector can vary substantially, as in the case of neutrino-induced muons. ![Surface energy distribution for all and most energetic (“leading”) muons in simulated events with a total of more than 1,000 registered photo-electrons in IceCube.[]{data-label="fig-leadmuons"}](lead_muen_surf.eps){width="220pt"} ![Sketch illustrating the contribution to the single muon spectrum at energies beyond 100 TeV. The “conventional” component from light mesons is sensitive to atmospheric density and varies as a function of the zenith angle [@Illana:2010gh], that from prompt decays of short-lived hadrons is isotropic. Re-interactions cause the non-prompt spectrum to be steeper. The exact spectral shape depends on the all-nucleon cosmic ray flux, with a significant steepening expected due to the cutoff at the “knee”.[]{data-label="fig-promptsketch"}](promptsketch.pdf){width="220pt"} The transition between the two atmospheric muon event types is gradual. High-energy events rarely consist of single particles, and the characteristics of the accompanying bundle of low-energy muons could in principle for some cases be determined and used to extract additional information about the primary nucleus. At low energies the distinction becomes meaningless, as events are usually caused by single or very few muons with energies below 1 TeV. ![Event samples used for the measurements described in Sec. \[sec:bundles\] and \[sec:hemu\]. Shown are true parameter distributions for simulated data with more than 1,000 registered photo-electrons. Top: Fraction of total bundle energy carried by the leading muon. Bottom: Energy of CR primary. 
The bimodal shape of the distributions becomes more pronounced with increasing brightness.[]{data-label="fig-evsamp-trupar"}](evsamp_emufrac.eps "fig:"){width="220pt"} ![Event samples used for the measurements described in Sec. \[sec:bundles\] and \[sec:hemu\]. Shown are true parameter distributions for simulated data with more than 1,000 registered photo-electrons. Top: Fraction of total bundle energy carried by the leading muon. Bottom: Energy of CR primary. The bimodal shape of the distributions becomes more pronounced with increasing brightness.[]{data-label="fig-evsamp-trupar"}](evsamp_eprim.eps "fig:"){width="220pt"} Two separate analysis samples were extracted from the data, corresponding to high-energy muon and high-multiplicity bundle event types. Figure \[fig-evsamp-trupar\] illustrates their characteristics in terms of true event parameters derived from Monte-Carlo simulations. High-energy events, in which the total muon energy is dominated by the leading particle, are outnumbered by a factor of approximately ten. The corresponding need for more rigorous background suppression leads to a lower selection efficiency than in the case of large bundles. The details of the selection methods are described in the following sections.

Muon Bundle Multiplicity Spectrum {#sec:bundles}
=================================

Principle {#sec:emult}
---------

The altitude of air shower development, and with it the fraction of primary energy going to muons, decreases as a function of parent energy $E_{\rm{prim}}$, but increases with the nuclear mass $A$. The relation between the energy of the cosmic ray primary and the number of muons above a given energy $E_{\rm{\mu,min}}$ is therefore not linear.
A good approximation is given by the “Elbert formula”: $$\label{elbert} \small N_{\mu}(E>E_{\rm{\mu,min}})=A\cdot\frac{E_{0}}{E_{\rm{\mu,min}}\cos\textrm{ }\theta}\cdot\left(\frac{E_{\rm{prim}}}{AE_{\rm{\mu,min}}}\right)^{\alpha}\cdot\left(1-\frac{AE_{\rm{\mu,min}}}{E_{\rm{prim}}}\right)^{\beta},$$ where $\theta$ is the incident angle of the primary particle, and $\alpha$, $\beta$ and $E_{0}$ are empirical parameters that need to be determined by a numerical simulation [@tomsbook]. The index $\beta$ describes the cutoff near the production threshold, and $E_{\rm{0}}$ is a proportionality factor applicable to the number of muons at the surface. In this analysis, only the parameter $\alpha$, describing the increase of muon multiplicity as a function of primary energy and mass, is relevant. For energies not too close to the production threshold $E_{\rm{prim}}/A$, the relation can be simplified to: $$\label{elbsimp} N_{\mu}\propto A^{1-\alpha}\cdot E_{\rm{prim}}^{\alpha}$$ For deep underground detectors, $E_{\rm{\mu,min}}$ corresponds to the threshold energy for muons penetrating the surrounding material. In the case of IceCube, this corresponds to about 400 GeV for vertical showers, increasing exponentially as a function of $\sec\theta_{\rm{zen}}$. Equation \[elbert\] implies that the distribution of muon energies within a shower is independent of type and energy of the primary nucleus, except at the very highest end of the spectrum, as demonstrated in Fig. \[fig-emupdf\]. The total energy of the muon bundle, as well as its energy loss per unit track length, is therefore in good approximation simply proportional to the muon multiplicity. After excluding rare events where the muon energy deposition is dominated by exceptional catastrophic losses, the muon multiplicity can therefore be measured simply from the total energy deposited in the detector.
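Equation \[elbert\] can be sketched directly. The value $\alpha\approx0.79$ is taken from the text; $E_{0} = 14.5$ GeV and $\beta = 5.25$ are assumed textbook numbers from the standard literature, not parameters fitted in this work.

```python
def elbert_nmu(e_prim, a_mass, e_mu_min, cos_theta,
               e0=14.5, alpha=0.79, beta=5.25):
    """Elbert-formula estimate (Eq. [elbert]) of the number of muons
    above e_mu_min (GeV) in a shower from a primary of energy e_prim
    (GeV) and mass number a_mass. e0 and beta are assumed textbook
    values; alpha follows the text."""
    x = a_mass * e_mu_min / e_prim  # threshold variable A*E_mu,min/E_prim
    if x >= 1.0:
        return 0.0  # below production threshold
    return (a_mass * e0 / (e_mu_min * cos_theta)
            * x ** (-alpha) * (1.0 - x) ** beta)
```

Far above threshold, the ratio of iron- to proton-initiated showers at fixed primary energy approaches the $A^{1-\alpha} \approx 56^{0.21} \approx 2.3$ enhancement expressed by Eq. \[elbsimp\].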
The experimental data can be directly related to any flux model expressed in terms of the parameter $E_{\rm{mult}}$ introduced in Sec. \[sec:prim-muchar\], as long as the measured number of muons remains proportional to the overall multiplicity in the air shower. In the case of IceCube, the corresponding threshold for iron nuclei lies at about 1 PeV. For lower primary energies, Equation \[emultdef\] is not applicable, and the multiplicity distribution can only be used for model testing, as in Sec. \[sec:lolev-result\].

Event Selection
---------------

High-multiplicity bundles account for the dominant part of bright events in IceCube. The goal of quality selection is therefore not the isolation of a rare “signal”, but the reduction of tails and improvement in resolution. The criteria for the high-multiplicity bundle sample are shown in Table \[mult\_cut\_table\].

  Selection                                 Events ($\times10^{6}$)   Rate \[$s^{-1}$\]   Comment                                  Effect
  ----------------------------------------- ------------------------- ------------------- ---------------------------------------- ----------------
  All $Q_{\rm{tot}}>1,000$ p.e.             29.10                     1.075               Base Sample (79-String Configuration)    n/a
  $\cos\textrm{ }\theta_{\rm{zen}}>0.3$     28.54                     1.054               Track Zenith Angle                       low $N_{\mu}$
  $L_{\rm{dir}}>$600 m                      24.09                     0.890               Track Length                             high $N_{\mu}$
  $q_{\rm{max}}/Q_{\rm{tot}}<0.3$           20.66                     0.763               Brightness dominated by single DOM       low $N_{\mu}$
  $d_{\rm{mpe,cod}}<$ 425 m                 18.22                     0.673               Closest approach to center of detector   high $N_{\mu}$
  $dE/dx$ peak/median $< 8$                 12.34                     0.456               See \[sec:ddddr\]                        low $N_{\mu}$

![Muon bundle multiplicity at closest approach to the center of the detector ([*cod*]{}) for simulated events with 3,000 to 4,000 registered photo-electrons.
Distributions are shown for trigger level and final high-multiplicity bundle selection.[]{data-label="fig-bundlecuteffect"}](bundlecuteffect.eps){width="220pt"} ![Top: Relation between number of muons at closest approach to the center of the detector and total energy loss of muon bundle within detector volume. Bottom: Total muon energy loss vs. sum of muon energies at entry into detector volume. Data samples correspond to CORSIKA simulation after application of bundle selection quality criteria. The black curve represents a profile of the colored histogram. The error bars indicate the spread of the value.[]{data-label="fig-bundleparamdep"}](nmu_elosstot.eps "fig:"){width="220pt"} ![Top: Relation between number of muons at closest approach to the center of the detector and total energy loss of muon bundle within detector volume. Bottom: Total muon energy loss vs. sum of muon energies at entry into detector volume. Data samples correspond to CORSIKA simulation after application of bundle selection quality criteria. The black curve represents a profile of the colored histogram. The error bars indicate the spread of the value.[]{data-label="fig-bundleparamdep"}](elosstot_emutot.eps "fig:"){width="220pt"} ![Resolution of muon multiplicity estimators based on four different energy reconstructions. The analysis threshold of 1,000 photo-electrons corresponds to 20-30 muons.[]{data-label="fig-nest-res"}](bundle_nestres.eps){width="220pt"} ![Unfolded spectra of simulated data compared to analytic form of spectra for three benchmark models [@Gaisser:2013bla; @Hoerandel:2002yg]. 
The size of the error bars corresponds to the expected statistical uncertainty for one year of IceCube data.[]{data-label="fig-fullcircle-mult"}](fullcirc_emult_1k.eps){width="220pt"} ![Ratio of multiplicity spectrum unfolded separately for three zenith angle regions to all-sky result.[]{data-label="fig-emult-angerr"}](angular_86_dr.eps){width="220pt"}

Figure \[fig-bundlecuteffect\] shows the true simulation-derived number of muons at closest approach to the center of the detector for events with a fixed total number of registered photo-electrons. On the right hand side of the distribution, the selection criteria eliminate very energetic tracks that pass through an edge or just outside the detector. On the left, the tail of low-multiplicity tracks containing high-energy muons, which are bright mainly because of exceptional catastrophic losses, is reduced.

Derivation of Experimental Measurement
--------------------------------------

The relation between the scaled parameter $E_{\rm{mult}}$ and the actual muon multiplicity in a specific detector $N_{\rm{\mu,det}}$ can be expressed as $$\label{emultrel} E_{\rm{mult}}=g_{\rm{scale}}(\cos\textrm{ }\theta)\cdot N_{\rm{\mu,det}}^{1/\alpha},$$ where $g_{\rm{scale}}(\cos\textrm{ }\theta)$ is a simulation-derived function accounting for angular dependence of muon production and absorption in the surrounding material. The effects of local atmospheric conditions and selection efficiency are accounted for by a separate acceptance correction term. For the experimental measurement of the parameter $E_{\rm{mult}}$, it is first necessary to derive expressions for the terms on the right hand side of Eq. \[emultrel\]. The resulting parameter can then be related to the analytical form of the bundle multiplicity spectrum by spectral unfolding. A numerical value of $0.79\pm0.02$ for the parameter $\alpha$ was determined by fitting a power law function to the relation between primary energy and muon multiplicity.
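In code form, Eq. \[emultrel\] amounts to a one-line transformation. The sketch below uses the fitted $\alpha = 0.79$; the constant stand-in for $g_{\rm{scale}}$ is a placeholder, since the actual simulation-derived parametrization is not quoted in the text.

```python
ALPHA = 0.79  # fitted value quoted in the text (0.79 +/- 0.02)

def g_scale(cos_zen):
    # Placeholder: the real g_scale(cos theta) is a simulation-derived
    # function accounting for muon production and absorption in the
    # surrounding material; a constant is used here only so the sketch runs.
    return 1.0

def e_mult(n_mu_det, cos_zen):
    """Scaled multiplicity parameter E_mult of Eq. [emultrel]."""
    return g_scale(cos_zen) * n_mu_det ** (1.0 / ALPHA)
```

Because $1/\alpha > 1$, $E_{\rm{mult}}$ grows faster than linearly with the detected muon multiplicity.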
The difference from the original description [@tomsbook], which gives a surprisingly accurate estimate of 0.757, is likely a consequence of advances in the understanding of air shower physics during the last three decades. Recent calculations finding a lower value for $\alpha$ are only applicable in the small region of phase space of $A\cdot E_{\mu}/E_{\rm{prim}} > 0.1$, where energy threshold effects become dominant [@Gaisser:2014bja]. In the analysis sample, the energy loss of muons in the detector is to a good approximation proportional to the number of muons $N_{\mu}$, and to the total energy of muons contained in the bundle, as illustrated in Fig. \[fig-bundleparamdep\]. An experimental observable corresponding to the muon multiplicity can therefore be constructed through a parametrization of the detector response based on a Monte-Carlo simulation, in the simplest case using the proportionality between energy deposition and total amount of registered photo-electrons described in Sec. \[sec:lolev-obs\]. To reduce biases and take advantage of the opportunity to investigate systematic effects, the procedure was performed for four different muon energy estimators.
These are:

![image](emult_errors_uncorrel.eps){width="220pt"} ![image](emult_errors.eps){width="220pt"}

  Source                Type           Variation                                   Effect                      Comment
  --------------------- -------------- ------------------------------------------- --------------------------- ----------------------------------------
  Composition           uncorrelated   Fe, protons                                 variable                    Residual bias near threshold
  Energy Estimator      uncorrelated   4 discrete values                           variable                    Derived from data
  Angular Acceptance    uncorrelated   3 zenith regions                            $\pm 10\%$ Flux Scaling     Estimated from data
  Light Yield           correlated     $\pm 10\%$                                  $\pm 13\%$ Energy Shift     Composite Scalar Factor
  Ice Optical           correlated     10% Scattering, Absorption                  $\pm 25\%$ Flux Scaling     Global variations around default model
  Hadronic Model        correlated     discrete                                    $\pm 10\%$ Flux Scaling     EPOS/QGSJET/SIBYLL
  Seasonal Variations   correlated     Summer vs. Winter                           $\pm 5\%$ Flux Scaling      Estimated from data
  Muon Energy Loss      correlated     Theoretical uncertainty [@Koehne:2013gpa]   $\pm 1\%$                   Official IceCube Value

![image](emult_models.eps){width="220pt"} ![image](emult_hypoth.eps){width="220pt"}

- The total event charge $Q_{\rm{tot}}$, measured in photo-electrons. Charge registered by DeepCore was excluded to avoid biases due to closer DOM spacing and higher PMT efficiency in the sub-array.
- The truncated mean of the muon energy loss [@Abbasi:2012wht].
- The mean energy deposition calculated with the DDDDR method described in \[sec:ddddr\].
- The likelihood-based energy estimator *MuEx* [@Aartsen:2013vja].

The resolution of the multiplicity proxies as a function of the true number of muons at closest approach to the center of the detector is shown in Figure \[fig-nest-res\]. Except for the raw total number of photo-electrons, all estimators perform in a remarkably similar way in simulation. The presence of individual outliers illustrates the motivation to use more than one method to ensure stability of the result.
The angular-dependent scaling function $g_{\rm{scale}}(\cos\textrm{ }\theta_{\rm{zen}})$ was parametrized based on simulated data. Using the RooUnfold algorithm [@Adye:2011gm], a spectral unfolding was applied to the measured distribution of $E_{\rm{mult}}$. The differential flux was then related to the unfolded and histogrammed experimental data as: $$\label{emultparam_difflux} \frac{d\Phi}{dE_{\rm{mult}}}=c(\Delta E_{\rm{bin}},t_{\rm{exp}})\cdot\eta(E_{\rm{mult}})\cdot\frac{\Delta N_{\rm{ev}}}{\Delta \log_{\rm{10}}E_{\rm{mult}}}$$ Here the proportionality constant *c* accounts for the effective livetime of the data sample and the bin size of the histogram. The detector acceptance $\eta(E_{\rm{mult}})$, whose exact form depends on atmospheric conditions, needs to be derived from simulation. The approach can be verified by a full-circle test, as shown in Fig. \[fig-fullcircle-mult\]. Each of the benchmark models, chosen to reflect extreme assumptions about the behavior of the cosmic ray flux, can be reproduced by applying the analysis procedure to simulated data.

Result {#sec:multresult}
------

Systematic uncertainties applying to the experimental measurement are summarized in Table \[mult\_syst\_table\]. The categorization by type corresponds to bin-wise fluctuations (*uncorrelated*) and overall scaling effects (*correlated*). Of special interest is the angular variation, which dominates the total bin-wise uncertainty over a wide range. The effect is illustrated in Fig. \[fig-emult-angerr\]. Splitting the data according to the reconstructed zenith angle into three separate event samples results in spectra that are similar in shape, but whose absolute normalization varies within a band of approximately $\pm 10\%$. As the difference does not appear to be uniform, it has been conservatively assumed to lead to uncorrelated bin-wise variations in the all-sky spectrum.
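The bookkeeping of Eq. \[emultparam\_difflux\] can be sketched as follows. The function and its inputs (binning, livetime, solid angle, acceptance array) are illustrative; in the actual analysis the acceptance $\eta$ comes from simulation.

```python
import numpy as np

def differential_flux(counts, log10_edges, livetime_s, solid_angle_sr, eta):
    """dPhi/dE_mult per bin, following Eq. [emultparam_difflux].

    counts  -- unfolded event counts per log10(E_mult) bin
    eta     -- detector acceptance eta(E_mult) per bin (simulation-derived)
    The prefactor c collapses livetime, solid angle, and bin width.
    """
    counts = np.asarray(counts, dtype=float)
    dlog = np.diff(np.asarray(log10_edges))  # Delta log10(E_mult) per bin
    c = 1.0 / (livetime_s * solid_angle_sr)
    return c * np.asarray(eta) * counts / dlog

# Two equally filled bins with flat acceptance give equal flux values:
flux = differential_flux([100, 100], [5.0, 5.5, 6.0],
                         livetime_s=3.15e7, solid_angle_sr=2 * np.pi,
                         eta=[1.0, 1.0])
```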
Nevertheless, magnitude and direction are similar to the unexplained effect described in Section \[sec:le-muons\], suggesting a possible common underlying cause. The final result, after successive addition of systematic error bands in quadrature, is shown in Fig. \[fig-emult-errors\]. Since the muon multiplicity is not a fundamental parameter of the cosmic ray flux, it is important to find an appropriate way to interpret it. Two possibilities are illustrated in Fig. \[fig-emult-interp\]. The first is by expressing cosmic ray flux models in terms of $E_{\rm{mult}}$ through application of Eq. \[emultdef\]. Experimental result and prediction can then be directly related. The second is to translate the multiplicity distribution to an energy spectrum under a particular hypothesis for the elemental composition. By default, the scaling of $E_{\rm{mult}}$ corresponds to iron, but changing it to any other primary nucleus type is straightforward. ![Top: Relation between most energetic single energy loss and leading muon energy within the IceCube detector volume. Middle: Distribution of true energy parameters for two slices in top histogram. Bottom: Fraction of muon surface energy remaining at point of entry into detector volume as a function of zenith angle. Figures are based on simulated events with primary flux weighted to $E^{-2.7}$ power law spectrum and correspond to final analysis sample before application of minimum shower energy criterion. The black curves represent mean and spread of the distribution.[]{data-label="fig-hemu_enrel"}](cmuenrel.eps "fig:"){width="220pt"} ![Top: Relation between most energetic single energy loss and leading muon energy within the IceCube detector volume. Middle: Distribution of true energy parameters for two slices in top histogram. Bottom: Fraction of muon surface energy remaining at point of entry into detector volume as a function of zenith angle.
Figures are based on simulated events with primary flux weighted to $E^{-2.7}$ power law spectrum and correspond to final analysis sample before application of minimum shower energy criterion. The black curves represent mean and spread of the distribution.[]{data-label="fig-hemu_enrel"}](lossmuonslice.eps "fig:"){width="220pt"} ![Top: Relation between most energetic single energy loss and leading muon energy within the IceCube detector volume. Middle: Distribution of true energy parameters for two slices in top histogram. Bottom: Fraction of muon surface energy remaining at point of entry into detector volume as a function of zenith angle. Figures are based on simulated events with primary flux weighted to $E^{-2.7}$ power law spectrum and correspond to final analysis sample before application of minimum shower energy criterion. The black curves represent mean and spread of the distribution.[]{data-label="fig-hemu_enrel"}](smuenrel.eps "fig:"){width="220pt"}

The result can then be overlaid with independent cosmic ray flux measurements. An unambiguous derivation of the average mass as a function of the primary energy is not possible due to the degeneracy between mass and energy in the multiplicity measurement. However, the qualitative variation of composition with energy is consistent with a gradual change towards heavier elements in the range between the knee and 100 PeV. If the current description of muon production in air showers is correct, and the external measurements are reliable, a purely protonic flux would be strongly disfavored up until the ankle region.

High-Energy Muons {#sec:hemu}
=================

Principle {#principle}
---------

The presence of a single exceptionally strong catastrophic loss can be used both to tag high-energy muons and to estimate their energy. The first part is obvious: An individual particle shower along a track can only have been caused by a parent of the same energy or above.
Simulated data indicate that instances in which two or more muons in the same bundle suffer a catastrophic loss simultaneously in a way that is indistinguishable in the energy reconstruction are exceedingly rare. The quantification is based on the close relation between the energy of the catastrophic loss used to identify the event and that of the leading muon, a consequence of the steeply falling spectrum. Once the muon energy at the point of entry into the detector volume has been determined, the most likely energy at the surface of the ice can be estimated by taking into account the zenith angle, as illustrated in Fig. \[fig-hemu\_enrel\]. This method was developed specifically for the purpose of measuring the energy spectrum of atmospheric muons. As shown in Fig. \[fig-evsamp-trupar\], the leading particle typically only accounts for a limited fraction of the total event energy, and the application of energy measurement techniques optimized for single neutrino-induced muon tracks could lead to substantial biases in the case of a large accompanying bundle. Higher-order corrections are necessary to account for correlations and the effect of variations in the distance to the surface due to the vertical extension of the detector. All relations in this study were based on parametrizations using simulated events. A full multi-dimensional unfolding would be preferable, but requires a substantial increase in simulation statistics.

Event Selection
---------------

![Example of the peak-to-median energy loss ratio in a high-energy muon candidate event found in experimental data. Top: Reconstructed differential energy loss as a function of distance to the surface, measured along the reconstructed track. Details of the method are described in \[sec:ddddr\]. Bottom: Image of the event. The volume of each sphere is proportional to the signal registered by a given DOM. The color scheme corresponds to the arrival time of the first photon (red: earliest, blue: latest).
Reconstructed event parameters are: $E_{\rm{loss}} = 550^{+220}_{-160} \textrm{ TeV}$, $E_{\rm{\mu,surf}} = 1.03^{+0.62}_{-0.39} \textrm{ PeV}$, $\theta_{\rm{zen}} = 45.1 \pm 0.2^{\circ}$ []{data-label="fig-pkmedrat"}](bigdatamu_loss.eps "fig:"){width="220pt"} ![Example of the peak-to-median energy loss ratio in a high-energy muon candidate event found in experimental data. Top: Reconstructed differential energy loss as a function of distance to the surface, measured along the reconstructed track. Details of the method are described in \[sec:ddddr\]. Bottom: Image of the event. The volume of each sphere is proportional to the signal registered by a given DOM. The color scheme corresponds to the arrival time of the first photon (red: earliest, blue: latest). Reconstructed event parameters are: $E_{\rm{loss}} = 550^{+220}_{-160} \textrm{ TeV}$, $E_{\rm{\mu,surf}} = 1.03^{+0.62}_{-0.39} \textrm{ PeV}$, $\theta_{\rm{zen}} = 45.1 \pm 0.2^{\circ}$ []{data-label="fig-pkmedrat"}](bigdatamu.pdf "fig:"){width="180pt"}

The selection of muon events with exceptional stochastic energy losses is primarily based on reconstructing the differential energy deposition and selecting tracks according to the ratio of peak to median energy loss as illustrated in Fig. \[fig-pkmedrat\]. All other criteria are ancillary, and are only applied to minimize a possible contribution from misreconstructed tracks. An overview of the selection is given in Table \[hemu\_cut\_table\].

  Quality Level                             Events ($\times10^{6}$)   Rate \[$s^{-1}$\]   Comment
  ----------------------------------------- ------------------------- ------------------- -----------------------------------------
  All $Q_{\rm{tot}}>1,000$ p.e.             38.28                     1.334               Base Sample (86-String Configuration)
  $\cos\textrm{ }\theta_{\rm{zen}} > 0.1$   37.99                     1.324               Track zenith angle
  $q_{\rm{max}}/Q_{\rm{tot}}<0.5$           34.46                     1.201               Brightness dominated by single DOM
  $L_{\rm{dir}}>800\rm{m}$                  27.55                     0.960               Track length in detector
  $N_{\rm{DOM, 150m}} > 40$                 24.71                     0.861               Stochastic loss containment
  peak/median $dE/dx > 10$                  2.795                     0.0974              **Exceptional energy loss along track**
  median $dE/dx > 0.2 \rm{GeV/m}$           2.769                     0.0965              Exclude dim tracks
  $E_{\rm{casc}} >$ 5 TeV                   0.769                     0.0268              *Exclude threshold region*

A special case is the exclusion of events with a reconstructed shower energy of less than 5 TeV. This requirement was added to reduce uncertainties in the threshold region, which may not be well described by current understanding of systematic detector effects. The reason to choose a value of 5 TeV is that a typical electromagnetic shower of that energy will produce a signal of about 1,000 photo-electrons, coinciding with the base sample selection.

Energy Estimator Construction
-----------------------------

![Relation between reconstructed and true surface energy for simulated atmospheric muon data before excluding events with reconstructed shower energy of less than 5 TeV. The primary particle flux in the simulation was weighted according to a power law of the form $E^{-2.7}$. Also shown are mean and spread of the distribution.[]{data-label="fig-hemu_eneres"}](emuest_ereco.eps){width="220pt"}

The energy reconstruction is based on the deterministic reconstruction method discussed in \[sec:ddddr\], which was designed specifically for this purpose. Subsequently developed likelihood methods [@Aartsen:2013vja] were evaluated, but gave no improvement in resolution while introducing a tail of substantially overestimated energies. In the first step, the energy $E_{\rm{casc,reco}}$ of the strongest loss (“cascade”) along the track was determined.
The exact value is almost identical to the raw reconstructed energy $E_{\rm{casc,raw}}$ from the DDDDR algorithm, except for a minor correction factor of the form: $$\label{eq-cascest-atmu} \log_{\rm{10}}E_{\rm{casc,reco}}/\rm{GeV} = 1.6888\cdot e^{0.214\cdot \log_{\rm{10}}E_{\rm{casc,raw}}/\rm{GeV}}$$ In the energy region between 5 TeV and 1 PeV, the difference between raw and final value is smaller than 0.1 in $\log_{\rm{10}}E$. The stochastic energy loss $E_{\rm{casc,reco}}$ was then used to estimate the most likely energy of the leading muon at the surface $E_{\rm{\mu,true}}^{\rm{surf}}$ as a function of zenith angle $\theta_{\rm{zen}}$ and slant depth $d_{\rm{slant}}=z_{\rm{vert}}/\cos\textrm{ }\theta_{\rm{zen}}$, where $z_{\rm{vert}}$ is the vertical distance to the surface at the point of closest approach to the center of the detector. ![All-Sky surface flux predictions [@Fedynitch:2012fs] for three different cosmic ray models and spectrum extracted from full IceCube detector simulation with same primary weight. The error bars on the measured spectrum are the consequence of limited statistics.[]{data-label="fig-hemu-fullcirc"}](hemu_fullcirc.eps){width="220pt"} The parametrized form of the measured muon surface energy is: $$\label{musurf_param} \begin{split} \log_{\rm{10}}E_{\rm{\mu,reco}}^{\rm{surf}}/\rm{GeV}= 0.554+0.884\cdot \\ \left(\log_{\rm{10}}(3.44\cdot E_{\rm{casc,reco}}/\rm{GeV}) + \textit{f}_{\rm{corr}}(\cos \textrm{ }\theta_{\rm{zen}}, \textit{d}_{\rm{slant}}) \right) \end{split}$$ where $f_{\rm{corr}}(\cos \textrm{ }\theta_{\rm{zen}}, d_{\rm{slant}})$ is a fifth-order polynomial. This relation represents a purely empirical parametrization based on the interpolation of detector-specific simulated data. The relation between the experimental muon surface energy estimator defined in Eq.
\[musurf\_param\] and the true energy of the leading muon at the surface is shown in Fig. \[fig-hemu\_eneres\]. It is important to note that the definition is only valid for spectra reasonably close to that used in the construction.

![image](hemu_allsky_errcomp.eps){width="220pt"} ![image](hemuflux_trip.eps){width="220pt"}

  Source                Type           Variation                                          Effect                      Comment
  --------------------- -------------- -------------------------------------------------- --------------------------- -------------------------
  Composition           uncorrelated   Fe, protons                                        variable                    Negligible Above 25 TeV
  Angular Acceptance    uncorrelated   $0.2\cdot (\cos\textrm{ }\theta_{\rm{zen}}-0.5)$   See Text                    Unknown Cause
  DOM Efficiency        correlated     $\pm 10\%$                                         $\pm 10\%$ Energy Shift     Effective light yield
  Optical Ice           correlated     10% Scattering, Absorption                         $\pm 10\%$ Energy Shift     Global variations
  Seasonal Variations   correlated     Summer vs. Winter                                  $\pm 5\%$ Flux Scaling      Prompt Invariant
  Muon Energy Loss      correlated     Theoretical uncertainty [@Koehne:2013gpa]          $\pm 1\%$                   Official IceCube Value

Energy Spectrum {#sec-hemu-espec}
---------------

The final muon energy spectrum was calculated by dividing the histogrammed number of measured events $N_{\rm{data}}$ by a generic prediction from a full detector simulation $N_{\rm{detMC}}$, and then multiplying the ratio by the corresponding flux $\Phi_{\rm{surfMC}}$ at the surface. Specifically, IceCube detector simulation and external surface data set [@Fedynitch:2012fs] were weighted according to a power law of the form $E^{-2.7}$: $$\frac{d\Phi_{\rm{\mu,exp}}}{dE_{\mu}}=\frac{\Delta N_{\rm{data}}}{\Delta E_{\rm{\mu,reco}}^{\rm{surf}}}\cdot \left( \frac{\Delta N_{\rm{detMC2.7}}}{\Delta E_{\rm{\mu,reco}}^{\rm{surf}}} \right )^{-1}\cdot \frac{d\Phi_{\rm{\mu,surfMC2.7}}}{dE_{\rm{\mu,true}}^{\rm{surf}}}$$ Figure \[fig-hemu-fullcirc\] demonstrates the validity of the analysis procedure, and the robustness of the energy estimator construction against small spectral variations.
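The two-step energy assignment of Eqs. \[eq-cascest-atmu\] and \[musurf\_param\] can be sketched as below. The correction polynomial $f_{\rm{corr}}$ is set to zero as a placeholder, since its coefficients are not quoted in the text; the function names are ours.

```python
import math

def log10_e_casc_reco(log10_e_casc_raw):
    """Corrected cascade energy in log10(E/GeV), Eq. [eq-cascest-atmu]."""
    return 1.6888 * math.exp(0.214 * log10_e_casc_raw)

def f_corr(cos_zen, d_slant):
    # Placeholder for the fifth-order polynomial correction in
    # cos(theta_zen) and slant depth; coefficients not given in the text.
    return 0.0

def log10_e_mu_surf(e_casc_reco_gev, cos_zen, d_slant):
    """Most likely surface energy of the leading muon, Eq. [musurf_param]."""
    return 0.554 + 0.884 * (math.log10(3.44 * e_casc_reco_gev)
                            + f_corr(cos_zen, d_slant))
```

Between 5 TeV and 1 PeV the correction of Eq. \[eq-cascest-atmu\] indeed stays within about 0.1 in $\log_{\rm{10}}E$ of the raw value, as stated in the text.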
The surface flux for different primary model assumptions can be extracted accurately from simulated experimental data. While a full unfolding would be preferable, the currently available simulated data statistics do not allow for the implementation of such a procedure.

![image](hemu_modflux_h3a_all.eps){width="220pt"} ![image](hemu_modflux_gst_all.eps){width="220pt"}

  CR Model                                         Best Fit (ERS)   $\chi^{2}$/dof   1$\sigma$ Interval (90% CL)   Pull ($\Delta\gamma$)   $\sigma(\Phi_{\rm{Prompt}} > 0)$
  ------------------------------------------------ ---------------- ---------------- ----------------------------- ----------------------- ----------------------------------
  GST-Global Fit [@Gaisser:2013bla]                2.14             7.96/9           1.27 - 3.35 (0.77 - 4.30)     0.01                    2.64
  H3a [@Gaisser:2013bla]                           4.75             9.09/9           3.17 - 7.16 (2.33 - 9.34)     -0.03                   3.97
  Zats.-Sok. [@Zatsepin:2006ci]                    6.23             13.98/9          4.55 - 8.70 (3.59 - 10.68)    -0.23                   5.24
  PG Constant $\Delta\gamma$ [@Hoerandel:2002yg]   0.94             9.07/9           0.36 - 1.63 ($< 2.15$)        0.03                    1.52
  PG Rigidity [@Hoerandel:2002yg]                  6.97             5.86/9           4.73 - 10.61 (3.53 - 13.83)   -0.06                   4.35

In the derivation of the experimental result, the systematic uncertainties listed in Table \[he\_syst\_table\] were applied. The classification according to correlation is the same as in Section \[sec:multresult\]. Except for a small effect due to primary composition near threshold, all experimental uncertainties lead to correlated errors. A special case is the angular acceptance. In light of the low-energy muon and multiplicity spectrum studies described in Sec. \[sec:lolev-result\], it is necessary to take into account the possibility of an unidentified error source distorting the distribution. This was done by calculating the energy spectrum once for the default angular acceptance and once with simulated events re-weighted by an additional factor $w_{\rm{corr}} = \alpha\cdot(\cos\textrm{ }\theta_{\rm{zen}}-0.5)$, where $\alpha$ corresponds to an ad-hoc linear correction parameter.
The value $\alpha=0.2$, corresponding to the variation of $\pm 10\%$ seen in the other analyses, reflects the assumption that the effect is independent of the event sample. The experimentally measured muon energy spectrum is shown in Fig. \[fig-hemuflux-trip\]. Distortions due to possible angular effects are small compared to the statistical uncertainty. Within the present accuracy, the average all-sky flux above 15 TeV can be approximated by a simple power law: $$\label{muflux_pl} \begin{split} \frac{d\Phi_{\mu}}{dE_{\mu}}=1.06^{+0.42}_{-0.32}\times10^{-10}\textrm{s}^{-1}\textrm{cm}^{-2}\textrm{srad}^{-1}\textrm{TeV}^{-1}\\ \cdot\left(\frac{E_{\mu}}{\textrm{10 TeV}}\right)^{-3.78\pm0.02(stat.)\pm0.03(syst.)} \end{split}$$ The translation to a vertical flux as commonly used in the literature is not trivial, since the angular dependence of the contribution from prompt hadron decays is different from that of light mesons, and its magnitude is a priori unknown. The almost featureless shape of the measured spectrum might appear as a striking contradiction to the naive expectation of seeing a clear signature of the sharp cutoff of the primary nucleon spectrum at the knee. However, closer examination reveals that this is very likely a simple coincidence resulting from the fact that the prompt contribution approximately compensates for the effect of the knee if the flux is averaged over the whole sky.

![image](hemu_modflux_h3a_vert.eps){width="180pt"} ![image](hemu_modflux_gst_vert.eps){width="180pt"} ![image](hemu_modflux_h3a_horiz.eps){width="180pt"} ![image](hemu_modflux_gst_horiz.eps){width="180pt"}

Calculating the spectra separately for angles above and below 60 degrees from zenith shows the expected increase of the muon flux toward the horizon. Beyond approximately 300 TeV, the two curves appear to converge, consistent with the emergence of an isotropic prompt component. A quantitative discussion of the angular distribution is given in the following section.
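For reference, the fitted power law of Eq. \[muflux\_pl\] evaluates as below; this sketch uses central fit values only and ignores the quoted uncertainties.

```python
def all_sky_muon_flux(e_mu_tev):
    """Average all-sky muon flux above ~15 TeV, Eq. [muflux_pl],
    in s^-1 cm^-2 sr^-1 TeV^-1 (central fit values only)."""
    phi_10tev = 1.06e-10  # normalization at 10 TeV
    gamma = 3.78          # spectral index
    return phi_10tev * (e_mu_tev / 10.0) ** (-gamma)

# The steep index implies a drop of 10^3.78 (roughly 6000) per energy decade:
drop_per_decade = all_sky_muon_flux(10.0) / all_sky_muon_flux(100.0)
```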
The final all-sky spectrum was then fitted to a combination of “conventional” light meson and prompt components, with a Gaussian prior of $\Delta\gamma = 0.1$ applied to the spectral index. The result in the case of H3a and GST-GF models is illustrated in Fig. \[fig-hemuflux-allsky\]. The difference between the two measurements is due to the presence of a spectral component in the GST-GF model with a power-law index of -2.3 to -2.4 compared to about -2.6 in H3a. Even though the exponential cutoff energy of 4 PeV is identical in both cases, the influence of the steepening at the knee is effectively reduced in the harder spectrum. The best fit values for the prompt contribution are listed in the second column of Table \[modeldep\_table\] relative to the ERS flux [@Enberg:2008te]. Note that unlike the theoretical prediction, which applies specifically to neutrinos from charm, the experimental result presented here is the sum of heavy quark and light vector meson decays. A detailed discussion can be found in \[sec:simple-prompt\]. Since only the energy spectrum is used here, the partial degeneracy between the behavior of the all-nucleon flux at the knee and the prompt contribution is preserved. Consequently, the magnitude of the prompt component strongly depends on the primary model. Except for the proposal by Zatsepin and Sokolskaya [@Zatsepin:2006ci], each of the flux assumptions can be reconciled with the data without a major spectral adjustment.

Angular Distribution
--------------------

The ambiguity between nucleon flux and prompt contribution can be resolved by the addition of angular information. Figure \[fig-hemuflux-region\] shows the best fit results from the previous section compared to data separately for angles above and below 60 degrees from zenith. While neither of the two models shown here is obviously favored, it is clear that a substantial prompt contribution is needed in either case to explain the difference between the two regions.
A quantitative treatment can be derived from the different behavior of light meson and prompt components. The prompt flux is isotropic, whereas the contribution from light meson decays is to a good approximation inversely proportional to $\cos\textrm{ }\theta_{\rm{zen}}$ [@Illana:2010gh]. Using the prompt flux description derived in \[sec:simple-prompt\], the experimentally measured fraction of prompt muons as a function of muon energy and zenith angle is: $$\label{eq-promptzen} \begin{split} f_{\rm{prompt}}(E_{\mu},\cos\textrm{ }\theta)\equiv\frac{\Phi_{\rm{prompt}}(E_{\mu},\cos\textrm{ }\theta)}{\Phi_{\rm{total}}(E_{\mu},\cos\textrm{ }\theta)}\\ \simeq\left(1+\frac{E_{\rm{1/2}}\cdot\cos\textrm{ }\theta}{E_{\mu}\cdot f_{\rm{corr}}(E_{\mu})}\right)^{-1} \end{split}$$ In this approximation, the prompt contribution is described independently of the muon flux $\Phi_{\mu}(E_{\mu})$. The relative contribution of the two components at a given energy can therefore be measured from the angular distribution alone. The effects of higher-order terms, such as the departure of the angular distribution from a pure $\sec\theta_{\rm{zen}}$ dependence due to the curvature of the Earth and deviations of the nucleon spectrum from a simple power law, have been estimated to be less than 10% using a full DPMJET [@Berghaus:2007hp] simulation of the prompt component. ![Ratio parameter $r_{\rm{hor,vert}}$ expressing deviation of angular distribution from purely conventional flux for various prompt levels in simulation. The size of the error bars corresponds to the statistical uncertainty due to limited availability of simulated data.[]{data-label="fig-hemu-rhorvert-illu"}](rhorvert_illu.eps){width="220pt"} In this study, the measurement of the prompt flux was based on splitting the event sample into two separate sets according to the reconstructed zenith angle.
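Equation \[eq-promptzen\] translates directly into a prompt-fraction estimate. In the sketch below, $E_{1/2}$ and the correction $f_{\rm{corr}}$ are taken as plain inputs, since their parametrizations are given elsewhere (\[sec:simple-prompt\]); the numerical values in the example are purely illustrative.

```python
def prompt_fraction(e_mu, cos_zen, e_half, f_corr=1.0):
    """Prompt fraction f_prompt(E_mu, cos theta), Eq. [eq-promptzen].

    e_half -- crossover energy E_1/2 (same units as e_mu); input assumption
    f_corr -- higher-order correction factor f_corr(E_mu), of order 1
    """
    return 1.0 / (1.0 + e_half * cos_zen / (e_mu * f_corr))

# The prompt component dominates toward the horizon (cos theta -> 0)
# and at high energy, as expected for an isotropic contribution:
f_vert = prompt_fraction(e_mu=100.0, cos_zen=1.0, e_half=1000.0)
f_horiz = prompt_fraction(e_mu=100.0, cos_zen=0.1, e_half=1000.0)
```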
The ratios between experimental data and Monte-Carlo simulation were then combined into a single parameter defined as: $$\label{eq-rhorvert} r_{\rm{hor,vert}}=\frac{N_{\rm{\mu,data}}(\theta_{\rm{zen}}>60^{\circ})}{N_{\rm{\mu,MC}}(\theta_{\rm{zen}}>60^{\circ})}\cdot\left(\frac{N_{\rm{\mu,data}}(\theta_{\rm{zen}}<60^{\circ})}{N_{\rm{\mu,MC}}(\theta_{\rm{zen}}<60^{\circ})}\right)^{-1}$$ The variation as a function of muon energy is illustrated in Fig. \[fig-hemu-rhorvert-illu\], where $N_{\rm{\mu,MC}}$ represents the purely conventional flux, and $N_{\rm{\mu,data}}$ is derived from simulation weighted according to two assumptions about the prompt flux level. Using two discrete samples is not the most statistically powerful way to exploit the angular information, but minimizes fluctuations resulting from limited simulation availability. ![Best angular prompt fit using default assumptions about systematic uncertainties. Expressed in multiples of the ERS flux [@Enberg:2008te], the result is $4.9\pm0.9$, with $\chi^{2}$/dof=20.0/15.[]{data-label="fig-hemu-rhorvert"}](rhorvert_nocorr.eps){width="220pt"} The experimental result is shown in Fig. \[fig-hemu-rhorvert\]. The best estimate for the prompt flux is significantly higher than the theoretical prediction, but well within the margin permitted by the model-dependent fits to the energy spectrum discussed in the previous section. ![Top: Two-dimensional probability distribution function of angular prompt fit results in the presence of an ad-hoc correction term as described in Section \[sec-hemu-espec\]. The y-axis corresponds to the angular adjustment parameter $\alpha$. Bottom: Result for best overall fit with $\chi^{2}/$dof=14.9/15, located at $(2.41; 0.18)$.[]{data-label="fig-rhorvert-corr"}](marg_pdf.eps "fig:"){width="220pt"} ![Top: Two-dimensional probability distribution function of angular prompt fit results in the presence of an ad-hoc correction term as described in Section \[sec-hemu-espec\]. 
The y-axis corresponds to the angular adjustment parameter $\alpha$. Bottom: Result for best overall fit with $\chi^{2}/$dof=14.9/15, located at $(2.41; 0.18)$.[]{data-label="fig-rhorvert-corr"}](rhorvert_corr.eps "fig:"){width="220pt"} Given the presence of an unknown systematic error in the low-level and high-multiplicity atmospheric muon samples as described in Sec. \[sec:lolev-result\], it is necessary to take into account the possibility that the angular distribution might be distorted. As the source of the effect is still unknown, the only choice is to evaluate the influence on the measurement by applying a generic correction term. Figure \[fig-rhorvert-corr\] shows the consequence of re-weighting the simulated data by a linear term of the form $1+\alpha\cdot(\cos\textrm{ }\theta_{\rm{zen}})$. The two-dimensional distribution demonstrates that an imbalance between horizontal and vertical tracks with a magnitude of 18% describes the data best. This value is suggestively close to the distortions observed in Sec. \[sec:le-muons\] and \[sec:bundles\], although the limited statistical significance does not permit a firm conclusion.

  Sample                    Best Fit (ERS)   1$\sigma$ Interval (90% CL)   $\sigma(\Phi_{\rm{prompt}} > 0)$
  ------------------------- ---------------- ----------------------------- ----------------------------------
  Uncorrected               4.93             4.05-5.87 (3.55-6.56)         9.43
  Marginalized Ang. Corr.   3.19             1.64-5.48 (0.98-7.26)         3.46

  : Best fit values and confidence intervals for the angular prompt flux measurement.[]{data-label="angleprompt_table"}

![Significance of prompt flux measurement based on angular information. The individual curves correspond to different assumptions about systematic effects as described in the text.
Also shown is the hypothetical result which could be achieved with one year of experimental data given unlimited availability of simulated events, assuming a best fit value of 1.8 ERS consistent with theoretical predictions for inclusive prompt muon flux.[]{data-label="fig-hemu-potential"}](heresult_angular.eps){width="220pt"} Discussion ---------- A definite measurement of the prompt flux is not yet possible. Depending on which assumption is chosen for the systematic error, the final result varies considerably. Figure \[fig-hemu-potential\] shows the significance levels for default assumption and full marginalization over the linear correction factor. Best fit values and confidence intervals for each case are listed in Table \[angleprompt\_table\]. At present, the best neutrino-derived limit for the atmospheric prompt flux is 2.11 ERS at 90% confidence level [@Aartsen:2015knd]. This result was derived by a likelihood fit combining four independent measurements from IceCube, and includes both track-like ($\nu_{\mu}$ charged current) and shower-like ($\nu_{\rm{e}}$ and $\nu_{\tau}$ charged current, all-flavor neutral current) neutrino event topologies. For comparisons it is important to keep in mind that the atmospheric muon measurement result represents the inclusive prompt flux, potentially including a substantial contribution from electromagnetic decays of unflavored vector mesons [@Fedynitch:2015zma]. It is also worth noting that recent studies show that the uncertainty of theoretical models for atmospheric lepton production in charm decays are larger than previously assumed [@Garzelli:2015psa]. None of the model fluxes selected for the fit to the muon energy spectrum requires a prompt flux in disagreement with the neutrino measurement, with the exception of the proposal by Zatsepin and Sokolskaya. 
The rigidity-dependent poly-gonato model lacks an extragalactic component whose inclusion would lead to a higher nucleon flux and therefore a lower estimate for the prompt contribution. The result based on the angular distribution alone is almost independent of the nucleon flux and would even at the present stage be statistically powerful enough to constrain competing primary nucleon flux models around the knee. Unfortunately this possibility is precluded by the likely presence of an unidentified systematic error source. Both uncorrected and ad-hoc corrected measurements could be reconciled with different predictions based on data from air shower arrays, notably the H3a and Global Fit models [@Gaisser:2013bla]. At present, the angular measurement is also fully consistent with constraints derived from neutrino data. Conclusion and Outlook ====================== The influence of cosmic rays on IceCube data is significant and varied. Given the presence of several energy regions where external measurements by direct detection or air shower arrays are sparse, it is necessary to develop a comprehensive picture including neutrinos, muons and surface measurements. Atmospheric muons play a privileged role, as they cover the largest energy range and provide the highest statistics. A consistent description of all experimental results will be an important contribution for the understanding of cosmic rays in general. The studies presented in this paper have outlined the opportunities to extract meaningful results from atmospheric muon data in a large-volume underground particle detector. Once systematic effects are fully understood and controlled, it will be possible to measure the muon energy spectrum from 1 TeV to beyond 1 PeV by combining measurements based on angular distribution and catastrophic losses. Agreement between the two methods can then be verified in the overlap region around 10-20 TeV. 
There is a strong indication for the presence of a component from prompt hadron decays in the muon energy spectrum, with best fit values generally falling on the higher side of theoretical predictions. In the future, it will be possible for the IceCube detector to precisely measure the prompt contribution and to constrain the all-nucleon primary flux before and around the knee. With more data accumulating, independent verification of the prompt measurement based on seasonal variations of the muon flux [@Desiati:2010wt] will soon become feasible as well. The muon multiplicity spectrum provides access to the cosmic ray energy region beyond the knee. Even though a direct translation of the result to primary energy and average mass is impossible, combination with results from surface detectors or comparisons to model predictions provides valuable insights. In coming years, the measurement can be extended further into the transition region around the ankle. A possible contribution from heavy elements to the cosmic ray flux at EeV energies should then be discernible. An important goal of this study was to verify the current understanding of systematic uncertainties. An unexplained effect was demonstrated using low-level data, and appears to be present in the other analysis samples as well. In order to improve the quality of future atmospheric muon measurements with IceCube, it will be essential to determine whether the observed discrepancy requires better understanding of the detector, or of the production mechanisms of muons in air showers. Comparisons with measurements from the upcoming water-based KM3NeT detector [@Margiotta:2014eaa] will be invaluable to decide whether the inconsistencies seen in IceCube data are due to the particular detector setup, or represent unexplained physics effects. Acknowledgements ================ We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S.
National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin - Madison, the Open Science Grid (OSG) grid infrastructure; U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Danish National Research Foundation, Denmark (DNRF) Data-Derived Deterministic Differential Deposition Reconstruction (DDDDR) {#sec:ddddr} ========================================================================= Concept ------- The energy deposition of muons at TeV energies passing through matter is not continuous and uniform, but primarily a series of discrete catastrophic losses. In order to exploit the information contained in the stochasticity of muon events, it is necessary to reconstruct the differential energy loss along their tracks. 
The study presented in this paper requires a robust method for identification and energy measurement of major stochastic losses. Its principle is to use muon bundles in experimental data to characterize photon propagation in the detector and apply the result to the construction of a deterministic energy estimator. ![Sketch of light attenuation around muon track in ice.[]{data-label="fig-d4r-lightatten"}](ddddr_lightatt.pdf){width="220pt"} Figure \[fig-d4r-lightatten\] shows a sketch of the photon intensity distribution around the reconstructed track of a muon bundle. In the ideal case of a perfectly transparent homogeneous medium and a precisely defined infinite one-dimensional track of arbitrarily high brightness, the light intensity would fall off as $1/d_{\rm{IP}}$, where the impact parameter $d_{\rm{IP}}$ is defined as the perpendicular distance to the track. Assuming the measured charge $q_{\rm{DOM}}$ in a given DOM to be proportional to the light density, and the emitted number of photons $N_{\rm{phot}}$ to be proportional to the energy deposition $\Delta E_{\mu}$, the relation between muon energy deposition and measurement then takes the form: $$\label{d4r_ideal} \Delta E_{\mu}/\Delta x \sim N_{\rm{phot}} \sim q_{\rm{DOM}} \cdot d_{\rm{IP}}$$ In reality, scattering and absorption in the detector medium require the addition of an exponential attenuation term $\exp(-d_{\rm{IP}}/\lambda_{\rm{att}})$: $$\label{d4r_att} N_{\rm{phot}} \sim q_{\rm{DOM}} \cdot d_{\rm{IP}} \cdot \exp(d_{\rm{IP}}/\lambda_{\rm{att}})$$ where the attenuation length $\lambda_{\rm{att}}$ depends on the local optical properties in a given part of the detector. Approximating the structure of individual ice layers as purely horizontal, $\lambda_{\rm{att}}$ is simply a function of the vertical depth $z_{\rm{vert}}$. ![Top: Lateral attenuation of photon intensity along muon bundle tracks in experimental data.
The vertical depth ranges, corresponding to DOM position relative to the center of the detector 1949 m below the surface, were chosen to illustrate the strongly varying optical properties of the ice. Bottom: Effective attenuation parameter $\lambda_{\rm{att}}$ derived from exponential fit to the data distribution. Experimental values are compared to Monte-Carlo simulation using reconstructed and true track parameters for calculation of the impact parameter $d_{\rm{IP}}$.[]{data-label="fig-d4r-lightatt"}](ddddr_trilayer.eps "fig:"){width="220pt"} ![Top: Lateral attenuation of photon intensity along muon bundle tracks in experimental data. The vertical depth ranges, corresponding to DOM position relative to the center of the detector 1949 m below the surface, were chosen to illustrate the strongly varying optical properties of the ice. Bottom: Effective attenuation parameter $\lambda_{\rm{att}}$ derived from exponential fit to the data distribution. Experimental values are compared to Monte-Carlo simulation using reconstructed and true track parameters for calculation of the impact parameter $d_{\rm{IP}}$.[]{data-label="fig-d4r-lightatt"}](ddddr_attpar_truereco.eps "fig:"){width="220pt"} The validity of this hypothesis is demonstrated in Fig. \[fig-d4r-lightatt\]. A sample of bright downgoing tracks with $Q_{\rm{tot}}>1000~\textrm{p.e.}$ was selected to obtain an unbiased data set fully covered by the online event filters. For each DOM within a given vertical depth range, the quantity $$\label{d4r_dommeas} \tilde{n}_{\rm{phot,ideal}}=\epsilon_{\rm{DOM}}^{-1}\cdot q_{\rm{DOM}}\cdot d_{\rm{IP}}$$ is calculated, corresponding to the photon yield adjusted for the distance from the track and relative quantum efficiency $\epsilon_{\rm{DOM}}$ of the PMT, which is 1 for standard DOMs and about 1.35 for high-efficiency DeepCore DOMs. The curves are averaged over the entire event sample and include DOMs that did not register a signal.
The solid lines show the result of a fit to the function $$\label{d4r_fdip} f(d_{\rm{IP}}) = c\cdot \exp(-d_{\rm{IP}}/\lambda_{\rm{att}})$$ with the effective attenuation length $\lambda_{\rm{att}}$ and the data sample-dependent normalization constant $c$ as free fit parameters. Exponential attenuation as a function of the impact parameter is a valid assumption over a wide range, breaking down only for very close distances and in the layer with high dust concentration at $z_{\rm{vert}}\approx -100 \textrm{ m}$, where the vertical gradient of the optical ice properties is exceptionally steep. The experimental result is well reproduced by the simulation, as illustrated in the lower plot. The very small difference between the curves using true and reconstructed track parameters means that track reconstruction inaccuracies can be neglected. Construction of Energy Observable --------------------------------- Once the effective attenuation length has been determined, it can be used to construct a simple differential energy loss parameter. For each DOM within a given distance from the reconstructed track, an approximation for the photon yield corrected for PMT efficiency and ice attenuation can be calculated.
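The extraction of $\lambda_{\rm{att}}$ from the exponential fit described above can be sketched with synthetic values; the numbers here ($\lambda_{\rm{att}} = 25$ m, yields on a regular grid of impact parameters) are illustrative and not detector data:

```python
import numpy as np

# Illustrative check of the exponential-attenuation fit: generate
# photon yields for an assumed lambda_att = 25 m and recover it by
# a straight-line fit of log(yield) versus impact parameter.
lam_true, c_true = 25.0, 1.0e4
d_ip = np.linspace(20.0, 120.0, 50)          # impact parameters [m]
n_phot = c_true * np.exp(-d_ip / lam_true)   # attenuated photon yield

slope, intercept = np.polyfit(d_ip, np.log(n_phot), 1)
lam_fit = -1.0 / slope                       # recovers 25 m
```

In practice the fit is performed per vertical-depth range, since $\lambda_{\rm{att}}$ varies strongly with the local ice properties.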
The actual differential energy loss at the position of the DOM projected onto the track is related to the experimental observable by: $$\label{d4r_dedx} \begin{split} \left(\frac{dE_{\mu}}{dx}\right)_{\rm{reco}}= \epsilon_{\rm{DOM}}^{-1}\cdot q_{\rm{DOM}}\cdot \\ f_{\rm{scale}} \cdot \begin{cases} d_{0} ,& d_{\rm{IP}} < d_{0} \\ d_{\rm{IP}}\cdot e^{(d_{\rm{IP}}-d_{0})/\lambda_{\rm{att}}(z)} ,& d_{\rm{IP}} > d_{0} \end{cases} \end{split}$$ where $f_{\rm{scale}} \simeq 0.020~\textrm{GeV}\cdot(\textrm{p.e.}\cdot \textrm{m}^{2})^{-1}$ is a simple scaling factor that can be derived from a Monte Carlo simulation and $d_{0}(z) = 19~\textrm{m}+0.01\cdot z$ expresses the mild depth dependence of the point of transition from flat to exponential behavior. The vertical coordinate $z$ is measured from the center of the detector at 1949 meters below the surface. ![Top: Construction of differential energy deposition estimator. DOMs are represented by circles. The maximum lateral distance from the track up to which individual data points are included in the reconstruction can be varied depending on specific requirements. Bottom: Comparison between true and reconstructed energy loss in simulated event with parameters: $E_{\rm{shower,reco}}$ = 1165 TeV (True Value: 852 TeV), $\cos\textrm{ }\theta_{\rm{zen,reco}}$ = 0.556 (True Value: 0.551) $E_{\rm{\mu,reco}}$ = 2493 TeV (True Value: 1854 TeV). The shower energy corresponds to the highest single stochastic loss at approximately 3000 m slant depth. Reconstructions using two different likelihood methods [@Aartsen:2013vja] are shown for comparison.[]{data-label="fig-d4r-paracons"}](ddddr_binsketch.pdf "fig:"){width="200pt"} ![Top: Construction of differential energy deposition estimator. DOMs are represented by circles. The maximum lateral distance from the track up to which individual data points are included in the reconstruction can be varied depending on specific requirements.
Bottom: Comparison between true and reconstructed energy loss in simulated event with parameters: $E_{\rm{shower,reco}}$ = 1165 TeV (True Value: 852 TeV), $\cos\textrm{ }\theta_{\rm{zen,reco}}$ = 0.556 (True Value: 0.551), $E_{\rm{\mu,reco}}$ = 2493 TeV (True Value: 1854 TeV). The shower energy corresponds to the highest single stochastic loss at approximately 3000 m slant depth. Reconstructions using two different likelihood methods [@Aartsen:2013vja] are shown for comparison.[]{data-label="fig-d4r-paracons"}](bigmcevent_loss.eps "fig:"){width="220pt"} To account for fluctuations affecting individual measurements and DOMs that did not register a signal, the track is subdivided into longitudinal bins with a width of 50 meters, over which the measured parameter is averaged. The lateral limit for the inclusion of DOMs can be adjusted to find a compromise between sufficient statistics and adequate longitudinal resolution. The principle is illustrated in Fig. \[fig-d4r-paracons\]. Note that the exact value of dE/dx is only calculated for demonstration purposes and should be considered approximate. In practical applications, the measured observable, like any energy-dependent quantity, is related directly to physical parameters such as shower energy and muon multiplicity; the exact conversion depends on the spectrum of the data distribution. The energy of the strongest stochastic loss in the event could be derived immediately from the highest bin value in the profile. However, this estimate is often imprecise. Better results can be achieved by a dedicated reconstruction for the individual loss energy. The origin of the shower is assumed to coincide with the position of the DOM with the highest $dE/dx$ value projected on the track. Its energy is then calculated in a similar way as for the track, except that the photon emission is assumed to be point-like and isotropic.
Instead of falling off as the inverse of the distance, the light intensity falls off as its inverse square, and the energy estimate becomes: $$\label{d4r_lossen} \begin{split} E_{\rm{loss,reco}}= \epsilon_{\rm{DOM}}^{-1}\cdot q_{\rm{DOM}}\cdot \\ f_{\rm{scale}} \cdot \begin{cases} r_{0}^{2} ,& r_{\rm{loss}} < r_{0} \\ r_{\rm{loss}}^{2}\cdot e^{(r_{\rm{loss}}-r_{0})/\lambda_{\rm{att}}(z)} ,& r_{\rm{loss}} > r_{0} \end{cases} \end{split}$$ The shower energy can then be determined by calculating the mean of the values for the individual DOMs. The energy resolution for events selected by the method described in Section \[sec:hemu\] is shown in Fig. \[fig-cascest\_resol\]. ![Ratio between reconstructed and true shower energy for simulated events weighted to an $E^{-2.7}$ power-law primary cosmic ray flux spectrum. Around the peak the distribution can be closely approximated by a Gaussian distribution with a width varying between approximately 0.16 and 0.14.[]{data-label="fig-cascest_resol"}](cascest_resol.eps){width="220pt"} Prompt Flux Calculation {#sec:simple-prompt} ======================= Prompt Muon Flux Approximation ------------------------------ The characteristics of the atmospheric muon energy spectrum at energies beyond 100 TeV are influenced by prompt hadron decays. In neutrino analyses, these can be taken into account by applying a simple weighting function to simulated data. Muons, on the other hand, are always part of a bundle, and in principle it would be necessary to generate a full air shower simulation including prompt lepton production. The hadronic interaction generators integrated into the CORSIKA simulation package as of version 7.4 are not adequate for a prompt muon simulation mass production. QGSJET and DPMJET [@Berghaus:2007hp] are slow, and charm production in QGSJET is very small compared to theoretical predictions.
The core CORSIKA propagator does not handle re-interaction effects for heavy hadrons, which become important at energies approaching 10 PeV. A version of SIBYLL that includes charm is at the development stage [@Engel:2015dxa]. The updated code also takes into account production and decay of unflavored light mesons, which form an important part of the prompt muon flux [@Illana:2010gh]. First published simulated prompt atmospheric muon spectra indicate consistency with the ERS model for charmed mesons, and an unflavored component of approximately equal magnitude [@Fedynitch:2015zma]. In this paper, the prompt flux is expressed relative to the “conventional” flux from light meson decays. In this way it can be modeled using simulated events from the standard IceCube CORSIKA mass production, including detector simulation and information about the primary cosmic ray composition. ![Muon flux predictions from full shower CORSIKA simulation [@Fedynitch:2012fs] and parametrization of theoretical calculation [@Enberg:2008te].[]{data-label="fig-pseudoprompt"}](bochum_sarccompare.eps){width="220pt"} Construction of the simulated prompt flux is based on the following assumptions: - The spectral index of the prompt component $\gamma_{\rm{prompt}}$ is related to the conventional index $\gamma_{\rm{conv}}$ as $\gamma_{\rm{prompt}}=\gamma_{\rm{conv}}+1$. Higher-order effects, such as the varying cross section of charm production and re-interaction in the atmosphere, can be accounted for by a corrective term $f_{\rm{corr}}(E_{\mu})$. - The prompt flux is isotropic, while the conventional flux increases proportionally to $\sec\theta_{\rm{zen}}$ in the analysis region above $\cos\textrm{ }\theta_{\rm{zen}}=0.1$. Variations due to the curvature of the Earth [@Illana:2010gh] are neglected. - The influence of changes in the nucleon spectrum on the prompt flux is the same as on the conventional flux.
Based on estimates using prompt muons simulated with DPMJET, this assumption is valid within 10% for spectra with an exponential cutoff at the knee. - The contribution from light vector meson di-muon decays is small compared to that from heavy hadrons and/or has the same energy spectrum. For prompt muon fluxes simulated with the newest development version of SIBYLL, charm and unflavored spectra are almost identical in shape between 10 TeV and 1 PeV [@Fedynitch:2015zma]. The approximated prompt flux is then: $$\label{eq-pseudoprompt} \begin{split} \Phi_{\rm{\mu,prompt}} (E_{\mu},\theta_{\rm{zen}}) \simeq \Phi_{\rm{\mu,conv}}(E_{\mu},\theta_{\rm{zen}}) \\ \cdot \frac{E_{\mu}\cdot \cos\textrm{ }\theta_{\rm{zen}}}{E_{\rm{1/2}}}\cdot f_{\rm{corr}}(E_{\mu}) \end{split}$$ The relative flux normalization is expressed in terms of $E_{\rm{1/2}}$, the crossover energy for prompt and conventional fluxes in vertical air showers. This parameter provides a simple and intuitively clear way to express the magnitude of the prompt flux, and can easily be estimated. ![Effect of higher-order prompt flux correction factor on all-sky muon flux derived from simulation using CORSIKA. The separation into cross section and re-interaction correction should be considered approximate.[]{data-label="fig-pseudoprompt_corr"}](simpleprompt_hordercorr.eps){width="220pt"} To calculate the crossover energy $E_{\rm{1/2}}$ for a specific prediction, it is sufficient to compare conventional muon simulations with a prompt flux parametrization, as illustrated in Fig. \[fig-pseudoprompt\]. The crossover energy can then be determined in a straightforward way by a fit to their ratio. Note that here the primary nucleon spectrum corresponds to the naïve TIG model [@Gondolo:1995fq] used in the theoretical calculation. Since the full air shower simulation only needs to provide an estimate for the conventional flux, this procedure can be repeated for any interaction model. 
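Under the assumptions listed above, re-weighting a conventional-flux simulated muon to the approximated prompt flux reduces to a single multiplicative factor. A minimal sketch, with the higher-order correction $f_{\rm{corr}}$ left as a caller-supplied parameter (defaulting to 1):

```python
def prompt_weight(e_mu, cos_zen, e_half, f_corr=1.0):
    """Multiplicative weight turning a conventional-flux simulated
    muon into a prompt-flux one, following the flux approximation
    above. e_half is the vertical crossover energy in the same
    units as e_mu; f_corr is the higher-order correction (1 = none)."""
    return e_mu * cos_zen / e_half * f_corr
```

By construction the weight equals one for a vertical muon at the crossover energy and decreases toward the horizon, where the conventional flux dominates.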
In this study, as in most IceCube analyses, the prompt prediction is based on the calculation by Enberg, Reno and Sarcevic [@Enberg:2008te]. The corresponding values are listed in Table \[e12\_erstable\].

  Hadronic Model   ERS (max)         ERS (default)     ERS (min)
  ---------------- ----------------- ----------------- -----------------
  SIBYLL           $5.71 \pm 0.02$   $5.82 \pm 0.03$   $5.99 \pm 0.03$
  QGSJET-II        $5.62 \pm 0.02$   $5.72 \pm 0.03$   $5.90 \pm 0.03$
  QGSJET-01c       $5.65 \pm 0.02$   $5.75 \pm 0.03$   $5.93 \pm 0.03$

  : Vertical crossover energy $\log_{\rm{10}}E_{\rm{1/2}}/\textrm{GeV}$ for ERS flux and CORSIKA non-prompt muon simulation.[]{data-label="e12_erstable"}

Detailed features of a theoretical model are taken into account by a higher-order correction. In particular, these are the increase of the prompt production cross section as a function of primary energy and the appearance of re-interaction effects at energies of several PeV. Since the latter is negligible in the range covered by the study in this paper, its angular dependence was omitted. The parametrized form of the correction factor is: $$\label{pseudoprompt-hecorr} \begin{split} f_{\rm{corr}}(E_{\mu}) = f_{\rm{corr}}(c.s.)\cdot f_{\rm{corr}}(int.) =\\ \left[(3.74-0.461 \cdot \log_{\rm{10}}(E_{\mu}/\rm{GeV}))\cdot(1+e^{2.13 \cdot \log_{\rm{10}}(E_{\mu}/4.9~\rm{PeV})})\right]^{-1} \end{split}$$ After application of the correction, simulation-based flux prediction and theoretical model agree well, as illustrated in Fig. \[fig-pseudoprompt\_corr\]. Translation to Neutrino Flux ---------------------------- Prompt muon and neutrino fluxes are not strictly identical. In particular, muons can originate in electromagnetic di-muon decays of vector mesons.
The muon-derived measurement is a combination of unflavored and heavy quark-induced fluxes: $$\label{frac12_total} \Phi_{\rm{prompt,\mu}}=\Phi_{\rm{\mu,heavy}}+\Phi_{\rm{unflav}}$$ Whereas previous estimates based on theoretical calculations indicated an unflavored contribution of 0.3-0.4 times the ERS flux [@Illana:2010gh], recent numerical simulations result in a higher value, almost approaching the flux from heavy hadron decays [@Fedynitch:2015zma]. The contribution from vector meson decays is partially compensated by a relative suppression of the muon flux with respect to neutrinos of 15-20% originating in the physics of $c\to s$ decay [@Lipari:2013taa], here represented by the conversion factor $\zeta_{\nu,\mu}$. The resulting neutrino flux is therefore: $$\label{frac12_nu} \Phi_{\rm{prompt,\nu}}=\zeta_{\nu,\mu}\cdot(\Phi_{\rm{prompt},\mu}-\Phi_{\rm{unflav}})$$ An exact translation requires precise determination of spectrum and magnitude of the unflavored contribution and evaluation of the weak matrix element responsible for $\zeta_{\nu,\mu}$. At the moment, the calculation of a reliable estimate for the prompt atmospheric neutrino flux is precluded by the substantial uncertainties on the experimental measurement. Influence of Bundle in High-Energy Muon Events {#sec:he-bundle} ============================================== ![Top: Reconstructed muon surface energy and truncated mean [@Abbasi:2012wht] for experimental data. The sample corresponds to tracks with reconstructed angle within 37 degrees from zenith ($\cos\textrm{ }\theta_{\rm{zen}}>0.8$) in the selection described in \[sec:hemu\], before exclusion of events with shower energies below 5 TeV. Red and blue boxes illustrate selection of data with approximately constant energy measurement. Middle: Number of IceTop tanks registering a signal in coincidence with muon track for fixed reconstructed muon surface energy (blue box). Bottom: Same for fixed truncated mean (red box).
[]{data-label="fig-bundtest-ithits"}](pheit_bandillu.eps "fig:"){width="180pt"} ![Top: Reconstructed muon surface energy and truncated mean [@Abbasi:2012wht] for experimental data. The sample corresponds to tracks with reconstructed angle within 37 degrees from zenith ($\cos\textrm{ }\theta_{\rm{zen}}>0.8$) in selection described in \[sec:hemu\], before exclusion of events with shower energies below 5 TeV. Red and blue boxes illustrate selection of data with approximately constant energy measurement. Middle: Number of IceTop tanks registering a signal in coincidence with muon track for fixed reconstructed muon surface energy (blue box). Bottom: Same for fixed truncated mean (red box). []{data-label="fig-bundtest-ithits"}](pheit_etrunc.eps "fig:"){width="180pt"} ![Top: Reconstructed muon surface energy and truncated mean [@Abbasi:2012wht] for experimental data. The sample corresponds to tracks with reconstructed angle within 37 degrees from zenith ($\cos\textrm{ }\theta_{\rm{zen}}>0.8$) in selection described in \[sec:hemu\], before exclusion of events with shower energies below 5 TeV. Red and blue boxes illustrate selection of data with approximately constant energy measurement. Middle: Number of IceTop tanks registering a signal in coincidence with muon track for fixed reconstructed muon surface energy (blue box). Bottom: Same for fixed truncated mean (red box). []{data-label="fig-bundtest-ithits"}](pheit_emu.eps "fig:"){width="180pt"} ![Parameter distributions separated by primary cosmic ray type for simulated high-energy muon events with reconstructed surface energies between 30 and 50 TeV. True primary energy (top) and muon bundle multiplicity at detector depth (bottom).[]{data-label="fig-bundtest-separ"}](bundep_eprim.eps "fig:"){width="220pt"} ![Parameter distributions separated by primary cosmic ray type for simulated high-energy muon events with reconstructed surface energies between 30 and 50 TeV. 
True primary energy (top) and muon bundle multiplicity at detector depth (bottom).[]{data-label="fig-bundtest-separ"}](bundep_nmmc.eps "fig:"){width="220pt"} ![Top: Truncated Energy observable in CORSIKA simulation weighted to GST-Global Fit flux and experimental data. Event selection criteria are the same as in Fig. \[fig-bundtest-separ\]. Bottom: Mean Truncated Energy observable as a function of reconstructed leading muon surface energy for simulated and experimental data.[]{data-label="fig-bundtest-compo"}](bundep_etrunc.eps "fig:"){width="220pt"} ![Top: Truncated Energy observable in CORSIKA simulation weighted to GST-Global Fit flux and experimental data. Event selection criteria are the same as in Fig. \[fig-bundtest-separ\]. Bottom: Mean Truncated Energy observable as a function of reconstructed leading muon surface energy for simulated and experimental data.[]{data-label="fig-bundtest-compo"}](helfrac_3comp.eps "fig:"){width="220pt"} High-energy muon events rarely consist of single particles. Usually there is an accompanying bundle of low-energy muons, whose multiplicity depends on the primary type and energy. It is possible to demonstrate that the influence of secondary particles on the leading muon energy reconstruction is negligible, and that information about the cosmic ray primary can be extracted using an additional observable. The accuracy of typical muon energy measurements can be increased by excluding exceptional catastrophic losses using the truncated mean of the energy deposition [@Abbasi:2012wht]. Since the high-energy muon energy estimate used in this paper relies only on the single strongest shower, the information used in the two reconstruction methods is fully independent. The approximate orthogonality of the two observables can be demonstrated using only experimental data by including information from the surface array IceTop.
Since the leading muon rarely takes away more than 10% of the primary cosmic ray energy, its presence has almost no influence on the surface size of the air shower. The signal registered by IceTop should therefore only be correlated with the properties of the cosmic ray primary. In Fig. \[fig-bundtest-ithits\], truncated mean and reconstructed muon surface energy are shown for the high-energy muon event sample as described in \[sec:hemu\]. The lower two panels show the number of IceTop tanks registering a signal in coincidence with the air shower. The effect of varying the muon surface energy for a constant truncated mean is negligible, while in the inverse case a strong increase can be seen at the higher end. The result demonstrates that the total energy of the air shower, and consequently the size of the muon bundle, is not correlated with the measurement of the muon energy. On a qualitative level, it can also be seen that the truncated mean is related to the properties of the parent cosmic ray nucleus. For the quantitative interpretation of the truncated mean measurement, it is necessary to rely on simulated data, as illustrated in Fig. \[fig-bundtest-separ\]. The true primary energy distributions for proton and helium are clearly separated. For the same nucleon energy, helium nuclei are four times more energetic than protons. The consequence is a substantially larger bundle multiplicity in the detector. To be distinguishable in the truncated mean observable, the energy deposition from the muon bundle needs to be comparable to that from the leading muon. The relation between muon multiplicity and truncated mean is therefore less clear than in the muon multiplicity measurement as described in Section \[sec:bundles\]. A comparison between simulation and experimental data is shown in Fig. \[fig-bundtest-compo\]. The simulated curves are based on the simplified assumption of a straight power law primary spectrum.
While a detailed analysis goes beyond the scope of this paper, the quantitative behavior of the experimental data conforms to the expectation that the average mass of the parent cosmic ray flux falls in between proton and helium. [00]{} A. Karle \[IceCube Collaboration\], arXiv:1401.4496 \[astro-ph.HE\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. Lett.  [**111**]{} (2013) 021103 \[arXiv:1304.5356 \[astro-ph.HE\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Science [**342**]{} (2013) 1242856 \[arXiv:1311.5238 \[astro-ph.HE\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. Lett.  [**113**]{} (2014) 101101 \[arXiv:1405.5303 \[astro-ph.HE\]\]. T. K. Gaisser, K. Jero, A. Karle and J. van Santen, Phys. Rev. D [**90**]{} (2014) 2, 023009 \[arXiv:1405.0525 \[astro-ph.HE\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Astropart. Phys.  [**42**]{} (2013) 15 \[arXiv:1207.3455 \[astro-ph.HE\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Astrophys. J.  [**718**]{} (2010) L194 \[arXiv:1005.2960 \[astro-ph.HE\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Astrophys. J.  [**740**]{} (2011) 16 \[arXiv:1105.2326 \[astro-ph.HE\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Astrophys. J.  [**746**]{} (2012) 33 \[arXiv:1109.1017 \[hep-ex\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. D [**87**]{} (2013) 012005 \[arXiv:1208.2979 \[astro-ph.HE\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. D [**89**]{} (2014) 102004 \[arXiv:1305.6811 \[astro-ph.HE\]\]. https://web.ikp.kit.edu/corsika/ T. K. Gaisser, T. Stanev and S. Tilav, Front. Phys. China [**8**]{} (2013) 748 \[arXiv:1303.3565 \[astro-ph.HE\]\]. T. K. Gaisser, *Cosmic Ray and Particle Physics* (Cambridge University Press, 1990). M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. Lett.  [**111**]{} (2013) 8, 081801 \[arXiv:1305.3909 \[hep-ex\]\]. O. 
Adriani [*et al.*]{} \[PAMELA Collaboration\], Science [**332**]{} (2011) 69 \[arXiv:1103.4055 \[astro-ph.HE\]\]. M. Aguilar \[AMS Collaboration\], Phys. Rev. Lett.  [**114**]{} (2015) 17, 171103. M. G. Aartsen [*et al.*]{} \[IceCube and PINGU Collaborations\], arXiv:1306.5846 \[astro-ph.IM\]. H. S. Ahn, P. Allison, M. G. Bagliesi, J. J. Beatty, G. Bigongiari, J. T. Childers, N. B. Conklin and S. Coutu [*et al.*]{}, Astrophys. J.  [**714**]{} (2010) L89 \[arXiv:1004.1123 \[astro-ph.HE\]\]. A. A. Kochanov, T. S. Sinegovskaya and S. I. Sinegovsky, Astropart. Phys.  [**30**]{} (2008) 219 \[arXiv:0803.2943 \[astro-ph\]\]. T. Antoni [*et al.*]{} \[KASCADE Collaboration\], Astropart. Phys.  [**24**]{} (2005) 1 \[astro-ph/0505413\]. J. Blumer, R. Engel and J. R. Horandel, Prog. Part. Nucl. Phys.  [**63**]{} (2009) 293 \[arXiv:0904.0725 \[astro-ph.HE\]\]. D. R. Bergman and J. W. Belz, J. Phys. G [**34**]{} (2007) R359 \[arXiv:0704.3721 \[astro-ph\]\]. W. D. Apel [*et al.*]{} \[Grande Collaboration\], arXiv:1206.3834 \[astro-ph.HE\]. S. B. Shaulov, S. P. Beshchapov, K. V. Cherdyntseva, A. P. Chubenko, E. V. Danilova, Z. K. Zhanseitova, R. A. Nam and N. M. Nesterova [*et al.*]{}, Nucl. Phys. Proc. Suppl.  [**196**]{} (2009) 187. V. V. Prosin, S. F. Berezhnev, N. M. Budnev, A. Chiavassa, O. A. Chvalaev, O. A. Gress, A. N. Dyachok and S. N. Epimakhov [*et al.*]{}, Nucl. Instrum. Meth. A [**756**]{} (2014) 94. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. D [**88**]{} (2013) 4, 042004 \[arXiv:1307.3795 \[astro-ph.HE\]\]. W. D. Apel, J. C. Arteaga-Velàzquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I. M. Brancus and E. Cantoni [*et al.*]{}, Phys. Rev. D [**87**]{} (2013) 081101 \[arXiv:1304.7114 \[astro-ph.HE\]\]. W. D. Apel [*et al.*]{} \[KASCADE-Grande Collaboration\], Phys. Rev. Lett.  [**107**]{} (2011) 171104 \[arXiv:1107.5885 \[astro-ph.HE\]\]. W. D. Apel, J. C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I. M. Brancus and E. 
Cantoni [*et al.*]{}, Astropart. Phys.  [**47**]{} (2013) 54 \[arXiv:1306.6283 \[astro-ph.HE\]\]. B. Peters Il Nuovo Cimento  [**22**]{} (1961) 4 S. V. Ter-Antonyan and L. S. Haroyan, hep-ex/0003006. J. R. Hoerandel, Astropart. Phys.  [**19**]{} (2003) 193 \[astro-ph/0210453\]. A. M. Hillas, J. Phys. G [**31**]{} (2005) R95. V. I. Zatsepin and N. V. Sokolskaya, Astron. Astrophys.  [**458**]{} (2006) 1 \[astro-ph/0601475\]. V. Berezinsky, A. Z. Gazizov and S. I. Grigorieva, Phys. Lett. B [**612**]{} (2005) 147 \[astro-ph/0502550\]. R. Aloisio, V. Berezinsky and A. Gazizov, Astropart. Phys.  [**34**]{} (2011) 620 \[arXiv:0907.5194 \[astro-ph.HE\]\]. S. N. Boziev, JETP Lett.  [**54**]{} (1991) 606 \[Pisma Zh. Eksp. Teor. Fiz.  [**54**]{} (1991) 603\]. M. Honda, T. Kajita, K. Kasahara, S. Midorikawa and T. Sanuki, Phys. Rev. D [**75**]{} (2007) 043006 \[astro-ph/0611418\]. T. K. Gaisser, arXiv:1303.1431 \[hep-ph\]. A. Fedynitch, J. Becker Tjus and P. Desiati, Phys. Rev. D [**86**]{} (2012) 114024 \[arXiv:1206.6710 \[astro-ph.HE\]\]. P. Lipari, arXiv:1308.2086 \[astro-ph.HE\]. V. A. Bednyakov, M. A. Demichev, G. I. Lykasov, T. Stavreva and M. Stockton, Phys. Lett. B [**728**]{} (2014) 602. S. J. Brodsky, P. Hoyer, C. Peterson and N. Sakai, Phys. Lett. B [**93**]{} (1980) 451. M. Britsch \[LHCb Collaboration\], Nucl. Phys. Proc. Suppl.  [**234**]{} (2013) 109. E. Mountricha \[ATLAS Collaboration\], Nucl. Phys. Proc. Suppl.  [**210-211**]{} (2011) 37. B. Abelev [*et al.*]{} \[ALICE Collaboration\], JHEP [**1201**]{} (2012) 128 \[arXiv:1111.1553 \[hep-ex\]\]. B. Abelev [*et al.*]{} \[ALICE Collaboration\], JHEP [**1207**]{} (2012) 191 \[arXiv:1205.4007 \[hep-ex\]\]. A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett.  [**97**]{} (2006) 252002 \[hep-ex/0609010\]. J. Adams [*et al.*]{} \[STAR Collaboration\], Phys. Rev. Lett.  [**94**]{} (2005) 062301 \[nucl-ex/0407006\]. M. Cacciari, S. Frixione, N. Houdeau, M. L. Mangano, P. Nason and G. 
Ridolfi, JHEP [**1210**]{} (2012) 137 \[arXiv:1205.6344 \[hep-ph\]\]. C. G. S. Costa, Astropart. Phys.  [**16**]{} (2001) 193 \[hep-ph/0010306\]. R. Enberg, M. H. Reno and I. Sarcevic, Phys. Rev. D [**78**]{} (2008) 043005 \[arXiv:0806.0418 \[hep-ph\]\]. J. I. Illana, P. Lipari, M. Masip and D. Meloni, Astropart. Phys.  [**34**]{} (2011) 663 \[arXiv:1010.5084 \[astro-ph.HE\]\]. J. I. Illana, M. Masip and D. Meloni, JCAP [**0909**]{} (2009) 008 \[arXiv:0907.1412 \[hep-ph\]\]. A. Fedynitch, R. Engel, T. K. Gaisser, F. Riehn and T. Stanev, arXiv:1503.00544 \[hep-ph\]. G. Gelmini, P. Gondolo and G. Varieschi, Phys. Rev. D [**67**]{} (2003) 017301 \[hep-ph/0209111\]. S. I. Sinegovsky, A. A. Kochanov, T. S. Sinegovskaya, A. Misaki and N. Takahashi, Int. J. Mod. Phys. A [**25**]{} (2010) 3733 \[arXiv:0906.3791 \[astro-ph.HE\]\]. M. Aglietta [*et al.*]{} \[LVD Collaboration\], Phys. Rev. D [**60**]{} (1999) 112001 \[hep-ex/9906021\]. A. G. Bogdanov, R. P. Kokoulin, Y. F. Novoseltsev, R. V. Novoseltseva, V. B. Petkov and A. A. Petrukhin, Astropart. Phys.  [**36**]{} (2012) 224 \[arXiv:0911.1692 \[astro-ph.HE\]\]. M. G. Aartsen [*et al.*]{} \[The IceCube Collaboration\], Phys. Rev. D [**89**]{} (2014) 062007 \[arXiv:1311.7048 \[astro-ph.HE\]\]. M. G. Aartsen [*et al.*]{} \[ IceCube Collaboration\], arXiv:1406.6757 \[astro-ph.HE\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Nucl. Instrum. Meth. A [**703**]{} (2013) 190 \[arXiv:1208.3430 \[physics.data-an\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], JINST [**9**]{} (2014) P03009 \[arXiv:1311.4767 \[physics.ins-det\]\]. E. J. Ahn, R. Engel, T. K. Gaisser, P. Lipari and T. Stanev, Phys. Rev. D [**80**]{} (2009) 094003 \[arXiv:0906.4113 \[hep-ph\]\]. S. Ostapchenko, Phys. Rev. D [**83**]{} (2011) 014018 \[arXiv:1010.1869 \[hep-ph\]\]. K. Werner, T. Hirano, I. Karpenko, T. Pierog, S. Porteboeuf, M. Bleicher and S. Haussler, Nucl. Phys. Proc. Suppl.  [**196**]{} (2009) 36. J. H. Koehne, K. Frantzen, M. 
Schmitz, T. Fuchs, W. Rhode, D. Chirkin and J. Becker Tjus, Comput. Phys. Commun.  [**184**]{} (2013) 2070. D. Chirkin and W. Rhode, hep-ph/0407075. D. Chirkin \[IceCube Collaboration\], Nucl. Instrum. Meth. A [**725**]{} (2013) 141. T. K. Gaisser, Astropart. Phys.  [**35**]{} (2012) 801. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Nucl. Instrum. Meth. A [**711**]{} (2013) 73 \[arXiv:1301.5361 \[astro-ph.IM\]\]. T. Pierog, I. Karpenko, J. M. Katzy, E. Yatsenko and K. Werner, arXiv:1306.0121 \[hep-ph\]. M. Aglietta [*et al.*]{} \[LVD Collaboration\], Phys. Rev. D [**58**]{} (1998) 092005 \[hep-ex/9806001\]. J. Babson [*et al.*]{} \[DUMAND Collaboration\], Phys. Rev. D [**42**]{} (1990) 3613. V. N. Bakatanov, Y. F. Novoseltsev, R. V. Novoseltseva, A. M. Semenov and A. E. Chudakov, Sov. J. Nucl. Phys.  [**55**]{} (1992) 1169 \[Yad. Fiz.  [**55**]{} (1992) 2107\]. M. Ambrosio [*et al.*]{} \[MACRO. Collaboration\], Phys. Rev. D [**52**]{} (1995) 3793. I. A. Belolaptikov [*et al.*]{} \[BAIKAL Collaboration\], Astropart. Phys.  [**7**]{} (1997) 263. E. Andres, P. Askebjer, S. W. Barwick, R. Bay, L. Bergstrom, A. Biron, J. Booth and A. Bouchta [*et al.*]{}, Astropart. Phys.  [**13**]{} (2000) 1 \[astro-ph/9906203, astro-ph/9906203\]. G. Aggouras [*et al.*]{} \[NESTOR Collaboration\], Astropart. Phys.  [**23**]{} (2005) 377. S. Aiello [*et al.*]{} \[NEMO Collaboration\], Astropart. Phys.  [**33**]{} (2010) 263 \[arXiv:0910.1269 \[astro-ph.IM\]\]. J. A. Aguilar [*et al.*]{} \[ ANTARES Collaboration\], Astropart. Phys.  [**34**]{} (2010) 179 \[arXiv:1007.1777 \[astro-ph.HE\]\]. P. Lipari, Astropart. Phys.  [**1**]{} (1993) 399 \[hep-ph/9307289\]. T. Adye, Proceedings of the PHYSTAT 2011 Workshop, CERN, Geneva, Switzerland, January 2011, CERN-2011-006, pp 313-318 \[arXiv:1105.1160 \[physics.data-an\]\]. A. Aab [*et al.*]{} \[Pierre Auger Collaboration\], arXiv:1307.5059 \[astro-ph.HE\]. T. 
Abu-Zayyad [*et al.*]{} \[Telescope Array and Pierre Auger Collaborations\], arXiv:1310.0647 \[astro-ph.HE\]. P. Berghaus, T. Montaruli and J. Ranft, JCAP [**0806**]{} (2008) 003 \[arXiv:0712.3089 \[hep-ex\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Astrophys. J.  [**809**]{} (2015) 1, 98 \[arXiv:1507.03991 \[astro-ph.HE\]\]. M. V. Garzelli, S. Moch and G. Sigl, JHEP [**1510**]{} (2015) 115 doi:10.1007/JHEP10(2015)115 \[arXiv:1507.01570 \[hep-ph\]\]. P. Desiati and T. K. Gaisser, Phys. Rev. Lett.  [**105**]{} (2010) 121102 \[arXiv:1008.2211 \[astro-ph.HE\]\]. A. Margiotta \[KM3NeT Collaboration\], JINST [**9**]{} (2014) C04020 \[arXiv:1408.1132 \[astro-ph.IM\]\]. R. Engel, A. Fedynitch, T. K. Gaisser, F. Riehn and T. Stanev, arXiv:1502.06353 \[hep-ph\]. P. Gondolo, G. Ingelman and M. Thunman, Astropart. Phys.  [**5**]{} (1996) 309 \[hep-ph/9505417\].
--- abstract: 'We represent the slow, glassy equilibrium dynamics of a line in a two-dimensional random potential landscape as driven by an array of asymptotically independent two-state systems, or [*loops*]{}, fluctuating on all length scales. The assumption of independence enables a fairly complete analytic description. We obtain good agreement with Monte Carlo simulations when the free energy barriers separating the two sides of a loop of size $L$ are drawn from a distribution whose width and mean scale as $L^{1/3}$, in agreement with recent results for scaling of such barriers.' address: | $^1$ Nordita, Blegdamsvej 17, DK-2100 Copenhagen, Denmark\ $^2$ Department of Applied Physics, Chalmers University of Technology and Göteborg University,\ S-41296 Göteborg, Sweden author: - 'Anders B. Eriksson$^1$, Jari M. Kinaret$^2$' - 'Lev V. Mikheev$^{1}$[@bylinelm]' title: Fluctuating loops and glassy dynamics of a pinned line in two dimensions --- Slow dynamics is perhaps the most significant characteristic of the glassy state of matter, affecting essentially all experimental measurements. An intuitively appealing picture, which explains this remarkable slowing down, is that the configuration space of a glass consists of many nearly degenerate free energy minima, separated by high potential barriers [@Parisi]. The dynamics is dominated by transitions between configurations whose free energy difference $\Delta E$ is less than the thermal energy $k_BT$, and which are separated by free energy barriers $E_B \gg k_BT$. In a genuine glass such degeneracies occur on all length scales $L$. The dependence of $\Delta E$ and $E_B$, or more precisely, of their probability distributions, on the linear extent $L$ has become the focus of great theoretical interest [@Parisi; @FisherHuseSG; @HuseHenley; @HuHeFi; @Ioffe; @Fisher2Huse; @FisherHuse; @MDK]. 
If transitions mainly occur between pairs of low-energy configurations which can be regarded as asymptotically independent for large $L$, the simple model of a gas of fluctuating two-level systems [@HalperinVarma] allows for a fairly complete description of the dynamics. In general, however, a much more complicated hierarchical interdependence of transitions on different length scales may take place [@Parisi]. The subject of this Letter is an elastic line (henceforth called an interface) in a two-dimensional random potential landscape [@HuseHenley; @FisherHuse; @MDK]. This system combines a fair degree of realism, serving e.g. as a model for a domain boundary [@Ioffe] or a magnetic flux line trapped between two copper-oxide planes in a dirty high-temperature superconductor [@Fisher2Huse; @MDK], with a simplicity that has allowed a substantial body of knowledge to accumulate over the past decade [@KardarRev; @Zhang95]. An almost degenerate two-level system in this case is simply a [*loop*]{}: a segment of the line between two points, which can flip between two low-energy paths (valleys in the potential landscape) separated by a barrier (a mountain in the landscape), as illustrated in the inset of Fig. \[fig:tau\_stat\]. It has been well established [@HuseHenley; @HuHeFi; @FisherHuse] that the transverse size of such a loop scales as $\Delta h\propto L^{\zeta}$ with $\zeta = 2/3$, while the free energy differences between the two valley configurations, $\Delta E$, are distributed with mean zero and variance $\langle\Delta E^2\rangle\propto L^{2\theta}$, where $\theta=1/3$. Recent work [@MDK] provides evidence that the barriers between such configurations are distributed with mean $\langle E_B(L)\rangle\propto L^{\theta}$ and variance $\langle E_B^2\rangle \propto L^{2\theta}$, with likely logarithmic corrections.
These results, when combined, provide all necessary ingredients for developing a dynamic description of the line under the assumption of asymptotic independence of the transitions within large loops. In this Letter we outline such a description, and show that it agrees well with dynamical Monte Carlo simulations. We first numerically confirm that nearly degenerate paths form loops of various sizes, and at the first stage we analyze the dynamics of loops of a fixed length. The dynamics of a single loop can be described as a sequence of flips where the interface position moves from one arm of the loop to the other. By studying loops of different lengths we determine how the flipping-rate distribution depends on the size of the loop. At the second stage we study the fluctuations of a low-energy interface. Using numerical simulations we determine the time-dependent fluctuations of the interface position, which we compare with an analytic model that assumes that interface fluctuations are due to flipping loops. Since the parameters describing loops are determined at the first stage, there are no adjustable parameters left at this point. We find that the prediction of the loop model is in good agreement with the simulations, supporting the conjecture that interface dynamics is due to fluctuating loops. #### The numerical model: {#the-numerical-model .unnumbered} We study a model that excludes overhangs so that the interface height $h(x)$ is at all times a single-valued function of the spatial coordinate $x$. For the numerics we use a lattice model where the interface is discretized to have a unit slope between the lattice points, $|h(x)-h(x+1)| = 1$, and use fixed boundary conditions $h(0)=h(L_0)=0$, where $L_0$ is the interface length. The Hamiltonian of the lattice model for a particular realization of the random medium is $$H_\mu[h] = \sum_{x=1}^{L_0} \mu(h(x),x) ,$$ where $h(x), x=1,\ldots,L_0$ represents the interface, and $\mu$ is the potential landscape.
The random potential is uncorrelated and uniformly distributed over the range $0 \leq \mu(h,x) \leq 1$ [@Gaussian]. This Hamiltonian belongs to the universality class of a directed polymer in a random medium (DPRM) [@Zhang95; @Yoshino95], and in this context the requirement of unit slope between lattice points effectively adds an implicit line tension [@Zhang95]. The dynamics is implemented in a spirit similar to that of previously introduced mappings onto spin chains [@GwaSpohn; @HansAndersLev]; the dynamics is described by the master equation based on the transition probability $P[h(x)\to h'(x)] = \exp\{\beta(H_{\mu}[h]-H_{\mu}[h'])/2\} dt$, where $\beta$ is the inverse temperature, $dt$ an infinitesimal time interval, and $h(x)$ and $h'(x)$ are two interface configurations that differ in only one position. Numerically the dynamics of the master equation is exactly modeled by an algorithm that uses time steps sampled from a Poisson distribution [@Binder79]. We have used $\beta=2$ in the simulations that are presented in this Letter. In general the computation time grows very rapidly with increased $\beta$. #### Single loop statistics: {#single-loop-statistics .unnumbered} In order to explore the ideas of loop-based dynamics we use the free energy landscape to define loops. The free energy of a point $(x,h)$ is defined as $F(x,h) = -\beta^{-1} \ln(P(x,h))$, where $P(x,h)$ is the probability that an interface with fixed ends ($(x,h)=$ $(0,0)$ and $(L_0,0)$) crosses this point. The probability $P(x,h)$ is easily calculated using transfer matrix methods [@Zhang95]. A lattice point is defined to be part of an [*island*]{} if it is not part of any interface that includes only points with $P(x,h)>0.1$. The level $0.1$ is chosen to obtain well defined islands, but the specific value does not greatly affect the final results of the analysis. 
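The probability $P(x,h)$ entering this island construction can be computed from restricted partition functions; a minimal sketch (toy system size, seed, and the crude thresholding are ours; local weights $e^{-\beta\mu}$, both ends fixed at $h=0$):

```python
import numpy as np

rng = np.random.default_rng(3)
L0, beta = 12, 2.0
off = L0                                   # row offset so that h = 0 -> row off
H = 2 * L0 + 1
mu = rng.random((H, L0 + 1))               # quenched potential mu(h, x) on [0, 1)
w = np.exp(-beta * mu)                     # local Boltzmann weights

# Forward sums Zf(h, x): all paths from (0, 0) to (x, h), incl. weight at x.
Zf = np.zeros((H, L0 + 1))
Zf[off, 0] = w[off, 0]
for x in range(1, L0 + 1):
    for hi in range(1, H - 1):
        Zf[hi, x] = (Zf[hi - 1, x - 1] + Zf[hi + 1, x - 1]) * w[hi, x]

# Backward sums Zb(h, x): paths from (x, h) to the end, excl. weight at x.
Zb = np.zeros((H, L0 + 1))
Zb[off, L0] = 1.0
for x in range(L0 - 1, -1, -1):
    for hi in range(1, H - 1):
        Zb[hi, x] = (Zb[hi - 1, x + 1] * w[hi - 1, x + 1]
                     + Zb[hi + 1, x + 1] * w[hi + 1, x + 1])

Z = Zf[off, L0]                            # full partition function
P = Zf * Zb / Z                            # prob. that the path visits (x, h)
low_prob = P < 0.1                         # candidate island sites (threshold 0.1)
```

Because every path passes through exactly one height in each column, $\sum_h P(x,h)=1$ for every $x$, which is a convenient consistency check. The paper's island criterion is stricter than the simple threshold used in the last line, which only marks low-probability sites.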
Interface segments encircling an island form a loop, and we measure the loop size in terms of its length $L=x_r-x_l+1$, where $(x_r,h_r)$ and $(x_l,h_l)$ are the right and left ends of the island, respectively. The center height of the island, $h_{is}$, is defined as the $h$-component of its center of mass, where every lattice point in the island is regarded as a mass point. We say that the loop surrounding the island changes its state (flips) when the interface height, averaged over the island length, crosses $h_{is}$. This definition is convenient, although it sometimes catches events that would not intuitively be considered as flips. In order to study the relationship between static and dynamic scaling, and to unambiguously determine the numerical values of the amplitudes to be later used in the fit, we collect statistics of the dynamics of individual loops. A loop is inscribed in a bounding box which sets the boundaries within which the interface is free to move (shown with dashed lines in Fig. \[fig:tau\_stat\]). The interface is constrained to pass through the left and right corners of the bounding box, $(x_{l}-2,h_{l})$, and $(x_{r}+2,h_{r})$. We collect statistics of the time between consecutive flips of the loop. In Fig. \[fig:tau\_stat\] we plot a typical example of the measured probability that a loop stays in the upper (lower) state for at least time $t$. The decay of this probability is exponential for sufficiently large $t$, which is consistent with the two-state model discussed below (Eq. (\[eq:twostate\])). The deviation for small $t$ is probably due to our simplified criterion for flipping the loop. Before making a least-squares fit to an exponential decay we exclude the part of the short-time data that strongly deviates from the expected form. We also exclude the largest times (0.1% of the data) since the statistics of these very rare events is poor.
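Dwell-time statistics of this kind are generated by simulating the master-equation dynamics introduced earlier. A rejection-free (kinetic Monte Carlo) sketch of one such simulation, with our own toy size $L_0=16$ and $\beta=2$ and corner-flip moves that preserve the unit-slope constraint:

```python
import math
import random

random.seed(7)
L0, beta = 16, 2.0

pot = {}                                   # quenched random potential mu(h, x)
def mu(h, x):
    if (h, x) not in pot:
        pot[(h, x)] = random.random()      # uniform on [0, 1), drawn once
    return pot[(h, x)]

# RSOS interface with fixed ends; |h(x+1) - h(x)| = 1 everywhere.
h = [0 if i % 2 == 0 else 1 for i in range(L0 + 1)]

def kmc_step(h, t):
    """One rejection-free move: rate exp(beta*(H_old - H_new)/2) per flip."""
    moves, rates = [], []
    for x in range(1, L0):
        if h[x - 1] == h[x + 1]:           # corner site: h(x) can flip by +-2
            hnew = 2 * h[x - 1] - h[x]
            dH = mu(hnew, x) - mu(h[x], x)
            rates.append(math.exp(-beta * dH / 2))
            moves.append((x, hnew))
    R = sum(rates)
    t += random.expovariate(R)             # exponential (Poisson) waiting time
    r, acc = random.random() * R, 0.0
    for (x, hnew), rate in zip(moves, rates):
        acc += rate
        if r <= acc:
            h[x] = hnew
            break
    return t

t = 0.0
for _ in range(200):
    t = kmc_step(h, t)
```

Because each move is chosen with probability proportional to its rate and time advances by an exponentially distributed increment, the algorithm samples the master equation without rejected moves.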
From this we obtain the characteristic decay times $\tau_{+-}$ and $\tau_{-+}$ for flipping the loop from its upper to lower state and vice versa. For an individual loop we collect data from 10,000 flips, and make the fit described above. The rate constant, $\Gamma$, characteristic for the loop, is calculated by $\Gamma=\tau_{+-}^{-1}+\tau_{-+}^{-1}$ (we assume that the free energy difference between the arms of the loop is unimportant). By collecting statistics from 1000 loops of the same size $L$ we find that $\Gamma$ is log-normally distributed. This is consistent with a simple activated behavior, $\Gamma = \overline{\Gamma}e^{-\beta\Delta}$, where $\Delta$ is a Gaussian-distributed energy barrier separating the two sides of the loop, and $\overline{\Gamma}$ is a constant setting the unit of time. In Fig. \[fig:ln\] we show fits to the log-normal distribution by plotting $Q^{-1}(P(\Gamma))$ against $\ln(\Gamma)$, where $Q$ is the complement of the cumulative normal distribution, and $P(\Gamma)$ is the measured cumulative probability of the rate constant $\Gamma$ (hence, a straight line would correspond to a log-normal distribution). The distribution is characterized by the average barrier height $\Delta(L)$ and the standard deviation $\sigma(L)$. The values for $\beta\sigma(L)$ are obtained as the standard deviations of $\ln(\Gamma)$, which also give the inverse slopes of the lines in Fig. \[fig:ln\]. Similarly, the crossings between the lines and the zero axis represent the averages of $\ln(\Gamma)$, which give values for $\ln(\overline{\Gamma}) - \beta\Delta(L)$. The average barrier height $\Delta(L)$ and the standard deviation $\sigma(L)$ scale with loop size as $L^{1/3}$ [@MDK]. This allows us to determine the time scale $\overline{\Gamma}$ and the multiplicative constants in $\Delta(L)$ and $\sigma(L)$. The mean and standard deviation of $\ln(\Gamma)$ are plotted as functions of $L^{1/3}$ in Fig. 
\[fig:exp\_1\_3\] confirming the $L^{1/3}$ scaling, and giving for $\beta=2$ the parameter values $\ln(\overline{\Gamma}) = 2.2$, $\beta\Delta(L) = 3.0 L^{1/3}$, and $\beta\sigma(L) = 0.28 L^{1/3}$. #### Full interface numerics: {#full-interface-numerics .unnumbered} The same numerical algorithm is used for studying the dynamics of the full interface but keeping only the ends of the interface fixed. We start the interface from a random initial state (chosen from the equilibrium ensemble), measure how the height $h_c(t)$ of the center point of the interface varies with time, and collect statistics of $\delta h^2(t) = [h_c(t)-h_c(0)]^2$. The equilibrium ensemble is used to normalize $\delta h^2(t)$ with respect to $\delta h^2(\infty)$. The data points in Fig. \[fig:dh2\_of\_t\] show the results of simulating interfaces of lengths 20, 40, 60, and 128 for $\beta=2$, where $\delta h^2(t)$ has been averaged over 20,000 realizations of the random medium. #### The analytic model: {#the-analytic-model .unnumbered} We model the loop dynamics using a two-state model, where the probability of a given loop being in the upper or lower state, respectively, is given by $P_+(t)$ and $P_-(t)$. The time development of these probabilities is governed by the coupled differential equations $$\begin{array}{lcl} \frac{dP_+}{dt} &=& -\Gamma_{+-}P_{+}(t) + \Gamma_{-+}P_{-}(t) , \vspace{1mm}\\ \frac{dP_-}{dt} &=& \Gamma_{+-}P_{+}(t) - \Gamma_{-+}P_{-}(t) . \end{array} \label{eq:twostate}$$ The fluctuation in the height of the center position of a loop of width $wL^{2/3}$ is given in the two-state model by $$\delta h_{\Gamma,L}^2(t) = \langle \left[h_c(t) - h_c(0)\right]^2\rangle = \frac{w^2}{2}L^{4/3}(1 - e^{-\Gamma t}) ,$$ where we assumed for simplicity $\Gamma_{+-} = \Gamma_{-+} = \Gamma/2$, i.e. that the two arms of the loop are exactly degenerate. We consider an interface of length $L_0$, and denote the average barrier height of the largest loops (i.e.
loops of size $L_0$) by $\Delta_0$, and its standard deviation by $\sigma_0$. We assume that the number of loops of a given size scales as $L^{-1}$, and that the fluctuations of different loops are independent and additive. We find that the total fluctuation of the interface, as implied by loop dynamics, is given by $$\label{eq:result} \frac{\delta h^2(t)}{\delta h^2(\infty)} = 1 - \int_{-\infty}^\infty \frac{dy}{\sqrt{2\pi}}e^{-\frac{1}{2}y^2} \int_0^1du4u^3 e^{-\overline{\Gamma}t\exp\left[u\beta(\sigma_0y-\Delta_0)\right]} .$$ The first integral corresponds to integrating over loops of fixed length but variable rate constants, and the second integral adds up the contributions of loops of different lengths. The infinite time fluctuations $\delta h^2(\infty)$ are given by the equilibrium result, and scale as $L_0^{4/3}$ [@FisherHuse]. The two-state model (Eq. (\[eq:result\])) implies that the natural time scale in the problem is given by $\overline{t}=(\overline{\Gamma})^{-1}\ln(2)\exp(\beta\Delta_0/2^{1/4})$ in the sense that at time $t=\overline{t}$ the fluctuations have reached approximately 50% of the equilibrium value; more precisely $0.5 \le \frac{\delta h^2(\overline{t})}{\delta h^2(\infty)} \le 0.58$ for all $\beta$, $\Delta_0$, and $\sigma_0$. Hence, at low temperatures interface dynamics are exponentially slow as is typical for glassy systems [@Hertz]. The solid lines in Fig. \[fig:dh2\_of\_t\] are the results of the two-state model using the parameters determined by studying individual loops. The time scale $\overline{\Gamma}$ determines the position of the curves, the average barrier height, $\Delta_0$, affects both the position and the slope of the curves, and the standard deviation of the barrier heights, $\sigma_0$, has only a minor effect on the results for small $\sigma_0/\Delta_0$ (taking the limit $\sigma_0/\Delta_0\rightarrow 0$ would not significantly change the results). 
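The double integral in Eq. (\[eq:result\]) is straightforward to evaluate numerically. A sketch using a midpoint rule and the single-loop parameters quoted earlier for $\beta=2$ (we take $L_0=128$; the grid sizes and the truncation of the Gaussian integral at $|y|=8$ are ours):

```python
import numpy as np

# Single-loop fit parameters for beta = 2, applied to the largest loops L_0 = 128.
ln_Gbar = 2.2
L0 = 128
bD0 = 3.0 * L0 ** (1.0 / 3.0)    # beta * Delta_0
bs0 = 0.28 * L0 ** (1.0 / 3.0)   # beta * sigma_0

def dh2_ratio(t, ny=400, nu=400):
    """Midpoint-rule evaluation of delta h^2(t) / delta h^2(inf), Eq. (result)."""
    y = np.linspace(-8.0, 8.0, ny + 1)
    y = 0.5 * (y[1:] + y[:-1])           # midpoints; Gaussian negligible beyond
    u = np.linspace(0.0, 1.0, nu + 1)
    u = 0.5 * (u[1:] + u[:-1])
    Y, U = np.meshgrid(y, u, indexing="ij")
    gauss = np.exp(-0.5 * Y ** 2) / np.sqrt(2.0 * np.pi)
    decay = np.exp(-np.exp(ln_Gbar) * t * np.exp(U * (bs0 * Y - bD0)))
    f = gauss * 4.0 * U ** 3 * decay
    return 1.0 - f.sum() * (16.0 / ny) * (1.0 / nu)
```

The limits come out as expected: the ratio vanishes at $t=0$, grows monotonically, and saturates at 1 once even the slowest (largest, highest-barrier) loops have relaxed.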
Considering that there are no adjustable parameters, the agreement with the results of the numerical simulations in Fig. \[fig:dh2\_of\_t\] is quite good. This suggests that the hypothesis that interface dynamics is due to fluctuating loops is indeed valid. The deviations in the short time behavior are, we believe, due in part to more complicated loops on small length scales, which cannot be described as independent simple loops. Another contribution to the short time deviations is that in the lattice model the arms of a loop can have a non-zero width (i.e. two adjacent interface positions are not separated by a barrier), which naturally enhances fluctuations on short time scales. The interpretation that the short-time deviations are due to discretization and finite size corrections is further supported by the fact that the deviations are smaller for larger system sizes. Another possible source of deviations is logarithmic corrections to the scaling forms of $\Delta(L)$ and $\sigma(L)$; however, our data on individual loops are insufficient to determine these corrections. In conclusion, we have studied the dynamics of a one-dimensional interface in a two-dimensional random medium. We have shown that the dynamics can be understood quantitatively in terms of loops formed by nearly degenerate interface paths. At low temperatures the dynamics is exponentially slow, which is typical for glassy systems [@Parisi; @FisherHuseSG; @Hertz]. Present address: Prudential Home Mortgage Company, 8000 Maryland Ave., Ste. 1400, Clayton, MO 63105. M. Mezard, G. Parisi, and M. A. Virasoro, [*Spin Glass Theory and Beyond*]{}, World Scientific, Singapore (1987). D. S. Fisher and D. A. Huse, Phys. Rev. B [**38**]{}, 386 (1988); [**38**]{}, 373 (1988). D. A. Huse and C. L. Henley, Phys. Rev. Lett. [**54**]{}, 2708 (1985). D. A. Huse, C. L. Henley, and D. S. Fisher, Phys. Rev. Lett. [**55**]{}, 2924 (1985). L. Ioffe and V. M. Vinokur, J. Phys. C [**20**]{}, 6149 (1987). D. S. Fisher, M. P. A.
Fisher, and D. A. Huse, Phys. Rev. B [**43**]{}, 130 (1991). D. S. Fisher and D. A. Huse, Phys. Rev. B [**43**]{}, 10728 (1991). L. V. Mikheev, B. Drossel, and M. Kardar, Phys. Rev. Lett. [**75**]{}, 1170 (1995). P. W. Anderson, B. I. Halperin, and C. M. Varma, Philos. Mag. [**25**]{}, 1 (1972). M. Kardar, in [*Les Houches 1994, Session LXII, fluctuating geometries in statistical mechanics and field theory*]{}, edited by F.  David, P. Ginsparg, and J. Zinn-Justin (http://xxx.lanl.gov/lh94/e-book/ or cond-mat/9411022). T. Halpin-Healy and Y. C. Zhang, Phys. Rep. [**254**]{}, 215 (1995). A Gaussian distribution with the same mean and standard deviation gives results that marginally differ from those of the uniform distribution. H. Yoshino, unpublished (cond-mat/9510024). L.-H. Gwa and H. Spohn, Phys. Rev. A [**46**]{}, 844 (1992). H. C. Fogedby, A. B. Eriksson, and L. V.  Mikheev, Phys. Rev. Lett. [**75**]{}, 1883 (1995). K. Binder, Sec. 1 in [*Monte Carlo Methods in Statistical Physics*]{}, edited by K. Binder (Springer-Verlag, Berlin 1979). K. H. Fisher and J. A. Hertz, [*Spin Glasses*]{} (Cambridge University Press, Cambridge, 1991). = = = =
--- abstract: 'We discuss the role of intrinsic charm (IC) in the nucleon for forward production of $c$-quark (or $\bar c$-antiquark) in proton-proton collisions at low and high energies. The calculations are performed in the collinear-factorization approach with on-shell partons, the $k_T$-factorization approach with off-shell partons, as well as in a hybrid approach using collinear charm distributions and unintegrated (transverse momentum dependent) gluon distributions. For the collinear-factorization approach we use matrix elements for both massless and massive charm quarks/antiquarks. The distributions in rapidity and transverse momentum of charm quark/antiquark are shown for a few different models of IC. Forward charm production is dominated by $gc$-fusion processes. The IC contribution dominates over the standard pQCD (extrinsic) $gg$-fusion mechanism of $c\bar c$-pair production at large rapidities or Feynman-$x_F$. We perform similar calculations within the leading-order and next-to-leading-order $k_T$-factorization approach. The $k_T$-factorization approach leads to much larger cross sections than the LO collinear approach. At high energies and large rapidities of the $c$-quark or $\bar c$-antiquark one tests gluon distributions at extremely small $x$. The IC contribution has important consequences for high-energy neutrino production in the IceCube experiment and can be, to some extent, tested at the LHC by the SHIP and FASER experiments by studies of $\nu_{\tau}$ neutrino production.' author: - 'Rafa[ł]{} Maciu[ł]{}a' - 'Antoni Szczurek[^1]' title: 'Intrinsic charm in the nucleon and charm production at large rapidities in collinear, hybrid and $k_T$-factorization approaches' --- Introduction ============ The text-book proton consists of $u u d$ valence quarks. This picture is, however, far too simplified. In fact, there is strong evidence for internal strangeness and somewhat weaker evidence for internal charm content of the nonperturbative proton.
Different pictures of the nonperturbative $c \bar c$ content were proposed in the past. A first example is the relatively old BHPS model [@BHPS1980], which assumes $u u d c \bar c$ 5-parton Fock configurations (see also Refs. [@BHK1982; @VB1996]). Another picture proposed in the literature is the meson cloud model (MCM) [@NNNT1996; @MT1997; @SMT1999; @CDNN2001; @HLM2014], where the $p \to {\bar D}^0 \Lambda_c$ or $D \Sigma_c$ fluctuations of the proton are considered. While in the first model $c(x) = {\bar c}(x)$, in the MCM $c(x) \ne {\bar c}(x)$. The models do not allow one to predict precisely the absolute probability for the $c$-quark or $\bar c$-antiquark content of the proton. Experimental data put only loose constraints on the charm content: $$\int_0^1 c(x) \; dx = \int_0^1 {\bar c}(x) \; dx < 0.01 \; . \label{charm_probability}$$ This is rather an upper limit, but the value depends somewhat on the model of the charm content of the proton. In general, for sea-like models the probability can be slightly larger than for the BHPS one. In the sea-like case the charm is concentrated at lower values of $x$. A very recent lattice study of charm quark electromagnetic form factors suggested an asymmetry of the $c$-quark and $\bar c$-antiquark distributions [@latticeQCD]. Recently there has been renewed interest in intrinsic charm (IC), related to experiments being performed at the LHC [@BBLS2015; @RKAG2016; @LLSB2016; @BBLLMST2018]. The intrinsic charm is often included in global parton analyses of world experimental data [@BKLSSV2015; @Ball:2014uwa; @Hou:2017khm]. Highly energetic neutrino experiments, such as IceCube, could put further constraints on the intrinsic charm [@ERS2008; @LB2016; @GGN2018]. Here, however, the IC contribution may compete with the concept of subleading fragmentation [@Maciula:2017wov]. Similarly, future LHC high- and low-energy forward experiments like FASER and SHIP could also be very helpful in this context (see e.g. Ref. [@Bai:2020ukz] and Ref.
[@Bai:2018xum], respectively). Also the LHCb experiment in its fixed-target mode could be sensitive to the contributions coming from intrinsic charm in the proton, especially in the case of open charm production [@Aaij:2018ogq], where some problems with a satisfactory theoretical description of the experimental data were reported (see also the discussion in Ref. [@Maciula:2020cfy]). In this paper we therefore concentrate on forward production of charm quarks/antiquarks. Some studies were already performed within the color glass condensate approach and compared to the dipole approach in the forward direction [@GNU2010; @CGGN2017]. Here we will instead use the collinear, hybrid and $k_T$-factorization approaches. The latter two have not been studied so far in the context of IC and forward production of charm. Models of intrinsic charm in a nucleon ====================================== In the five-quark Fock component $u u d c \bar c$ the heavy quark/antiquark carries a rather large momentum fraction of the mother proton. In the BHPS model, after some approximations, the probability to find $c$ or $\bar c$ (the same for both) can be expressed via a simple formula: $$\frac{dP}{dx} = c(x) = {\bar c}(x) = A x^2 \left( 6 x (1+x) \ln(x) + (1-x)(1+10x+x^2) \right) \; .$$ The normalization constant $A$ depends on the integrated probability of the $c \bar c$ component and equals 6 for a 1% probability. Please note that the quark mass does not appear explicitly in this simplified formula. In the meson cloud models $c$ resides in the baryon-like object and $\bar c$ in the meson-like object. Then the probabilistic distributions can be obtained as $$\begin{aligned} \frac{dP_c}{dx} &=& \int_x^1 \frac{dy}{y} f_B(y) f_{c/B}(x/y) \; , \\ \frac{dP_{\bar c}}{dx} &=& \int_x^1 \frac{dy}{y} f_M(y) f_{\bar c/M}(x/y) \; . 
\label{MCM}\end{aligned}$$ The $f_B$ and $f_M$ functions, the probabilities to find a baryon or a meson in the proton, can be calculated from the corresponding Lagrangians supplemented by somewhat arbitrary and poorly known vertex form factors; they can be found *e.g.* in Ref. [@HLM2014]. In general, such an approach leads to $c(x) \ne {\bar c}(x)$. In practice both models give rather similar distributions, as will be shown in the following, so using one of them as an example is representative and sufficient. These are models of the large-$x$ component of IC. In principle, the IC may also have a small-$x$ component, known as sea-like IC; however, only simple *ad hoc* parametrizations have been used in the literature. ![A dynamical process leading to sea-like IC. []{data-label="fig:sea-like"}](sea-like.eps){width="100.00000%"} There is another category of processes leading to sea-like IC (see Fig. \[fig:sea-like\], where an example of the corresponding dynamical processes is shown). Using intrinsic glue in the nucleon (see *e.g.* Ref. [@EI1998]) one can generate an intrinsic charm sea. The intrinsic gluon distribution fulfils by construction the relation: $$\int_0^1 \left( x u_v(x) + x d_v(x) + x g(x) \right) dx = 1 \; .$$ For massless charm the intrinsic charm can be calculated as the convolution with the initial (intrinsic) glue $$c(x) = {\bar c}(x) = \frac{\alpha_s(4 m_c^2)}{2 \pi} \int_x^1 dy \left( \frac{1}{y} \right) P_{q g}\left( \frac{x}{y} \right) g(y) \; ,$$ where $g$ is the intrinsic gluon distribution. With the model from Ref. [@EI1998] the $c$ and $\bar c$ distributions are integrable, concentrated at $x \sim$ 0.1-0.2, and the corresponding probability is 2.7%. It would be smaller for massive quarks/antiquarks. In Fig. \[fig:xcharm\_IC\] we show the $x$-distributions of the IC for the BHPS model and for the sea-like model described above. ![Charm quark/antiquark distribution for the two different models of IC. 
The solid line represents the BHPS model while the dashed line is for the sea-like glue obtained in the way described above. In this calculation the BHPS model with 1% probability was used for illustration. []{data-label="fig:xcharm_IC"}](xc_BHPS_vs_sea-like.eps){width="100.00000%"} In the GRV approach [@Gluck:1994uf] the charm contribution is calculated fully radiatively as the convolution of the gluon distribution with the appropriate mass-dependent splitting function: $$x c(x,Q^2) = \frac{\alpha_s(\mu'^2)}{2 \pi} \int_{a x}^1 dy \left( \frac{x}{y} \right) C_{g,2}^c \left(\frac{x}{y}, \ \frac{m_c^2}{Q^2} \right) g(y,\mu'^2) \; . \label{GRV_charm}$$ The explicit formulae for $C_{g,2}^c$ and $a$, including the mass of the quarks/antiquarks, can be found in Ref. [@Gluck:1994uf]. In the following calculations we will use more modern gluon distributions. In this paper we concentrate on the large-$x$ component and completely ignore the sea-like component(s). Charm can also be generated by the evolution equations via the $g \to c \bar c$ transition (splitting). Often it is included in the evolution as a massless parton with a vanishing initial condition at the starting scale $\mu^2 \sim m_c^2$. In dedicated fits, the intrinsic charm distribution is used as the initial condition for the DGLAP-evolved charm distribution (see *e.g.* Ref. [@PLT2007]). In the right panel of Fig. \[fig:xc\] we show the charm distribution in a proton without (dashed line) and with (solid line) the IC distribution taken as the initial condition of the evolution. Cross section for associated charm production ============================================= The collinear approach ---------------------- In the present study we discuss production of final states with one charm quark or charm antiquark. In the collinear approach [@Collins:1989gx] the final-state charm must be associated with at least one additional gluon or (light) quark. Typical leading-order mechanisms for charm production initiated by a charm quark in the initial state are shown in Fig. 
\[fig1\]. The diagrams correspond to the $gc \to gc$ (or $g \bar c \to g\bar c$) subprocesses that are expected to be dominant at high energies; however, the $q c \to q c$ and $\bar q c \to \bar q c$ (or $q \bar c \to q \bar c$ and $\bar q \bar c \to \bar q \bar c$) mechanisms with $q = u,d,s$ are also possible and will be taken into account in the following numerical calculations. ![ Typical leading-order (2 $\to$ 2) mechanisms of production of $c$ quarks or $\bar c$ antiquarks in the collinear parton model. []{data-label="fig1"}](2to2-IC-a.eps){width="100.00000%"} ![ Typical leading-order (2 $\to$ 2) mechanisms of production of $c$ quarks or $\bar c$ antiquarks in the collinear parton model. []{data-label="fig1"}](2to2-IC-b.eps){width="100.00000%"} ![ Typical leading-order (2 $\to$ 2) mechanisms of production of $c$ quarks or $\bar c$ antiquarks in the collinear parton model. []{data-label="fig1"}](2to2-IC-c.eps){width="100.00000%"} In the collinear approach the differential cross section for forward charm production within the $gc \to gc$ mechanism[^2] can be calculated as $$\begin{aligned} \frac{d \sigma}{d y_1 d y_2 d^2 p_t} = \frac{1}{16 \pi {\hat s}^2} \overline{| {\cal M}_{g c \to g c} |^2} x_{1} g(x_1, \mu^2) x_{2} c(x_2, \mu^2) \; ,\end{aligned}$$ where ${\cal M}_{g c \to g c}$ is the on-shell matrix element for the $gc \to gc$ subprocess and $g(x_1, \mu^2)$ and $c(x_2, \mu^2)$ are the collinear gluon and charm quark PDFs evaluated at the longitudinal momentum fractions $x_{1,2}$ and the factorization scale $\mu^{2}$. Including the charm quark mass, the on-shell matrix element takes the following form: $$\begin{aligned} \overline{|{\cal M}_{gc \to gc}|^2} &=& g_s^4 \left[ \left( - m_c^4 ( 3\hat{s}^2 + 14 \hat{s}\hat{u} + 3\hat{u}^2 ) + m_c^2 ( \hat{s}^3 + 7 \hat{s}^2\hat{u} + 7 \hat{s} \hat{u}^2 + \hat{u}^3) \right. \right. \nonumber \\ && \left. \left. 
+ 6m_c^8-\hat{s}\hat{u} ( \hat{s}^2+\hat{u}^2 ) \right) \left( -18m_{c}^2 (\hat{s}+\hat{u}) +18m_c^4+9\hat{s}^2+9\hat{u}^2-\hat{t}^2 \right) \right] \nonumber \\ && / \left( 18\hat{t}^2 ( \hat{u}-m_c^2)^2 ( \hat{s}-m_c^2) \right)^2 ,\end{aligned}$$ where $g_s^2 = 4 \pi \alpha_{s}(\mu)$. In the massless limit $m_c \to 0$ one recovers the known textbook formula: $$\overline{|{\cal M}_{gc \to gc}|^2} = g_s^4 \left( -\frac{4}{9} \left( \frac{{\hat u}^2 + {\hat s}^2}{{\hat u}{\hat s}} \right) + \left( \frac{{\hat u}^2+{\hat s}^2}{{\hat t}^2} \right) \right) \; .$$ The role of the charm quark mass in the matrix element will be discussed when presenting the numerical results. ![ Charm quark distributions in a proton as a function of the longitudinal momentum fraction $x$. Different models for the initial intrinsic charm quark distribution are shown (left panel), together with a comparison of the charm quark distributions obtained with and without the concept of intrinsic charm in the proton (right panel). []{data-label="fig:xc"}](fig1a-IC.eps){width="100.00000%"} ![ Charm quark distributions in a proton as a function of the longitudinal momentum fraction $x$. Different models for the initial intrinsic charm quark distribution are shown (left panel), together with a comparison of the charm quark distributions obtained with and without the concept of intrinsic charm in the proton (right panel). []{data-label="fig:xc"}](fig1b-IC.eps){width="100.00000%"} In the numerical calculations below the intrinsic charm PDFs are taken at the initial scale $m_{c} = 1.3$ GeV, so the perturbative charm contribution is intentionally not taken into account. We apply four different grids of the intrinsic charm distributions from the CT14nnloIC PDF set [@Hou:2017khm], corresponding to the BHPS 1% and BHPS 3.5% as well as the sea-like LS (low-strength) and sea-like HS (high-strength) models for the initial intrinsic charm distribution. The distributions are compared with each other in the left panel of Fig. \[fig:xc\]. 
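As a quick numerical cross-check of the BHPS input quoted above, one can verify that the normalization $A = 6$ indeed corresponds to a 1% integrated probability and that the distribution is hard, with a mean momentum fraction close to 0.29 (a minimal sketch; the function names are ours):

```python
import math

def bhps_charm(x, A=6.0):
    """BHPS intrinsic charm distribution c(x) = cbar(x); A = 6 for a 1% probability."""
    return A * x**2 * (6.0 * x * (1.0 + x) * math.log(x)
                       + (1.0 - x) * (1.0 + 10.0 * x + x**2))

def integrate(f, a=0.0, b=1.0, n=100_000):
    # simple midpoint rule; the integrand vanishes at both endpoints
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

prob = integrate(bhps_charm)                                   # integrated probability
mean_x = integrate(lambda x: x * bhps_charm(x)) / prob          # average carried fraction
print(prob, mean_x)  # close to 0.01 and 2/7, respectively
```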
In the right panel we present in addition the difference between the CT14nnloIC charm PDFs obtained with and without the intrinsic-charm concept. On the other hand, the collinear gluon PDFs $g(x, \mu^2)$ are taken at the running factorization scale related to the average transverse momentum of the outgoing particles, i.e. $\mu = \sqrt{\frac{p_{t1}^2 + p_{t2}^2}{2} + m_c^2}$. The charm quark mass $m_{c} = 1.3$ GeV plays here the role of the minimal scale and ensures that we do not go beyond the fitted PDF grids, where unconstrained extrapolation procedures are applied. We keep the charm quark mass here even when the massless matrix element and/or kinematics are used. As will be shown later, the numerical results strongly depend on how the longitudinal momentum fractions $x_1$ and $x_2$ (the arguments of the parton distributions) are calculated. In the massive scheme of the calculations these quantities are defined as follows: $$\begin{aligned} x_1 &=& \frac{p_{t1}}{\sqrt{s}} \exp(+y_1) + \frac{m_{t2}}{\sqrt{s}} \exp(+y_2) \; , \nonumber \\ x_2 &=& \frac{p_{t1}}{\sqrt{s}} \exp(-y_1) + \frac{m_{t2}}{\sqrt{s}} \exp(-y_2) \; . \end{aligned}$$ In these equations $p_{t1}$ is the transverse momentum of the outgoing gluon (or light quark/antiquark) and $m_{t2}$ is the $c$-quark ($\bar c$-antiquark) transverse mass, defined as $m_{t} = \sqrt{p_t^2+m_c^2}$. As will be discussed further, it is crucial to include the mass of the final-state charm in the kinematics, while the initial-state charm can be considered massless. In the following numerical studies, all calculations in the massless limit with massless matrix elements will be done with the kinematics corrected in the above manner. The effect of the correction will also be shown explicitly. Considering forward production of charm at LHC energies one explores asymmetric kinematical regions where $x_1$ is very small (down to $10^{-5}$) and $x_2$ is rather large (about $10^{-1}$). 
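The two building blocks just introduced, the massless $gc \to gc$ matrix element and the mass-corrected momentum fractions, are simple enough to sketch directly (an illustrative snippet with our own function names, not the full PDF-grid and scale machinery):

```python
import math

def me2_gc_massless(shat, that, uhat, g_s=1.0):
    """Spin/color-averaged |M|^2 for massless g c -> g c (the textbook formula above)."""
    return g_s**4 * (-(4.0 / 9.0) * (uhat**2 + shat**2) / (uhat * shat)
                     + (uhat**2 + shat**2) / that**2)

def momentum_fractions(pt1, pt2, y1, y2, sqrt_s, m_c=1.3):
    """Massive-scheme x1, x2: the outgoing charm (particle 2) keeps its transverse mass."""
    mt2 = math.sqrt(pt2**2 + m_c**2)  # charm transverse mass in GeV
    x1 = pt1 / sqrt_s * math.exp(+y1) + mt2 / sqrt_s * math.exp(+y2)
    x2 = pt1 / sqrt_s * math.exp(-y1) + mt2 / sqrt_s * math.exp(-y2)
    return x1, x2

# example: central gluon, forward charm at sqrt(s) = 7 TeV gives the asymmetric
# configuration discussed in the text (one very small x, one much larger x)
x1, x2 = momentum_fractions(pt1=2.0, pt2=2.0, y1=0.0, y2=-5.0, sqrt_s=7000.0)
print(x1, x2)
```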
Thus, in this reaction the small-$x$ gluon PDF and the intrinsic large-$x$ charm content of the proton are probed simultaneously. As shown in Fig. \[fig:xcxg\], neither distribution is well constrained by the global experimental data. In the left panel we show the central fits of the intrinsic charm distribution from the CT14nnloIC and the NNPDF30nloIC PDF sets [@Ball:2016neh], together with their $1\sigma$ standard deviations. In the right panel we compare gluon PDF fits from different collaborations, including the MMHT2014nlo [@Harland-Lang:2014zoa], JR14NLO08FF [@Jimenez-Delgado:2014twa] and CT14lo/nnlo sets. Clearly, the current level of knowledge of both distributions is rather limited, and the large uncertainties prevent definite conclusions. In principle, a study of far-forward production of charm may improve the situation by exploring the unconstrained regions. ![ The intrinsic charm (left panel) and gluon (right panel) distributions in a proton as a function of longitudinal momentum fraction $x$. Here different sets of collinear PDFs are shown including uncertainties. []{data-label="fig:xcxg"}](fig2a-IC.eps){width="100.00000%"} ![ The intrinsic charm (left panel) and gluon (right panel) distributions in a proton as a function of longitudinal momentum fraction $x$. Here different sets of collinear PDFs are shown including uncertainties. []{data-label="fig:xcxg"}](fig2b-glue.eps){width="100.00000%"} In the present study we go beyond the leading-order mechanisms and include also higher-order processes that are expected to play an important role. We take into account all $2\to 3$ and $2\to 4$ processes at tree level that lead to the production of a charm quark or antiquark and are driven by the $gc$ and $qc$ (or $\bar q c$) initial-state interactions. Examples of the diagrams corresponding to these processes are shown in Fig. \[figHO\]. 
The relevant cross sections are calculated with the help of the <span style="font-variant:small-caps;">KaTie</span> Monte Carlo generator [@vanHameren:2016kkz]. ![ Examples of the $2\to3$ (left panel) and the $2\to4$ (right panel) mechanisms of production of $c$ quarks or $\bar c$ antiquarks in the collinear parton model. []{data-label="figHO"}](2to3-IC.eps){width="100.00000%"} ![ Examples of the $2\to3$ (left panel) and the $2\to4$ (right panel) mechanisms of production of $c$ quarks or $\bar c$ antiquarks in the collinear parton model. []{data-label="figHO"}](2to4-IC.eps){width="100.00000%"} Since the final states considered in the present work contain massless partons (minijets), it is necessary to regularize the cross section, which has a singularity in the $p_{t} \to 0$ limit. We follow here the well-known prescription adopted in <span style="font-variant:small-caps;">Pythia</span>, where a special suppression factor is introduced at the cross-section level [@Sjostrand:2014zea]: $$F(p_t) = \frac{p_t^2}{ p_{t0}^2 + p_t^2 } \; \label{Phytia_formfactor}$$ for each of the outgoing massless partons with transverse momentum $p_t$, where $p_{t0}$ is a free parameter of the form factor. The hybrid model ---------------- In the asymmetric kinematic situation $x_1 \ll x_2$ described above, the cross section for the processes under consideration can also be expressed within the so-called hybrid factorization model motivated by the works in Refs. [@Deak:2009xt; @Kutak:2012rf]. In this framework the small-$x$ gluon is taken to be off mass shell and the differential cross section e.g. 
for $pp \to g c X$ via the $g^* c \to g c$ mechanism reads: $$\begin{aligned} d \sigma_{pp \to gc X} = \int d^ 2 k_{t} \int \frac{dx_1}{x_1} \int dx_2 \; {\cal F}_{g^{*}}(x_1, k_{t}^{2}, \mu^2) \; c(x_2, \mu^2) \; d\hat{\sigma}_{g^{*}c \to gc} \; ,\end{aligned}$$ where ${\cal F}_{g^{*}}(x_1, k_{t}^{2}, \mu^2)$ is the unintegrated gluon distribution (uPDF) in one proton and $c(x_2, \mu^2)$ is a collinear PDF in the second one. The $d\hat{\sigma}_{g^{*}c \to gc}$ is the hard partonic cross section obtained from a gauge-invariant tree-level off-shell amplitude. In the present paper we shall not discuss the validity of the hybrid model at the theoretical level and concentrate only on its phenomenological application to forward production. A derivation of the hybrid factorization from the dilute limit of the Color Glass Condensate approach can be found in Ref. [@Kotko:2015ura]. The gluon uPDF depends on the gluon longitudinal momentum fraction $x$, the transverse momentum squared $k_t^2$ of the gluon entering the hard process, and in general also on the (factorization) scale of the hard process $\mu^2$. In the numerical calculations we take different models of unintegrated parton densities from the literature: the JH-2013-set2 [@Hautmann:2013tba] model obtained from the CCFM evolution equations, the Kutak-Sapeta (KS) [@Kutak:2014wga] model, being a solution of the linear and non-linear BK evolution equations, the DGLAP-based PB-NLO-set1 [@Martinez:2018jxt] model from the parton-branching (PB) method, and the Kimber-Martin-Ryskin (KMR) prescription [@Watt:2003mx]. All of the models, except the PB-NLO-set1, are constructed in a way that allows for the resummation of extra hard emissions from the uPDFs. This means that in the hybrid model, already at leading order, a part of the radiative higher-order corrections can be effectively included via the uPDFs. However, this is true only for those uPDF models in which extra emissions of soft and even hard partons are encoded, including $k_{t}^{2} > \mu^{2}$ configurations. 
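The convolution structure of the hybrid formula above can be illustrated with toy ingredients; the shapes below are purely illustrative placeholders, not any of the fitted uPDF sets listed above:

```python
import math

LAM2 = 1.0  # toy kt^2 scale parameter, GeV^2 (illustrative)

def F_gluon_updf(x, kt2):
    """Toy unintegrated gluon: a small-x rise times a normalized kt^2 profile."""
    return x**(-0.3) * (1.0 - x)**5 * math.exp(-kt2 / LAM2) / LAM2

def c_collinear(x):
    """Toy large-x intrinsic-charm shape (integrates to 1/3)."""
    return 20.0 * x**2 * (1.0 - x)**3

def sigma_hybrid(sigma_hat, nx=4000, nk=400, kt2_max=40.0):
    """sigma = int dkt2 dx1 dx2 F(x1, kt2) c(x2) sigma_hat, by the midpoint rule.
    For the constant sigma_hat used here the integrals factorize; a realistic
    sigma_hat(x1, x2, kt2) would require a joint (e.g. Monte Carlo) integration."""
    hx, hk = 1.0 / nx, kt2_max / nk
    int_F = sum(F_gluon_updf((i + 0.5) * hx, (k + 0.5) * hk) * hx * hk
                for i in range(nx) for k in range(nk))
    int_c = sum(c_collinear((j + 0.5) * hx) * hx for j in range(nx))
    return int_F * int_c * sigma_hat

print(sigma_hybrid(1.0))
```

With these factorized toy inputs the result can be checked against the closed-form product of Euler beta functions, which is what the test below does.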
Then, when calculating the charm production cross section via *e.g.* the $g^* c \to g c$ mechanism, one could expect to effectively include contributions related to an additional extra partonic emission (*i.e.* $g^* c \to g g c$), which in some sense plays the role of an initial-state parton shower. In Fig. \[fig-uPDFs\] we plot the gluon transverse momentum dependence of the different gluon uPDFs from the literature. At small $x$-values and low scales the differences between the models are quite significant. ![ The ingoing gluon transverse momentum distributions from the different models of unintegrated gluon densities in a proton. []{data-label="fig-uPDFs"}](uPDFs_kT.eps){width="100.00000%"} There is ongoing intensive work on the construction of a full NLO Monte Carlo generator for off-shell initial-state partons, which is expected to be finished in the near future [@private-Hameren]. This framework seems to be necessary for phenomenological studies based on the PB uPDFs [@Maciula:2019izq]. The extra hard emissions from the DGLAP-based uPDFs are usually strongly suppressed, which leaves room for higher-order terms. Therefore, in this case one needs to include the usual leading-order subprocesses properly matched with a number of additional higher-order radiative corrections at the level of the hard matrix elements. At the moment, this can be done only at tree level. In consequence, the numerical calculations with the PB-NLO-set1 uPDFs are done including in addition all $2\to 3$ and $2\to 4$ channels of partonic subprocesses that lead to the production of a charm quark or antiquark and are driven by the $g^*c$ and $q^*c$ (or $\bar q^* c$) initial-state interactions, similarly to the collinear case. Here we follow a dedicated matching procedure to avoid double counting, as introduced in Ref. [@Maciula:2019izq] and further used in Refs. [@Lipatov:2019izq; @Maciula:2020cfy]. 
The $\bm{k_{T}}$-factorization ------------------------------ Another possible theoretical approach for the processes considered here is the $k_{T}$-factorization [@kTfactorization]. This framework extends the hybrid-model formalism and includes in addition effects related to the off-shellness of the initial-state charm quark. In principle, it allows one to study the intrinsic charm contribution to charm production via mechanisms where both incident partons are off mass shell. The topology of the diagrams present in the $k_{T}$-factorization in the case of intrinsic charm studies is not the same as in the collinear case. Here one can follow two different ways of calculation and consider: - the $g^* c^* \to gc$ (and/or $q^* c^* \to qc$, $\bar{q}^* c^* \to \bar qc$) mechanism, - the $g^* c^* \to c$ mechanism. The second one is not present in the other approaches and can be treated as leading order. The first mechanism directly corresponds to the scheme of the calculations applied in the hybrid model and can be classified as higher order. However, the degree of their mutual overlap is not clear and strongly depends on the model of unintegrated PDFs used in the numerical calculations. ### The $2 \to 2$ partonic mechanism The $k_{T}$-factorization cross section for the $pp \to gc X$ reaction driven by the typical $2\to2$ mechanisms, e.g. 
like the $g^* c^* \to gc$, can be expressed as follows: $$\begin{aligned} \label{LO_kt-factorization} \frac{d \sigma(p p \to g c \, X)}{d y_1 d y_2 d^2p_{1,t} d^2p_{2,t}} &=& \int \frac{d^2 k_{1,t}}{\pi} \frac{d^2 k_{2,t}}{\pi} \frac{1}{16 \pi^2 (x_1 x_2 s)^2} \; \overline{ | {\cal M}^{\mathrm{off-shell}}_{g^* c^* \to g c} |^2} \\ && \times \; \delta^{2} \left( \vec{k}_{1,t} + \vec{k}_{2,t} - \vec{p}_{1,t} - \vec{p}_{2,t} \right) \; {\cal F}_g(x_1,k_{1,t}^2,\mu^2) \; {\cal F}_c(x_2,k_{2,t}^2,\mu^2) \; \nonumber , \end{aligned}$$ where ${\cal F}_g(x_1,k_{1,t}^2,\mu^2)$ and ${\cal F}_c(x_2,k_{2,t}^2,\mu^2)$ are the gluon and intrinsic charm quark uPDFs of the two colliding hadrons, respectively, and ${\cal M}^{\mathrm{off-shell}}_{g^* c^* \to g c}$ is the off-shell matrix element for the hard subprocess. Here the Feynman diagrams are the same as those shown in Fig. \[fig1\]. The extra integration is over the transverse momenta of the initial partons. One keeps exact kinematics from the very beginning, together with the additional hard dynamics coming from the transverse momenta of the incident partons. The explicit treatment of the transverse momenta makes the approach very efficient in studies of correlation observables. Considering forward production of charm, one should not expect the initial-state (intrinsic) charm quark to have large transverse momenta; rather small deviations from the collinear limit are more physically motivated here. Therefore, for the unintegrated charm distribution ${\cal F}_c(x,k_{t}^2,\mu^2)$ we will assume a Gaussian distribution with a rather small smearing parameter $\sigma_{0}$. 
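The Gaussian smearing ansatz announced here is easy to check numerically: the $k_t^2$ integral of $\pi G(k_t^2)$ equals unity for any $\sigma_0$, so the collinear density $xc(x)$ is recovered, while the smearing introduces an average $\langle k_t^2 \rangle = 2\sigma_0^2$ (a minimal sketch with an illustrative value of $\sigma_0$):

```python
import math

SIGMA0 = 0.5  # GeV; illustrative value of the free smearing parameter

def gauss_profile(kt2, sigma0=SIGMA0):
    """Two-dimensional Gaussian G(kt^2) = exp(-kt^2/(2 sigma0^2)) / (2 pi sigma0^2)."""
    return math.exp(-kt2 / (2.0 * sigma0**2)) / (2.0 * math.pi * sigma0**2)

def moment(power, sigma0=SIGMA0, kt2_max=50.0, n=200_000):
    """int dkt2 (kt2)^power * pi * G(kt2), evaluated with the midpoint rule."""
    h = kt2_max / n
    return sum(((i + 0.5) * h)**power * math.pi * gauss_profile((i + 0.5) * h, sigma0)
               for i in range(n)) * h

norm = moment(0)      # analytically 1: the kt2 integral restores x*c(x)
mean_kt2 = moment(1)  # analytically 2*sigma0^2
print(norm, mean_kt2)
```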
The unintegrated $c$ ($\bar c$) distributions are constructed as: $${\cal F}_c(x,k_t^2) = \pi \; G(k_t^2) \cdot x c ( x,\mu^2 ), \label{UPDF_c}$$ where $$G(k_t^2) = \frac{1}{2 \pi \sigma_0^2} \exp\left( \frac{-k_t^2}{2 \sigma_0^2} \right)$$ is a standard two-dimensional Gaussian distribution and $\sigma_0$ is in principle a free parameter which governs the nonperturbative effects in the proton wave function. The factor $\pi$ arises from our normalization of the unintegrated parton distributions: $$\int d k_t^2 {\cal F}_c(x,k_t^2) = x c(x) \; . \label{normalization_of_UPDFs}$$ The hard off-shell matrix element ${\cal M}^{\mathrm{off-shell}}_{g^* c^* \to g c}$ is known only in the massless limit. Within this limit the relevant calculations can be done in the <span style="font-variant:small-caps;">KaTie</span> Monte Carlo code, where the matrix element is computed numerically. Its analytic form can be obtained within the Parton-Reggeization-Approach (PRA) and was published in Ref. [@Nefedov:2013ywa]. For the higher-order tree-level diagrams with off-shell initial-state partons and with extra partonic legs in the final state one can also use <span style="font-variant:small-caps;">KaTie</span>, which is very efficient in this type of calculations and which was very recently equipped with tools that additionally allow for the generation of the analytic form of the matrix elements for given hard multi-leg processes [@vanHameren:2016kkz]. ### The $2\to 1$ partonic mechanism In the $k_T$-factorization framework the charm quark/antiquark can also be created directly via a $2 \to 1$ fusion mechanism. A relevant formalism was used previously for the production of forward pions in Ref. [@CS2006]. In Fig. \[fig:kt-factorization\_diagrams\] we show the basic graphs for charm quark production within the $2 \to 1$ mechanism. ![Two leading-order diagrams for charm quark (antiquark) production relevant for the $k_t$-factorization approach. 
The extra explicit gluonic emissions suggest the use of unintegrated gluon distributions. []{data-label="fig:kt-factorization_diagrams"}](2to1-IC-a.eps){width="100.00000%"} ![Two leading-order diagrams for charm quark (antiquark) production relevant for the $k_t$-factorization approach. The extra explicit gluonic emissions suggest the use of unintegrated gluon distributions. []{data-label="fig:kt-factorization_diagrams"}](2to1-IC-b.eps){width="100.00000%"} The emitted charm-quark (or antiquark) momentum-space distribution can be written as: $$\begin{aligned} \frac{d \sigma(p p \to c \, X)}{d y d^2 p_t}&& = \frac{16 N_c}{N_c^2 - 1} \cdot \frac{4}{9} \cdot \frac{1}{m_t^2} \times \; \nonumber \\ &&\int \alpha_s(\Omega^2) f_g(x_1,k_{1,t}^2,\mu^2) f_c(x_2,k_{2,t},\mu^2) \delta\left( \vec{k}_{1,t} + \vec{k}_{2,t} - \vec{p}_t \right) d^2 k_{1,t} d^2 k_{2,t} \; . \nonumber \\ \label{kt_factorization_at_LO}\end{aligned}$$ In the formula above the $f$’s are the unintegrated gluon and charm quark/antiquark distributions. For the unintegrated gluon distributions we will take the ones used recently in the literature in the context of $\eta_c$ or $\chi_c$ production [@BPSS2019; @BPSS2020], where the kinematics is similar. For $\Omega^2$ we can take $\Omega^2 = \min(m_t^2,k_{1t}^2,k_{2t}^2)$ or just $\Omega^2 = m_t^2$. The longitudinal momentum fractions are calculated as $$\begin{aligned} x_1 &=& \frac{m_t}{\sqrt{s}} \exp(+y) \; , \nonumber \\ x_2 &=& \frac{m_t}{\sqrt{s}} \exp(-y) \; .\end{aligned}$$ Results ======= We divide this section into four subsections. The first three are devoted to the numerical calculations obtained with the collinear, hybrid and $k_T$-factorization approaches, respectively. 
The last subsection contains explicit predictions for the impact of the intrinsic charm mechanism on forward charm production in different experiments, including low-energy experiments like fixed-target LHCb and SHIP, as well as high-energy FCC and LHC experiments, like the recently proposed FASER. The collinear approach ---------------------- We start the presentation of our numerical predictions with the results for the $pp \to gc X$ reaction driven by the $gc \to gc$ leading-order mechanism, calculated in the collinear framework with the massive matrix element and kinematics, for the energy $\sqrt{s} = 7$ TeV. Here we take the gluon and the intrinsic charm distributions as encoded in the CT14nnloIC collinear PDFs. The three different lines in Fig. \[fig1\] correspond to different choices of the $p_{t0}$ parameter used for the regularization of the cross section. We see that the predictions for the charm quark transverse momentum (left panel) and rapidity (right panel) distributions are very sensitive to the choice of this parameter, especially at small charm quark transverse momenta, which also affects the rapidity spectrum. In the numerical studies below, $p_{t0}=1.0$ GeV will be taken as the default choice, which corresponds to the central value of the uncertainty related to the choice of this parameter. ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g c \rightarrow g c$ mechanism calculated within the intrinsic charm concept in the collinear approach with the matrix element and kinematics for a massive charm quark. Here three different values of the regularization parameter $p_{T0}$ are used. Details are specified in the figure. []{data-label="fig1"}](dsig_dpt_coll_pT0.eps){width="100.00000%"} ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. 
The results correspond to the $g c \rightarrow g c$ mechanism calculated within the intrinsic charm concept in the collinear approach with the matrix element and kinematics for a massive charm quark. Here three different values of the regularization parameter $p_{T0}$ are used. Details are specified in the figure. []{data-label="fig1"}](dsig_dy_coll_pT0.eps){width="100.00000%"} In Fig. \[fig2\] we again present collinear results for the leading-order $gc \to gc$ mechanism, but here we applied four different sets of the intrinsic charm distribution in a proton at the initial scale $\mu = 1.3$ GeV, as incorporated in the CT14nnloIC PDFs. Again, we show the differential cross sections as a function of the charm quark transverse momentum (left panel) and rapidity (right panel). The solid, long-dashed, dotted and dash-dotted lines correspond to the BHPS 1%, BHPS 3.5%, sea-like LS and sea-like HS models, respectively. The sea-like models lead to a larger cross section than the BHPS models in the midrapidity region. On the other hand, a larger cross section in the forward direction is obtained within the BHPS models. Clearly, large uncertainties due to the intrinsic charm input are found. ![ The same as in Fig. \[fig1\] but here results obtained with the four different scenarios for intrinsic charm content in a proton are shown. Details are specified in the figure. []{data-label="fig2"}](dsig_dpt_coll_pT01_diffIC.eps){width="100.00000%"} ![ The same as in Fig. \[fig1\] but here results obtained with the four different scenarios for intrinsic charm content in a proton are shown. Details are specified in the figure. []{data-label="fig2"}](dsig_dy_coll_pT01_diffIC.eps){width="100.00000%"} The intrinsic charm component in the proton is not the only source of uncertainties related to the collinear PDFs. As shown in Fig. \[fig3\], the gluon PDF also leads to significant uncertainties in the predictions. 
Here we show a comparison of the predictions obtained with the default CT14nnloIC (solid lines), the JR14NLO08VF (dotted lines) and the MMHT2014nlo (dashed lines) PDF sets. The gluon PDFs provided by the different groups are probed here at small $x$ and relatively small scales and lead to quite different results, especially at small transverse momenta of the charm quark. ![ The same as in Fig. \[fig1\] but here results obtained with the three different collinear gluon PDFs are shown. Details are specified in the figure. []{data-label="fig3"}](dsig_dpt_coll_pT0_gluon.eps){width="100.00000%"} ![ The same as in Fig. \[fig1\] but here results obtained with the three different collinear gluon PDFs are shown. Details are specified in the figure. []{data-label="fig3"}](dsig_dy_coll_pT0_gluon.eps){width="100.00000%"} Now we wish to compare three different schemes of the collinear calculations for the $pp \to gc X$ reaction via the $gc \to gc$ leading-order partonic subprocess. In Fig. \[fig4\] we present theoretical distributions obtained with the matrix element with massive quarks (called massive ME for brevity) and kinematics including the quark masses (solid lines, our default choice), with the massless matrix element and massless kinematics (dotted histograms), as well as with the massless matrix element and kinematics corrected for the charm quark mass (solid histograms). In each of the cases we kept the same choice of the renormalization scale $\mu_{R}^{2} = p_{t0}^{2}+p_{t}^{2}+m_{c}^{2}$ and the factorization scale $\mu_{F}^{2} = p_{t}^{2}+m_{c}^{2}$. The charm quark transverse momentum distributions (left panel) are almost identical; some very small (almost invisible) discrepancies appear only at extremely small transverse momenta. The rapidity distributions (right panel) are found to be very sensitive to the charm quark mass effects. Neglecting the charm quark mass in the kinematics leads to a shift of the rapidity distribution toward the far-forward direction. 
Correction of the kinematics by inclusion of the masses of the outgoing particles in the calculation of the $x$-values seems to approximately restore the full massive calculation. This step seems to be necessary in the case of massless calculations; otherwise the shapes of the predicted rapidity distributions may not be correct. ![ The same as in Fig. \[fig1\] but here the results of three different schemes of the collinear calculations are compared. The solid lines correspond to the calculations with massive matrix element and kinematics, the dotted histograms show the results for the calculations with massless matrix element and kinematics, and the solid histograms represent the calculations with massless matrix element and kinematics corrected for the charm quark mass. []{data-label="fig4"}](dsig_dpT_coll_kinematics.eps){width="100.00000%"} ![ The same as in Fig. \[fig1\] but here the results of three different schemes of the collinear calculations are compared. The solid lines correspond to the calculations with massive matrix element and kinematics, the dotted histograms show the results for the calculations with massless matrix element and kinematics, and the solid histograms represent the calculations with massless matrix element and kinematics corrected for the charm quark mass. []{data-label="fig4"}](dsig_dy_coll_kinematics.eps){width="100.00000%"} Having discussed the dominant leading-order mechanism, we now move beyond it and consider the importance of higher-order corrections for the charm quark forward-production mechanisms with intrinsic charm in the initial state. In Fig. \[fig5\] we compare our collinear predictions for the leading-order $2\to2$ mechanisms, both $gc \to gc$ (dotted histograms) and $qc\to qc$ (short-dashed histograms), shown separately, and for the higher-order $2\to3$ (long-dashed histograms) and $2\to 4$ (dash-dotted histograms) mechanisms calculated at tree level. 
A sum of the four different components, denoted as $2\to 2+3+4$, is also shown, but it does not follow any merging procedure here[^3]. For the higher-order contributions the partonic subprocesses with $gc$ and $qc$ initial states are added together. We report a huge contribution to the cross section coming from the higher-order mechanisms (more than an order of magnitude). This clearly shows that the leading-order mechanisms are not sufficient to obtain reasonable predictions for the impact of the intrinsic charm concept on forward charm quark production. Full NLO and even NNLO frameworks are required for precise studies of the subject within the collinear parton model. The situation for other approaches, such as the hybrid model and $k_{T}$-factorization, is quite different from the collinear case, as will be discussed in the next two subsections. ![ The same as in Fig. \[fig1\] but here results of the $2 \to 2$ ($gc$ and $qc$ initial states), $2\to 3$ ($gc + qc$ initial states) and $2\to4$ ($gc + qc$ initial states) mechanisms are shown separately. The calculations are done with massless matrix element and kinematics corrected for the charm quark mass. Details are specified in the figure. []{data-label="fig5"}](dsig_dpT_coll_mechanisms.eps){width="100.00000%"} ![ The same as in Fig. \[fig1\] but here results of the $2 \to 2$ ($gc$ and $qc$ initial states), $2\to 3$ ($gc + qc$ initial states) and $2\to4$ ($gc + qc$ initial states) mechanisms are shown separately. The calculations are done with massless matrix element and kinematics corrected for the charm quark mass. Details are specified in the figure. []{data-label="fig5"}](dsig_dy_coll_mechanisms.eps){width="100.00000%"} The hybrid model ---------------- We now turn to the presentation of our numerical results obtained in the hybrid model. Here the incident small-$x$ parton is assumed to be off-mass-shell, in contrast to the large-$x$ intrinsic charm which is kept on-shell. In Fig. 
\[fig6\] we show theoretical predictions for charm quark transverse momentum (left panel) and rapidity (right panel) distributions for forward charm production within the leading-order $g^*c \to gc$ and $q^*c \to qc$ mechanisms. Here the KMR-CT14lo gluon and light quark/antiquark uPDFs are used. We observe that much larger cross sections are obtained here than in the analogous calculations done in the collinear framework (see the two lowest histograms in Fig. \[fig5\]). In particular, in the hybrid model the gluonic component is much larger than its collinear counterpart. Significant effects related to the off-shellness of the incident gluons are found. At far forward rapidities of the produced charm quark, the transverse momenta (virtualities) of the incident small-$x$ gluons start to play a very important role and lead to a sizeable enhancement of the predicted cross section with respect to the leading-order collinear calculations. ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g^* c \rightarrow g c$ and $q^* c \rightarrow q c$ mechanisms calculated within the intrinsic charm concept in the hybrid model with off-shell initial state gluon and/or off-shell light-quark. Here the KMR-CT14lo unintegrated parton densities were used. []{data-label="fig6"}](dsig_dpT_hyb_kmr.eps){width="100.00000%"} ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g^* c \rightarrow g c$ and $q^* c \rightarrow q c$ mechanisms calculated within the intrinsic charm concept in the hybrid model with off-shell initial state gluon and/or off-shell light-quark. Here the KMR-CT14lo unintegrated parton densities were used. 
[]{data-label="fig6"}](dsig_dy_hyb_kmr.eps){width="100.00000%"} Since in the hybrid model the leading-order quark component $q^* c \rightarrow q c$ is found to be negligible, one can safely concentrate on the gluonic $g^* c \rightarrow g c$ channel only. In Fig. \[fig7\] we show the relevant predictions for different unintegrated gluon densities from the literature. We compare results obtained with the KMR-CT14lo (solid histograms), the CCFM JH-2013-set2 (dashed histograms) as well as the KS-linear (dotted histograms) and KS-nonlinear (dash-dotted histograms) gluon uPDFs. Different models lead to quite different results; however, they seem to be consistent with each other up to a factor of 5. The main differences appear at larger quark transverse momenta. At small transverse momenta the predictions within the KMR-CT14lo, the JH-2013-set2 and the KS-linear uPDFs coincide. This also translates into the rapidity spectrum. Only the KS-nonlinear uPDF leads to a somewhat different behaviour of the cross section at small $p_{T}$’s. We observe that both the transverse momentum and rapidity distributions of the charm quark are sensitive to the non-linear evolution effects, which lead here to a sizeable damping of the predicted cross section. Thus, forward production of charm within the intrinsic charm concept might be a very good testing ground for studies of the non-linear term in the evolution of unintegrated gluon densities and may shed new light on the phenomenon of parton saturation. ![ The same as in Fig. \[fig6\] but here results for four different unintegrated gluon densities in a proton are shown. Here only the $g^* c \rightarrow g c$ mechanism is taken into account. []{data-label="fig7"}](dsig_dpT_hyb_uGDFs.eps){width="100.00000%"} ![ The same as in Fig. \[fig6\] but here results for four different unintegrated gluon densities in a proton are shown. Here only the $g^* c \rightarrow g c$ mechanism is taken into account. 
[]{data-label="fig7"}](dsig_dy_hyb_uGDFs.eps){width="100.00000%"} Above, we have used those gluon uPDF models that are assumed to allow for an effective resummation of extra real emissions (real higher-order terms). Therefore, they can be successfully used in phenomenological studies based even on leading-order matrix elements (see a discussion in Refs. [@Maciula:2019izq; @Maciula:2020cfy]). Here we wish to present results obtained within a scheme of the calculations where the higher-order corrections are not resummed in the uPDF but are taken into account via the hard matrix elements. This procedure can be tested with the help of the DGLAP-based Parton-Branching uPDFs, as was proposed in Ref. [@Maciula:2019izq] and further applied in Refs. [@Lipatov:2019izq; @Maciula:2020cfy]. In Fig. \[fig8\] we show predictions of the hybrid model for the $2 \to 2$, $2 \to 3$ and $2 \to 4$ mechanisms, as well as for their sum $2 \to 2+3+4$ obtained using a dedicated merging procedure. The results are calculated with the PB-NLO-set1 quark and gluon uPDFs. For the leading-order $2\to 2$ mechanisms we show the $g^*c$ and $q^*c$ channels separately, while for the higher-order components we plot the sum of all possible gluonic and quark channels. As in the collinear case, the higher-order mechanisms are found to be very important here as well. ![ The same as in Fig. \[fig6\] but here results for PB-NLO-set1 unintegrated parton densities obtained within the $2 \to 2+3+4$ scheme of the calculation. Here, the $2\to2$, $2\to3$, and $2\to4$ components as well as their sum $2\to 2+3+4$ obtained including merging procedure are shown separately. []{data-label="fig8"}](dsig_dpT_hyb_pb-dce.eps){width="100.00000%"} ![ The same as in Fig. \[fig6\] but here results for PB-NLO-set1 unintegrated parton densities obtained within the $2 \to 2+3+4$ scheme of the calculation. 
Here, the $2\to2$, $2\to3$, and $2\to4$ components as well as their sum $2\to 2+3+4$ obtained including merging procedure are shown separately. []{data-label="fig8"}](dsig_dy_hyb_pb-dce.eps){width="100.00000%"} For better transparency, in Fig. \[fig9\] we compare the hybrid model results obtained with the KMR-CT14lo (solid histograms) and with the PB-NLO-set1 (dashed histograms) uPDFs, corresponding to the two different hybrid calculation schemes, together with the results obtained in the collinear approach (dotted histograms). Both types of hybrid model calculations seem to lead to very similar predictions. This seems to qualitatively justify the proposed $2 \to 2+3+4$ hybrid calculation scheme with the PB uPDFs and the applied merging. On the other hand, the collinear $2 \to 2+3+4$ results seem to be larger by a factor of 2 than their hybrid model counterparts. However, this might be related to the lack of a relevant merging procedure in the collinear case. ![ The same as in Fig. \[fig6\] but here we compare results for the CT14nnlo collinear PDFs with $2 \to 2+3+4$ collinear model calculations, for the KMR-CT14lo uPDFs with the $2\to 2$ hybrid model calculations and for the PB-NLO-set1 uPDFs with $2 \to 2+3+4$ hybrid model calculations including merging. []{data-label="fig9"}](dsig_dpT_hyb_pb-dce-vs-kmr-vs-nlo.eps){width="100.00000%"} ![ The same as in Fig. \[fig6\] but here we compare results for the CT14nnlo collinear PDFs with $2 \to 2+3+4$ collinear model calculations, for the KMR-CT14lo uPDFs with the $2\to 2$ hybrid model calculations and for the PB-NLO-set1 uPDFs with $2 \to 2+3+4$ hybrid model calculations including merging. []{data-label="fig9"}](dsig_dy_hyb_pb-dce-vs-kmr-vs-nlo.eps){width="100.00000%"} The $\bm{k_{T}}$-factorization approach --------------------------------------- Now we wish to present results obtained within the $k_{T}$-factorization approach. 
Here we also take into account effects related to the off-shellness of the $c$ quark of the intrinsic charm in the proton. The transverse momentum dependent intrinsic charm uPDF is obtained by Gaussian smearing of the collinear PDF. A rather small smearing parameter is used, which does not allow for large transverse momenta of the intrinsic charm. This seems appropriate for the case of forward production of charm. For the unintegrated gluon density the KMR-CT14lo model is used. In Fig. \[fig10\] we show predictions for the $g^*c^* \to gc$ mechanism with both initial state partons being off-shell. Here three different values of the smearing parameter in the calculation of the intrinsic charm uPDF are used: $\sigma_{0} = 0.5$ GeV (solid histograms), $3.5$ GeV (dotted histograms) and $7.0$ GeV (dashed histograms). The larger the value of $\sigma_{0}$, the smaller the cross section obtained at small outgoing charm quark transverse momenta (left panel). The same is true for the rapidity spectrum in the forward region (right panel). ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g^* c^* \rightarrow g c$ mechanism calculated within the intrinsic charm concept in the $k_{T}$-factorization approach with both off-shell initial state partons. Here the KMR-CT14lo unintegrated gluon density and Gaussian $k_{t}$-distribution for off-shell charm quark were used. We show results for different values of the smearing parameter $\sigma$. []{data-label="fig10"}](dsig_dpT_kTfact-2to2-gauss.eps){width="100.00000%"} ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g^* c^* \rightarrow g c$ mechanism calculated within the intrinsic charm concept in the $k_{T}$-factorization approach with both off-shell initial state partons. 
Here the KMR-CT14lo unintegrated gluon density and Gaussian $k_{t}$-distribution for off-shell charm quark were used. We show results for different values of the smearing parameter $\sigma$. []{data-label="fig10"}](dsig_dy_kTfact-2to2-gauss.eps){width="100.00000%"} In Fig. \[fig11\] we illustrate the mutual relation between the results obtained within the hybrid and the $k_T$-factorization frameworks. When the smearing parameter in the calculation of the intrinsic charm uPDF is small, e.g. $\sigma_{0} = 0.5$ GeV, the hybrid model $g^*c \to gc$ results coincide with the $g^*c^* \to gc$ results obtained within the full $k_T$-factorization approach. ![ The same as in Fig. \[fig10\] but here we compare results for the hybrid $g^*c\to gc$ and the $k_{T}$-factorization $g^*c^* \to gc$ calculations obtained with the KMR-CT14lo unintegrated gluon densities. The off-shell charm quark Gaussian $k_{t}$-distribution is obtained with the smearing parameter $\sigma_0 = 0.5$ GeV. []{data-label="fig11"}](dsig_dpT_hyb-vs-kTfact.eps){width="100.00000%"} ![ The same as in Fig. \[fig10\] but here we compare results for the hybrid $g^*c\to gc$ and the $k_{T}$-factorization $g^*c^* \to gc$ calculations obtained with the KMR-CT14lo unintegrated gluon densities. The off-shell charm quark Gaussian $k_{t}$-distribution is obtained with the smearing parameter $\sigma_0 = 0.5$ GeV. []{data-label="fig11"}](dsig_dy_hyb-vs-kTfact.eps){width="100.00000%"} Finally, we wish to present results of the $k_{T}$-factorization approach for the $g^*c^* \to c$ mechanism. In Fig. \[fig12\] we compare the corresponding predictions obtained with the four different gluon uPDFs: the KMR-CT14lo (solid lines), the JH-2013-set2 (dotted lines), the KS-linear (dash-dotted lines) and the KS-nonlinear (dashed lines). Different models lead to quite different results. 
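The Gaussian smearing used in this subsection to build the transverse-momentum-dependent intrinsic charm density can be sketched as follows; the normalization convention and the toy collinear density in the example are our own illustrative assumptions, not the actual implementation:

```python
import math

def ic_updf(x, kt, collinear_pdf, sigma0=0.5):
    """Unintegrated intrinsic-charm density obtained by Gaussian
    smearing of a collinear density f(x):
        f(x, kt) = f(x) * exp(-kt^2 / sigma0^2) / (pi * sigma0^2),
    normalized so that integrating over d^2 kt recovers f(x)."""
    gauss = math.exp(-(kt / sigma0) ** 2) / (math.pi * sigma0 ** 2)
    return collinear_pdf(x) * gauss
```

Since the $k_t$-integral of the smeared density is independent of $\sigma_{0}$, increasing $\sigma_{0}$ redistributes the same integrated density towards larger $k_t$ and lowers the spectrum at small transverse momenta, in line with the trend seen in Fig. \[fig10\].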
The discrepancies between the uPDF models obtained here seem to be larger than in the corresponding case of the $g^*c \to gc$ calculations within the hybrid model. ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g^* c^* \rightarrow c$ mechanism calculated within the intrinsic charm concept in the $k_{T}$-factorization approach with both off-shell initial state partons. Here the Gaussian $k_{t}$-distribution for the off-shell charm quark was used. We show results for different gluon uPDFs. []{data-label="fig12"}](dsig_dpT_kTfact_2to1.eps){width="100.00000%"} ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. The results correspond to the $g^* c^* \rightarrow c$ mechanism calculated within the intrinsic charm concept in the $k_{T}$-factorization approach with both off-shell initial state partons. Here the Gaussian $k_{t}$-distribution for the off-shell charm quark was used. We show results for different gluon uPDFs. []{data-label="fig12"}](dsig_dy_kTfact_2to1.eps){width="100.00000%"} Predictions for future experiments ---------------------------------- Before moving to predictions for different present and future experiments, we wish to summarize the conclusions drawn in the previous subsections by a direct comparison of the results corresponding to the approaches discussed above. ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. Here we compare predictions of the three different approaches used in the previous subsections: the $2 \to 2+3+4$ collinear, the hybrid $g^*c \to gc$ and the $k_{T}$-factorization $g^*c^* \to c$ calculations. 
[]{data-label="fig13"}](dsig_dpT_summary.eps){width="100.00000%"} ![ The charm quark transverse momentum (left) and rapidity (right) differential cross sections for $pp$-scattering at $\sqrt{s}=7$ TeV. Here we compare predictions of the three different approaches used in the previous subsections: the $2 \to 2+3+4$ collinear, the hybrid $g^*c \to gc$ and the $k_{T}$-factorization $g^*c^* \to c$ calculations. []{data-label="fig13"}](dsig_dy_summary.eps){width="100.00000%"} In Fig. \[fig13\] we compare predictions of the three different approaches used in the previous subsections: the $2 \to 2+3+4$ collinear (dashed histograms), the hybrid $g^*c \to gc$ (solid histograms) and the $k_{T}$-factorization $g^*c^* \to c$ (solid lines) calculations. The different models lead to very different results, with more than one order of magnitude difference between the lowest and the highest predicted cross sections. The huge cross section for $g c \to c$ or $c g \to c$ may be partly due to ignoring emissions other than $c$ or $\bar c$ in the evolution of $x_1$ and $x_2$. These large uncertainties of the predictions can only be reduced by experiments at forward directions. Forward charm production data sets that will be dominated by the contribution from intrinsic charm are necessary to draw definite conclusions about the level of applicability of the different theoretical approaches. Therefore, we now present results of our study of the impact of the intrinsic charm component on forward charm particle production in existing and future experiments at different energies. We start with predictions for the high energy experiments at the LHC and the FCC, at $\sqrt{s}=13$ and $50$ TeV, respectively (top and bottom panels in Fig. \[fig14\]). In the LHC case the considered kinematics correspond to the planned FASER experiment. 
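To see why the far-forward region is the natural place to look for intrinsic charm, one can estimate the momentum fractions probed at FASER-like rapidities with the simple $2\to 1$ kinematics $x_{1,2} = m_T e^{\pm y}/\sqrt{s}$. A small illustrative estimate (the numbers below are ours, not taken from the quoted predictions):

```python
import math

def probed_x(y, pt, sqrt_s, m_c=1.5):
    """Leading-order 2 -> 1 estimate of the momentum fractions
    probed when a charm quark with transverse momentum pt is
    produced at rapidity y: x_{1,2} = m_T exp(+-y) / sqrt(s)."""
    mt = math.sqrt(pt**2 + m_c**2)
    return mt * math.exp(y) / sqrt_s, mt * math.exp(-y) / sqrt_s

# FASER-like rapidity y = 7 at sqrt(s) = 13 TeV:
x1, x2 = probed_x(7.0, 1.5, 13000.0)
```

One finds $x_1 \approx 0.18$ and $x_2$ of order $10^{-7}$: the charm side is probed exactly in the large-$x$ region where an intrinsic component would reside, while the gluon side sits at extremely small $x$.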
Here we compare predictions of the $k_{T}$-factorization approach for the $g^*g^* \to c\bar c$ mechanism which is known to give a very good description of the LHC open charm data [@Maciula:2019izq], and predictions of the $g^*c \to gc$ mechanism (dashed) within the hybrid model. In both cases the charm production cross section starts to be dominated by the intrinsic charm component at very forward rapidities, *i.e.* $y \geq 7$. In this far-forward region, the transverse momentum distribution of charm quark is also dominated by the contributions of the intrinsic charm. The predicted enhancement of the charm cross section could certainly be examined by the FASER experiment dedicated to a measurement of forward neutrinos originating from semileptonic decays of $D$ mesons. The actual predictions for neutrinos will be presented elsewhere. ![ Predictions of the impact of the intrinsic charm component in charm quark production in different experiments. Here we explore kinematics relevant for the FASER experiment at the LHC and an exemplary experiment at the FCC. []{data-label="fig14"}](dsig_dy_hyb_faser.eps){width="100.00000%"} ![ Predictions of the impact of the intrinsic charm component in charm quark production in different experiments. Here we explore kinematics relevant for the FASER experiment at the LHC and an exemplary experiment at the FCC. []{data-label="fig14"}](dsig_dpT_hyb_faser.eps){width="100.00000%"} \ ![ Predictions of the impact of the intrinsic charm component in charm quark production in different experiments. Here we explore kinematics relevant for the FASER experiment at the LHC and an exemplary experiment at the FCC. []{data-label="fig14"}](dsig_dy_hyb_fcc.eps){width="100.00000%"} ![ Predictions of the impact of the intrinsic charm component in charm quark production in different experiments. Here we explore kinematics relevant for the FASER experiment at the LHC and an exemplary experiment at the FCC. 
[]{data-label="fig14"}](dsig_dpT_hyb_fcc.eps){width="100.00000%"} In addition, we also analysed the possibility of an experimental study of the intrinsic charm concept at lower energies. In Fig. \[fig15\] we show predictions for the fixed-target LHC and the SHIP experiment, at $\sqrt{s}=86.6$ and $27.4$ GeV, respectively (top and bottom panels). We observe that also at relatively small energies the intrinsic charm contributions could be identified experimentally. It seems that the already existing data set on open charm meson production in the fixed-target LHC mode [@Aaij:2018ogq] needs to have the intrinsic charm component included in the theoretical description. Similarly, our results suggest that the predictions of the tau-neutrino flux that could be measured in the SHIP experiment should include effects related to a possible intrinsic charm content of the proton. ![ Predictions of the impact of the intrinsic charm component in charm production in different experiments. Here we explore kinematics of the fixed-target mode LHCb and the kinematics relevant for the SHIP experiment. []{data-label="fig15"}](dsig_dy_hyb_lhcb.eps){width="100.00000%"} ![ Predictions of the impact of the intrinsic charm component in charm production in different experiments. Here we explore kinematics of the fixed-target mode LHCb and the kinematics relevant for the SHIP experiment. []{data-label="fig15"}](dsig_dpT_hyb_lhcb.eps){width="100.00000%"} \ ![ Predictions of the impact of the intrinsic charm component in charm production in different experiments. Here we explore kinematics of the fixed-target mode LHCb and the kinematics relevant for the SHIP experiment. []{data-label="fig15"}](dsig_dy_hyb_ship.eps){width="100.00000%"} ![ Predictions of the impact of the intrinsic charm component in charm production in different experiments. Here we explore kinematics of the fixed-target mode LHCb and the kinematics relevant for the SHIP experiment. 
[]{data-label="fig15"}](dsig_dpT_hyb_ship.eps){width="100.00000%"} Conclusions =========== In this paper we have discussed the effect of intrinsic charm in the proton on forward production of $c$ quark or $\bar c$ antiquark at different energies. Three different approaches: collinear, hybrid and $k_{T}$-factorization have been used with modern collinear and unintegrated parton distribution functions. The production mechanism of $c$-quarks and $\bar c$-antiquarks originating from intrinsic charm in the nucleon is concentrated in forward/backward directions, but details depend on collision energy. The absolute normalization strongly depends on the approach used. The leading-order (LO) collinear framework leads to the smallest cross section. The cross section becomes much bigger in the $k_{T}$-factorization or in the hybrid model which effectively include higher-order corrections. The next-to-leading (NLO) and even next-to-next-to-leading (NNLO) tree-level corrections are found to be very important here. Therefore, the $k_{T}$-factorization or the hybrid model will give stringent limits on the intrinsic charm which cannot be constrained at present from first principles. We have shown that in the collinear approach the LO calculations of the intrinsic charm component are insufficient. We have included the NLO and NNLO components at tree-level which were found to significantly contribute to the cross section. Working in the hybrid model or in the $k_{T}$-factorization approach we have shown that the effects related to the off-shellness of the incident partons (especially gluons) are large. In both cases higher-order corrections are effectively included already within the basic $gc \to gc$ mechanism. We have used different models for gluon unintegrated parton distribution functions (uPDFs) from the literature. We obtained different results for different gluon uPDFs. 
The forward charm production was recognized as a useful testing ground for the small-$x$ behaviour of the gluon uPDFs. We have shown in addition that the final results are also sensitive to the concept of gluon saturation in a proton. Unintegrated gluon densities derived from linear and non-linear evolution equations lead to quite different results. We have also performed leading-order calculations within the $k_T$-factorization approach where the basic process is either $g + c \to c$ or $c + g \to c$, as done for forward production of charm quarks. We have shown that the intrinsic charm component dominates over the standard pQCD (extrinsic) mechanism of $c\bar c$-pair production at forward (or far-forward) rapidities, starting from low-energy fixed-target experiments at $\sqrt{s}=27.4$ and $86.6$ GeV, through the LHC Run II nominal energy $\sqrt{s}=13$ TeV, and up to the energies relevant for the IceCube experiment ($\sqrt{s}=50$ TeV). The LHC experiments at low energies (fixed-target experiments) can provide valuable information already now. Future LHC experiments on $\nu_{\tau}$ neutrino production such as SHIP and FASER are an interesting alternative in the next few years. In the present study we intentionally limited ourselves to the production of charm quarks/antiquarks. The production of charmed mesons or baryons is currently under discussion and a new fragmentation scheme was proposed [@szczurek2020] very recently. We leave the predictions for production of charmed hadrons and their semileptonic decays for a separate study. However, the consequences for high-energy neutrino production have been briefly discussed in the context of the IceCube experiment and experiments proposed at the LHC (SHIP and FASER). Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Victor Goncalves for a discussion on IC. 
This study was partially supported by the Polish National Science Center grant UMO-2018/31/B/ST2/03537 and by the Center for Innovation and Transfer of Natural Sciences and Engineering Knowledge in Rzeszów. [100]{} S.J. Brodsky, P. Hoyer, C. Peterson and N. Sakai, Phys. Lett. [**B93**]{}, 451 (1980). V.D. Barger, F. Halzen and W.Y. Keung, Phys. Rev. [**D25**]{}, 112 (1982). R. Vogt and S.J. Brodsky, Nucl. Phys. [**B438**]{}, 261 (1995), [**478**]{}, 311 (1996). F.S. Navarra, M. Nielsen, C.A.A. Nunes and M. Teixeira, Phys. Rev. [**D54**]{}, 842 (1996). W. Melnitchouk and A.W. Thomas, Phys. Lett. [**B414**]{}, 134 (1997). F.M. Steffens, W. Melnitchouk and A.W. Thomas, Eur. Phys. J. [**C11**]{}, 673 (1999). F. Carvalho, F.O. Duraes, F.S. Navarra and M. Nielsen, Phys. Rev. Lett. [**86**]{} (2001) 5434. T.J. Hobbs, J.T. Londergan and W. Melnitchouk, Phys. Rev. [**D89**]{}, 074008 (2014). R.S. Sufian, T. Liu, A. Alexandru, S.J. Brodsky, G.F. de Teramond, H. G. Dosch, T. Draper, K.-F. Liu and Y.-B. Yang. arXiv:2003.01078 \[hep-lat\]. P.H. Beauchemin, V.A. Bednyakov, G.I. Lykasov, Y.Y. Stepanenko, Phys. Rev. [**D92**]{}, 0341014 (2015). S. Rostami, A. Khorramian, A. Aleedaneshvar and M. Goharpour, J. Phys. [**G43**]{}, 055001 (2016). A.V. Lipatov, G.I. Lykasov, Y.Y. Stepanenko and V.A. Bednyakov, Phys. Rev. [**D94**]{}, 053011 (2016). V.A. Bednyakov, S.J. Brodsky, A.V. Lipatov, G.I. Lykasov, M.A. Malyshev, J. Smiesko and S. Tokar, arXiv:1712.09096 \[hep-ph\]. S. Brodsky, A. Kusina, F. Lyonnet, I. Schienbein, H. Spiesberger and R. Vogt, Adv. High Energy Phys. **2015**, 231547 (2015). R. D. Ball *et al.* \[NNPDF\], J. High Energy Phys. **04**, 040 (2015). T. J. Hou, S. Dulat, J. Gao, M. Guzzi, J. Huston, P. Nadolsky, C. Schmidt, J. Winter, K. Xie and C. P. Yuan, J. High Energy Phys. **02**, 059 (2018). R. Enberg, M.H. Reno and I. Sarcevic, Phys. Rev. [**D78**]{}, 043005 (2008). R. Laha and S. Brodsky, Phys. Rev. [**D96**]{}, 123002 (2017), arXiv:1607.08240. A.V. 
Giannini, V.P. Goncalves and F.S. Navarra, Phys. Rev. [**D98**]{}, 014012 (2018). R. Maciu[ł]{}a and A. Szczurek, Phys. Rev. D **97**, no.7, 074001 (2018). W. Bai, M. Diwan, M. V. Garzelli, Y. S. Jeong and M. H. Reno, \[arXiv:2002.03012 \[hep-ph\]\]. W. Bai and M. H. Reno, J. High Energy Phys. **02**, 077 (2019). R. Aaij *et al.* \[LHCb\], Phys. Rev. Lett. **122**, no.13, 132002 (2019). R. Maciu[ł]{}a, \[arXiv:2003.05702 \[hep-ph\]\]. V.P. Goncalves, F.S. Navarra and T. Ulrich, Nucl. Phys. [**A842**]{} 59 (2010). F. Carvalho, A.V. Giannini, V.P. Goncalves and F.S. Navarra, Phys. Rev. [**D96**]{}, 094002 (2017). A. Edin and G. Ingelman, Phys. Lett. [**B432**]{}, 402 (1998). M. Gluck, E. Reya and A. Vogt, Z. Phys. C **67**, 433-448 (1995). J. Pumplin, H.L. Lai and W.K. Tung, Phys. Rev. [**D75**]{}, 054029 (2007). J. C. Collins, D. E. Soper and G. F. Sterman, Adv. Ser. Direct. High Energy Phys. **5**, 1-91 (1989). R. D. Ball *et al.* \[NNPDF\], Eur. Phys. J. C **76**, no.11, 647 (2016). L. Harland-Lang, A. Martin, P. Motylinski and R. Thorne, Eur. Phys. J. C **75**, no.5, 204 (2015). P. Jimenez-Delgado and E. Reya, Phys. Rev. D **89**, no.7, 074049 (2014). A. van Hameren, Comput. Phys. Commun. **224**, 371-380 (2018). T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen and P. Z. Skands, Comput. Phys. Commun. **191**, 159-177 (2015). M. Deak, F. Hautmann, H. Jung and K. Kutak, J. High Energy Phys. **09**, 121 (2009). K. Kutak and S. Sapeta, Phys. Rev. D **86**, 094043 (2012). P. Kotko, K. Kutak, C. Marquet, E. Petreska, S. Sapeta and A. van Hameren, J. High Energy Phys. **09**, 106 (2015). F. Hautmann and H. Jung, Nucl. Phys. B [**883**]{}, 1 (2014). K. Kutak, Phys. Rev. D [**91**]{}, no. 3, 034021 (2015). A. Bermudez Martinez, P. Connor, H. Jung, A. Lelek, R. Žlebčík, F. Hautmann and V. Radescu, Phys. Rev. D [**99**]{}, no. 7, 074008 (2019). G. Watt, A. D. Martin and M. G. Ryskin, Eur. Phys. J. 
C [**31**]{}, 73 (2003). A. van Hameren, private communication R. Maciu[ł]{}a and A. Szczurek, Phys. Rev. D [**100**]{}, no. 5, 054001 (2019). A. V. Lipatov, M. A. Malyshev and H. Jung, Phys. Rev. D [**101**]{}, no. 3, 034022 (2020). R. Maciu[ł]{}a, arXiv:2003.05702 \[hep-ph\]. S. Catani, M. Ciafaloni and F. Hautmann, Phys. Lett. B242 (1990) 97; Nucl. Phys. B366 (1991) 135; Phys. Lett. B307 (1993) 147.\ J.C. Collins and R.K. Ellis, Nucl. Phys. B360, 3 (1991).\ L.V. Gribov, E.M. Levin, and M.G. Ryskin, Phys. Rep. 100, 1 (1983);\ E.M. Levin, M.G. Ryskin, Yu.M. Shabelsky and A.G. Shuvaev, Sov. J. Nucl. Phys. 53, 657 (1991). M. Nefedov, V. Saleev and A. V. Shipilova, Phys. Rev. D **87**, no.9, 094030 (2013). M. Czech and A. Szczurek, J. Phys. [**G32**]{}, 1253 (2006). I. Babiarz, R. Pasechnik, W. Schäfer and A. Szczurek, J. High Energy Phys. [**02**]{} (2020) 037. I. Babiarz, R. Pasechnik, W. Schäfer and A. Szczurek, J. High Energy Phys. **06**, 101 (2020). P. Lebiedowicz, R. Maciu[ł]{}a and A. Szczurek, Phys. Lett. [**B806**]{}, 135475 (2020). R. Aaij et al. (LHCb collaboration), Phys. Lett.[**B718**]{}, 902 (2013). K. Kutak, Phys. Rev. [**D91**]{}, 034021 (2015). V.P. Goncalves, R. Maciu[ł]{}a, R. Pasechnik and A. Szczurek, Phys. Rev. [**D96**]{} (2017) 094026. V.P. Goncalves, R. Maciu[ł]{}a and A. Szczurek, Phys. Lett. [**B794**]{} (2019) 29. R. Maciu[ł]{}a, A. Szczurek, J. Zaremba and I. Babiarz, J. High Energy Phys. [**01**]{} (2020) 116. A. Szczurek, arXiv:2006.12918 \[hep-ph\]. [^1]: also at University of Rzeszów, PL-35-959 Rzeszów, Poland [^2]: Here and in the following we concentrate only on the forward production mechanisms (with charm quark having positive-rapidity), but, the formalism for symmetric backward configuration is the same. [^3]: Technically, this could be done properly only if the parton level calculations are supplemented with a parton shower but it goes beyond the scope of the present study.
--- abstract: 'We consider the problem of electing a leader among nodes in a highly dynamic network where the adversary has unbounded capacity to insert and remove nodes (including the leader) from the network and change connectivity at will. We present a randomized algorithm that (re)elects a leader in $O(D\log n)$ rounds with high probability, where $D$ is a bound on the dynamic diameter of the network and $n$ is the maximum number of nodes in the network at any point in time. We assume a model of broadcast-based communication where a node can send only $1$ message of $O(\log n)$ bits per round and is not aware of the receivers in advance. Thus, our results also apply to mobile wireless ad-hoc networks, improving over the optimal (for deterministic algorithms) $O(Dn)$ solution presented at FOMC 2011. We show that our algorithm is optimal by proving that [*any*]{} randomized Las Vegas algorithm takes at least $\Omega(D\log n)$ rounds to elect a leader with high probability, which shows that our algorithm yields the best possible (up to constants) termination time.' author: - 'John Augustine[^1]' - 'Tejas Kulkarni$^\ast$' - 'Paresh Nakhe$^\ast$' - 'Peter Robinson[^2]' bibliography: - 'papers.bib' - 'papers1.bib' - 'papers2.bib' - 'leader.bib' title: 'Robust Leader Election in a Fast-Changing World' --- [^1]: Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai, India. : [augustine@cse.iitm.ac.in, tejasvijaykulkarni@gmail.com, paresh.nakhe@gmail.com]{}. [^2]: Division of Mathematical Sciences, Nanyang Technological University, Singapore 637371. : [peter.robinson@ntu.edu.sg]{}
Preprint hep-ph/0006089 [Improved Conformal Mapping of the Borel Plane]{} U. D. Jentschura and G. Soff [*Institut für Theoretische Physik, TU Dresden, 01062 Dresden, Germany*]{}\ [**Email:**]{} jentschura@physik.tu-dresden.de, soff@physik.tu-dresden.de The conformal mapping of the Borel plane can be utilized for the analytic continuation of the Borel transform to the entire positive real semi-axis and is thus helpful in the resummation of divergent perturbation series in quantum field theory. We observe that the convergence can be accelerated by the application of Padé approximants to the Borel transform expressed as a function of the conformal variable, i.e. by a combination of the analytic continuation via conformal mapping and a subsequent numerical approximation by rational approximants. The method is primarily useful in those cases where the leading (but not sub-leading) large-order asymptotics of the perturbative coefficients are known. 11.15.Bt, 11.10.Jj General properties of perturbation theory;\ Asymptotic problems and properties The problem of the resummation of quantum field theoretic series is of obvious importance in view of the divergent, asymptotic character of the perturbative expansions [@LGZJ1990; @ZJ1996; @Fi1997]. The convergence can be accelerated when additional information is available about large-order asymptotics of the perturbative coefficients [@JeWeSo2000]. In the example cases discussed in [@JeWeSo2000], the location of several poles in the Borel plane, known from the leading and next-to-leading large-order asymptotics of the perturbative coefficients, is utilized in order to construct specialized resummation prescriptions. Here, we consider a particular perturbation series, investigated in [@BrKr1999], where only the [*leading*]{} large-order asymptotics of the perturbative coefficients are known to sufficient accuracy, and the subleading asymptotics have – not yet – been determined. 
Therefore, the location of only a single pole – the one closest to the origin – in the Borel plane is available. In this case, as discussed in [@CaFi1999; @CaFi2000], the (asymptotically optimal) conformal mapping of the Borel plane is an attractive method for the analytic continuation of the Borel transform beyond its circle of convergence and, to a certain extent, for accelerating the convergence of the Borel transforms. Here, we argue that the convergence of the transformation can be accelerated further when the Borel transforms, expressed as a function of the conformal variable which mediates the analytic continuation, are additionally convergence-accelerated by the application of Padé approximants. First we discuss, in general terms, the construction of the improved conformal mapping of the Borel plane which is used for the resummation of the perturbation series defined in Eqs. (\[gammaPhi4\]) and (\[gammaYukawa\]) below. The method uses as input data the numerical values of a finite number of perturbative coefficients and the leading large-order asymptotics of the perturbative coefficients, which can, under appropriate circumstances, be derived from an empirical investigation of a finite number of coefficients, as it has been done in [@BrKr1999]. We start from an asymptotic, divergent perturbative expansion of a physical observable $f(g)$ in powers of a coupling parameter $g$, $$\label{power} f(g) \sim \sum_{n=0}^{\infty} c_n\,g^n\,,$$ and we consider the generalized Borel transform of the $(1,\lambda)$-type (see Eq. 
(4) in [@JeWeSo2000]), $$\label{BorelTrans} f^{(\lambda)}_{\rm B}(u) \; \equiv \; f^{(1,\lambda)}_{\rm B}(u) \; = \; \sum_{n=0}^{\infty} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The full physical solution can be reconstructed from the divergent series (\[power\]) by evaluating the Laplace-Borel integral, which is defined as $$\label{BorelIntegral} f(g) = \frac{1}{g^\lambda} \, \int_0^\infty {\rm d}u \,u^{\lambda - 1} \, \exp\bigl(-u/g\bigr)\, f^{(\lambda)}_{\rm B}(u)\,.$$ The integration variable $u$ is referred to as the Borel variable. The integration is carried out either along the real axis or infinitesimally above or below it (if Padé approximants are used for the analytic continuation, modified integration contours have been proposed [@Je2000]). The most prominent issue in the theory of the Borel resummation is the construction of an analytic continuation for the Borel transform (\[BorelTrans\]) from a finite-order partial sum of the perturbation series (\[power\]), which we denote by $$\label{PartialSum} f^{(\lambda),m}_{\rm B}(u) = \sum_{n=0}^{m} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The analytic continuation can be accomplished using the direct application of Padé approximants to the partial sums of the Borel transform $f^{(\lambda),m}_{\rm B}(u)$ [@BrKr1999; @Je2000; @Raczka1991; @Pi1999] or by a conformal mapping [@SeZJ1979; @LGZJ1983; @GuKoSu1995; @CaFi1999; @CaFi2000]. We now assume that the [*leading*]{} large-order asymptotics of the perturbative coefficients $c_n$ defined in Eq. (\[power\]) is factorial, and that the coefficients display an alternating sign pattern. This indicates the existence of a singularity (branch point) along the negative real axis corresponding to the leading large-order growth of the perturbative coefficients, which we assume to be at $u=-1$. 
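The machinery of Eqs. (\[BorelTrans\])-(\[BorelIntegral\]) can be checked numerically on a toy series. The minimal Python sketch below (not part of the original analysis; all parameter choices are illustrative) takes $c_n = (-1)^n n!$ with $\lambda = 1$, for which the Borel transform sums in closed form to $1/(1+u)$ with its singularity at $u=-1$, and compares the Laplace-Borel integral with the optimally truncated partial sum of the divergent series:

```python
import numpy as np
from math import factorial

# Toy divergent series: c_n = (-1)^n n!.  With lambda = 1 the Borel
# transform of Eq. (BorelTrans) sums in closed form to f_B(u) = 1/(1 + u),
# with its singularity at u = -1 as assumed in the text.
g = 0.1

# Laplace-Borel integral, f(g) = (1/g) * int_0^inf exp(-u/g)/(1+u) du,
# evaluated by a simple trapezoidal rule (e^{-u/g} is negligible past u = 5)
u = np.linspace(0.0, 5.0, 200_001)
vals = np.exp(-u / g) / (1.0 + u)
f_borel = float(((vals[:-1] + vals[1:]) / 2).sum() * (u[1] - u[0]) / g)

# Optimally truncated partial sum of the divergent series (n up to ~1/g)
partial = sum((-1) ** n * factorial(n) * g ** n for n in range(11))

print(f_borel, partial)  # both close to 0.916
```

The agreement (to roughly $e^{-1/g}$) illustrates that the Borel integral reproduces the value the divergent series only approximates at optimal truncation.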
For Borel transforms which have only a single cut in the complex plane which extends from $u=-1$ to $u=-\infty$, the following conformal mapping has been recommended as optimal [@CaFi1999], $$\label{DefZ} z = z(u) = \frac{\sqrt{1+u}-1}{\sqrt{1+u}+1}\,.$$ Here, $z$ is referred to as the conformal variable. The cut Borel plane is mapped onto the unit circle by the conformal mapping (\[DefZ\]). We briefly mention that a large variety of similar conformal mappings have been discussed in the literature. It is worth noting that conformal mappings which are adapted to doubly-cut Borel planes have been discussed in [@CaFi1999; @CaFi2000]. We do not claim here that it would be impossible to construct conformal mappings which reflect the position of more than two renormalon poles or branch points in the complex plane. However, we stress that such a conformal mapping is likely to have a more complicated mathematical structure than, for example, the mapping defined in Eq. (27) in [@CaFi1999]. Using the alternative methods described in [@JeWeSo2000], poles (branch points) in the Borel plane corresponding to the subleading asymptotics can be incorporated easily, provided their position in the Borel plane is known. In a concrete example (see Table 1 in [@JeWeSo2000]), 14 poles in the Borel plane have been fixed in the denominator of the Padé approximant constructed according to Eqs. (53)–(55) in [@JeWeSo2000], and accelerated convergence of the transforms is observed. In contrast to the investigation [@JeWeSo2000], we assume here that only the [*leading*]{} large-order factorial asymptotics of the perturbative coefficients are known. We continue with the discussion of the conformal mapping (\[DefZ\]). It should be noted that for series whose leading singularity in the Borel plane is at $u = -u_0$ with $u_0 > 0$, an appropriate rescaling of the Borel variable $u \to |u_0|\, u$ is necessary on the right-hand side of Eq. (\[BorelIntegral\]).
Then, $f^{(\lambda)}_{\rm B}(|u_0|\,u)$ as a function of $u$ has its leading singularity at $u = -1$ (see also Eq. (41.57) in [@ZJ1996]). The Borel integration variable $u$ can be expressed as a function of $z$ as follows, $$\label{UasFuncOfZ} u(z) = \frac{4 \, z}{(z-1)^2}\,.$$ The $m$th partial sum of the Borel transform (\[PartialSum\]) can be rewritten, upon expansion of $u$ in powers of $z$, as $$\label{PartialSumConformal} f^{(\lambda),m}_{\rm B}(u) = f^{(\lambda),m}_{\rm B}\bigl(u(z)\bigr) = \sum_{n=0}^{m} C_n\,z^n + {\cal O}(z^{m+1})\,,$$ where the coefficients $C_n$ as a function of the $c_n$ are uniquely determined (see, e.g., Eqs. (36) and (37) of [@CaFi1999]). We define the partial sum of the Borel transform, expressed as a function of the conformal variable $z$, as $$f'^{(\lambda),m}_{\rm B}(z) = \sum_{n=0}^{m} C_n\,z^n\,.$$ In a previous investigation [@CaFi1999], Caprini and Fischer evaluate the following transforms, $$\label{CaFiTrans} {\cal T}'_m f(g) = \frac{1}{g^\lambda}\, \int_0^\infty {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\, f'^{(\lambda),m}_{\rm B}(z(u))\,.$$ Caprini and Fischer [@CaFi1999] observe the apparent numerical convergence with increasing $m$. The limit as $m\to\infty$, provided it exists, is then assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}'_m f(g)\,.$$ We do not consider the question of the existence of this limit here (for an outline of questions related to these issues we refer to [@CaFi2000]). In the absence of further information on the analyticity domain of the Borel transform (\[BorelTrans\]), we cannot necessarily conclude that $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ as a function of $z$ is analytic inside the unit circle of the complex $z$-plane, or that, for example, the conditions of Theorem 5.2.1 of [@BaGr1996] are fulfilled. Therefore, we propose a modification of the transforms (\[CaFiTrans\]).
In particular, we advocate the evaluation of (lower-diagonal) Padé approximants [@BaGr1996; @BeOr1978] to the function $f'^{(\lambda),m}_{\rm B}(z)$, expressed as a function of $z$, $$\label{ConformalPade} f''^{(\lambda),m}_{\rm B}(z) = \bigg[ [\mkern - 2.5 mu [m/2] \mkern - 2.5 mu ] \bigg/ [\mkern - 2.5 mu [(m+1)/2] \mkern - 2.5 mu ] \bigg]_{f'^{(\lambda),m}_{\rm B}}\!\!\!\left(z\right)\,.$$ We define the following transforms, $$\label{AccelTrans} {\cal T}''_m f(g) = \frac{1}{g^\lambda}\, \int_{C_j} {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\, f''^{(\lambda),m}_{\rm B}\bigl(z(u)\bigr)$$ where the integration contours $C_j$ ($j=-1,0,1$) have been defined in [@Je2000]. These integration contours have been shown to provide the physically correct analytic continuation of resummed perturbation series for those cases where the evaluation of the standard Laplace-Borel integral (\[BorelIntegral\]) is impossible due to an insufficient analyticity domain of the integrand (possibly due to multiple branch cuts) or due to spurious singularities in view of the finite order of the Padé approximations defined in (\[ConformalPade\]). We should mention potential complications due to multi-instanton contributions, as discussed for example in Ch. 43 of [@ZJ1996] (these are not encountered in the current investigation). In this letter, we use exclusively the contour $C_0$, which is defined as the half sum of the contours $C_{-1}$ and $C_{+1}$ displayed in Fig. 1 in [@Je2000].
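The two-step procedure of Eqs. (\[PartialSumConformal\]) and (\[ConformalPade\]) can be illustrated on a toy Borel transform with a single cut from $u=-1$ to $-\infty$. The Python sketch below is illustrative only: it uses $f_B(u) = \ln(1+u)/u$ rather than the series of [@BrKr1999], composes the truncated $u$-series with the conformal map, builds a Padé approximant in $z$ by solving the standard linear system for the denominator, and evaluates at $u=3$, well outside the circle of convergence of the original series:

```python
import numpy as np

M = 20  # truncation order of the input series

# Toy Borel transform with a single cut from u = -1 to -infinity:
# f_B(u) = ln(1+u)/u = sum_n (-1)^n u^n / (n+1)
c = np.array([(-1.0) ** n / (n + 1) for n in range(M + 1)])

# Power series of the conformal map u(z) = 4z/(1-z)^2 = sum_{k>=1} 4k z^k
u_ser = np.zeros(M + 1)
u_ser[1:] = 4.0 * np.arange(1, M + 1)

# Re-expand f_B(u(z)) in powers of z: the coefficients C_n of the text
C = np.zeros(M + 1)
upow = np.zeros(M + 1)
upow[0] = 1.0                                 # (u(z))^0
for n in range(M + 1):
    C += c[n] * upow
    upow = np.convolve(upow, u_ser)[:M + 1]   # truncate mod z^(M+1)

# Pade [p/q] in z: denominator coefficients b_j from the linear system
# sum_{j=0}^{q} b_j C_{p+k-j} = 0  (k = 1..q, b_0 = 1)
p = q = 6
A = np.array([[C[p + k - j] for j in range(1, q + 1)] for k in range(1, q + 1)])
b = np.linalg.solve(A, -C[p + 1:p + q + 1])
den = np.concatenate(([1.0], b))
num = np.array([sum(den[j] * C[i - j] for j in range(min(i, q) + 1))
                for i in range(p + 1)])

u0 = 3.0
z0 = (np.sqrt(1 + u0) - 1) / (np.sqrt(1 + u0) + 1)    # = 1/3
direct = np.polyval(c[::-1], u0)                      # u-series: diverges
plain = np.polyval(C[::-1], z0)                       # z-series, as in Eq. (CaFiTrans)
pade = np.polyval(num[::-1], z0) / np.polyval(den[::-1], z0)
exact = np.log(1 + u0) / u0                           # = ln(4)/3, about 0.4621
```

Here the plain $z$-series and its Padé approximant both reproduce the exact value, while the truncated $u$-series has long since diverged; the Padé step achieves this from far fewer coefficients ($C_0,\dots,C_{12}$).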
With increasing $m$, the limit $m\to\infty$, provided it exists, is then again assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}''_m f(g)\,.$$ Because we take advantage of the special integration contours $C_j$, analyticity of the Borel transform $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ inside the unit circle of the complex $z$-plane is not required, and additional acceleration of the convergence is mediated by employing Padé approximants in the conformal variable $z$.

\[table1\] Transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$ evaluated according to Eq. (\[AccelTrans\]):

  $m$    $g=5.0$            $g=5.5$            $g=6.0$            $g=10.0$
  28     $-0.501~565~232$   $-0.538~352~234$   $-0.573~969~740$   $-0.827~506~173$
  29     $-0.501~565~232$   $-0.538~352~233$   $-0.573~969~738$   $-0.827~506~143$
  30     $-0.501~565~231$   $-0.538~352~233$   $-0.573~969~738$   $-0.827~506~136$

\[table2\] Transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$ evaluated according to Eq. (\[AccelTrans\]):

  $m$    $g=5.0$            $g=5.5$            $g=6.0$            $g=5.69932\dots$
  28     $-1.669~071~213$   $-1.800~550~588$   $-1.928~740~624$   $-1.852~027~809$
  29     $-1.669~071~214$   $-1.800~550~589$   $-1.928~740~626$   $-1.852~027~810$
  30     $-1.669~071~214$   $-1.800~550~589$   $-1.928~740~625$   $-1.852~027~810$

We consider the resummation of two particular perturbation series discussed in [@BrKr1999] for the anomalous dimension $\gamma$ function of the $\phi^3$ theory in 6 dimensions and the Yukawa coupling in
4 dimensions. The perturbation series for the $\phi^3$ theory is given in Eq. (16) in [@BrKr1999], $$\label{gammaPhi4} \gamma_{\rm hopf}(g) \sim \sum_{n=1}^{\infty} (-1)^n \, \frac{G_n}{6^{2 n - 1}} \, g^n\,,$$ where the coefficients $G_n$ are given in Table 1 in [@BrKr1999] for $n=1,\dots,30$ (the $G_n$ are real and positive). We denote the coupling parameter $a$ used in [@BrKr1999] as $g$; this is done in order to ensure compatibility with the general power series given in Eq. (\[power\]). Empirically, Broadhurst and Kreimer derive the large-order asymptotics $$G_n \sim {\rm const.} \; \times \; 12^{n-1} \, \Gamma(n+2)\,, \qquad n\to\infty\,,$$ by investigating the explicit numerical values of the coefficients $G_1,\dots,G_{30}$. The leading asymptotics of the perturbative coefficients $c_n$ are therefore (up to a constant prefactor) $$\label{LeadingPhi4} c_n \sim (-1)^n \frac{\Gamma(n+2)}{3^n}\,, \qquad n\to\infty\,.$$ This implies that the $\lambda$-parameter in the Borel transform (\[BorelTrans\]) should be set to $\lambda=2$ (see also the notion of an asymptotically optimized Borel transform discussed in [@JeWeSo2000]). In view of Eq. (\[LeadingPhi4\]), the pole closest to the origin of the Borel transform (\[BorelTrans\]) is expected at $$u = u^{\rm hopf}_0 = -3\,,$$ and a rescaling of the Borel variable $u \to 3\,u$ in Eq. (\[BorelIntegral\]) then leads to an expression to which the method defined in Eqs. (\[power\])–(\[AccelTrans\]) can be applied directly. For the Yukawa coupling, the $\gamma$-function reads $$\label{gammaYukawa} {\tilde \gamma}_{\rm hopf}(g) \sim \sum_{n=1}^{\infty} (-1)^n \, \frac{{\tilde G}_n}{2^{2 n - 1}} \, g^n\,,$$ where the ${\tilde G}_n$ are given in Table 2 in [@BrKr1999] for $n=1,\dots,30$. Empirically, i.e. 
from an investigation of the numerical values of ${\tilde G}_1,\dots,{\tilde G}_{30}$, the following factorial growth in large order is derived [@BrKr1999], $${\tilde G}_n \sim {\rm const.'} \; \times \; 2^{n-1} \, \Gamma(n+1/2)\,, \qquad n\to\infty\,.$$ This leads to the following asymptotics for the perturbative coefficients (up to a constant prefactor), $$c_n \sim (-1)^n \frac{\Gamma(n+1/2)}{2^n} \,, \qquad n\to\infty\,.$$ This implies that an asymptotically optimal choice [@JeWeSo2000] for the $\lambda$-parameter in (\[BorelTrans\]) is $\lambda=1/2$. The first pole of the Borel transform (\[BorelTrans\]) is therefore expected at $$u = {\tilde u}^{\rm hopf}_0 = -2\,.$$ A rescaling of the Borel variable according to $u \to 2\,u$ in (\[BorelIntegral\]) enables the application of the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]). In Table \[table1\], numerical values for the transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$ are given, which have been evaluated according to Eq. (\[AccelTrans\]). The transformation order is in the range $m=28,~29,~30$, and we consider coupling parameters $g=5.0,~5.5,~6.0$ and $g=10.0$. The numerical values of the transforms display apparent convergence to about 9 significant figures for $g \leq 6.0$ and to about 7 figures for $g=10.0$. In Table \[table2\], numerical values for the transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[AccelTrans\]) are shown in the range $m=28,~29,~30$ for (large) coupling strengths $g=5.0,~5.5,~6.0$. Additionally, the value $g = 30^2/(4\,\pi)^2 = 5.69932\dots$ is considered as a special case (as was done in [@BrKr1999]). Again, the numerical values of the transforms display apparent convergence to about 9 significant figures. At large coupling $g = 12.0$, the apparent convergence of the transforms suggests the following values: $\gamma_{\rm hopf}(12.0) = -0.939\,114\,3(2)$ and ${\tilde \gamma}_{\rm hopf}(12.0) = -3.287\,176\,9(2)$.
The numerical results for the Yukawa case, i.e. for the function ${\tilde \gamma}_{\rm hopf}$, have recently been confirmed by an improved analytic, nonperturbative investigation [@BrKr2000prep] which extends the perturbative calculation [@BrKr1999]. We note that the transforms ${\cal T}'_m \gamma_{\rm hopf}(g)$ and ${\cal T}'_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[CaFiTrans\]), i.e. by the unmodified conformal mapping, typically exhibit apparent convergence to 5–6 significant figures in the transformation order $m=28,~29,~30$ and at large coupling $g \geq 5$. Specifically, the numerical values for $g=5.0$ are $$\begin{aligned} {\cal T}'_{28} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~567~294\,, \nonumber\\[2ex] {\cal T}'_{29} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~564~509\,, \nonumber\\[2ex] {\cal T}'_{30} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~563~626\,. \nonumber\end{aligned}$$ These results, when compared to the data in Table \[table1\], exemplify the acceleration of the convergence by the additional Padé approximation of the Borel transform [*expressed as a function of the conformal variable*]{} \[see Eq. (\[ConformalPade\])\]. It is not claimed here that the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]) necessarily provides the fastest possible rate of convergence for the perturbation series defined in Eq. (\[gammaPhi4\]) and (\[gammaYukawa\]). Further improvements should be feasible, especially if particular properties of the input series are known and exploited (see in part the methods described in [@JeWeSo2000]). We also note possible improvements based on a large-coupling expansion [@We1996d], in particular for excessively large values of the coupling parameter $g$, or methods based on order-dependent mappings (see [@SeZJ1979; @LGZJ1983] or the discussion following Eq. (41.67) in [@ZJ1996]). 
The conformal mapping [@CaFi1999; @CaFi2000] is capable of accomplishing the analytic continuation of the Borel transform (\[BorelTrans\]) beyond the circle of convergence. Padé approximants, applied directly to the partial sums of the Borel transform (\[PartialSum\]), provide an alternative to this method [@Raczka1991; @Pi1999; @BrKr1999; @Je2000; @JeWeSo2000]. Improved rates of convergence can be achieved when the convergence of the transforms obtained by conformal mapping in Eq. (\[PartialSumConformal\]) is accelerated by evaluating Padé approximants as in Eq. (\[ConformalPade\]), and conditions on analyticity domains can be relaxed in a favorable way when these methods are combined with the integration contours from Ref. [@Je2000]. Numerical results for the resummed values of the perturbation series (\[gammaPhi4\]) and (\[gammaYukawa\]) are provided in the Tables \[table1\] and \[table2\]. By the improved conformal mapping and other optimized resummation techniques (see, e.g., the methods introduced in Ref. [@JeWeSo2000]) the applicability of perturbative (small-coupling) expansions can be generalized to the regime of large coupling and still lead to results of relatively high accuracy.\ U.J. acknowledges helpful conversations with E. J. Weniger, I. Nándori, S. Roether and P. J. Mohr. G.S. acknowledges continued support from BMBF, DFG and GSI. [10]{} J. C. LeGuillou and J. Zinn-Justin, [*Large-Order Behaviour of Perturbation Theory*]{} (North-Holland, Amsterdam, 1990). J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{}, 3rd ed. (Clarendon Press, Oxford, 1996). J. Fischer, Int. J. Mod. Phys. A [**12**]{}, 3625 (1997). U. D. Jentschura, E. Weniger, and G. Soff, Asymptotic Improvement of Resummation and Perturbative Predictions, Los Alamos preprint hep-ph/0005198, submitted. D. Broadhurst and D. Kreimer, Phys. Lett. B [**475**]{}, 63 (2000). I. Caprini and J. Fischer, Phys. Rev. D [**60**]{}, 054014 (1999). I. Caprini and J. 
Fischer, Convergence of the expansion of the Laplace-Borel integral in perturbative QCD improved by conformal mapping, Los Alamos preprint hep-ph/0002016. U. D. Jentschura, Resummation of Nonalternating Divergent Perturbative Expansions, Los Alamos preprint hep-ph/0001135, Phys. Rev. D (in press). P. A. Raczka, Phys. Rev. D [**43**]{}, R9 (1991). M. Pindor, Padé Approximants and Borel Summation for QCD Perturbation Series, Los Alamos preprint hep-th/9903151. R. Seznec and J. Zinn-Justin, J. Math. Phys. [**20**]{}, 1398 (1979). J. C. L. Guillou and J. Zinn-Justin, Ann. Phys. (N. Y.) [**147**]{}, 57 (1983). R. Guida, K. Konishi, and H. Suzuki, Ann. Phys. (N. Y.) [**241**]{}, 152 (1995). D. J. Broadhurst, P. A. Baikov, V. A. Ilyin, J. Fleischer, O. V. Tarasov, and V. A. Smirnov, Phys. Lett. B [**329**]{}, 103 (1994). G. Altarelli, P. Nason, and G. Ridolfi, Z. Phys. C [**68**]{}, 257 (1995). D. E. Soper and L. R. Surguladze, Phys. Rev. D [**54**]{}, 4566 (1996). K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Phys. Lett. B [**371**]{}, 93 (1996). K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Nucl. Phys. B [**482**]{}, 213 (1996). K. G. Chetyrkin, R. Harlander, and M. Steinhauser, Phys. Rev. D [**58**]{}, 014012 (1998). G. A. Baker and P. Graves-Morris, [*Padé approximants*]{}, 2nd ed. (Cambridge University Press, Cambridge, 1996). C. M. Bender and S. A. Orszag, [*Advanced Mathematical Methods for Scientists and Engineers*]{} (McGraw-Hill, New York, NY, 1978). D. Broadhurst and D. Kreimer, in preparation (2000). E. J. Weniger, Phys. Rev. Lett. [**77**]{}, 2859 (1996).
--- abstract: 'We investigate theoretically the influence of laser phase noise on the cooling and heating of a generic cavity optomechanical system. We derive the back-action damping and heating rates and the mechanical frequency shift of the radiation pressure-driven oscillating mirror, and derive the minimum phonon occupation number for small laser linewidths. We find that in practice laser phase noise does not pose serious limitations to ground state cooling. We then consider the effects of laser phase noise in a parametric cavity driving scheme that minimizes the back-action heating of one of the quadratures of the mechanical oscillator motion. Laser linewidths narrow compared to the decay rate of the cavity field will not pose any problems in an experimental setting, but broader linewidths limit the practicality of this back-action evasion method.' author: - 'Gregory A. Phelps' - Pierre Meystre title: Laser phase noise effects on the dynamics of optomechanical resonators --- Introduction ============ The emerging field of cavity optomechanics [@KippenbergVahala2007] is witnessing rapid and remarkable progress, culminating recently in the cooling of micromechanical cantilevers to the ground state of motion [@OConnell2010]. With the prospect of a broad variety of systems reaching that milestone in the near future, the emphasis of much current research is now shifting to “beyond ground state" physics. Because cavity optomechanics is largely driven by the double goal of developing force sensors of extreme sensitivity and to investigate quantum effects in nanoscale (or larger) systems, a major near-term goal of that program involves the manipulation and control of the quantum state of these systems. 
Examples of particular interest include the preparation of quantum states, such as squeezed states, that allow us to circumvent the standard quantum limit, the generation of non-classical, macroscopically occupied phononic fields such as Fock states with large occupation number, and the realization of macroscopic quantum superpositions [@Armour2002; @Marshall2003]. Quantum entanglement between two or more mechanical oscillators, or between mechanical oscillators and optical fields, is another goal with much promise for quantum metrology [@Helmerson2008]. In all of these situations, dissipation and decoherence are, of course, major obstacles that need to be understood and brought under control. Most optomechanical systems are comprised of a mechanical oscillator attached to a support that is either at room temperature or in a cryostat environment. In such systems, clamping losses are usually the dominant source of dissipation and decoherence, and major efforts are underway to control and minimize these losses. One approach that is currently receiving much attention is the use of “all-optical" optomechanical systems comprised for instance of optically levitating micro-mirrors [@Swati2010] or of dielectric micro-spheres [@RomeroIsart2010; @Chang2010]. The remarkable isolation, extremely long mechanical coherence times, high sensitivity to forces and displacements, as well as the ability to generate non-classical light and phononic fields in such systems are particularly promising features of these systems. In such situations, though, laser fluctuations, which are otherwise a minor concern when compared to clamping noise, become a major issue, perhaps ‘the’ major issue. 
The effect of laser phase noise in the cooling and coherent evolution of optomechanical systems has recently been studied in much detail by Rabl [*et al.*]{} [@Rabl2009], who concluded that while laser noise does pose a challenge to ground state cooling and the coherent transfer of single excitations between the optical cavity and the mechanical resonator, it is not a stringent limitation, in contrast to earlier predictions [@Diosi2008], see also Ref. [@Yin2009]. The present paper expands on these results to consider not just the cooling regime, but also the regime of parametric instability – or, more precisely, mechanical amplification and regenerative oscillations [@instability] – that can be reached for laser fields blue-detuned from the cavity resonance. It is known that this instability can lead to self-sustained oscillations and phononic lasing [@Vahala2009; @Grudinin2010; @Braginsky1980]. As such, this regime is particularly promising for the “beyond ground state” program, as it may result, when combined with phononic analogs of cavity QED, in the generation of non-classical phononic fields. We also consider the effects of laser noise on the parametric driving of the oscillator, a situation that may lead to back-action evading measurements of one quadrature of mechanical motion, and the possibility of generating a squeezed state of motion [@Clerk2008]. The paper is organized as follows: Section II introduces our model and establishes the notation. Section III discusses the effects of laser phase noise on back-action cooling and the optical spring effect, within a classical description of both the mirror motion and the intracavity light field. It then turns to a quantum description of the mirror motion to evaluate the minimum mean phonon number in the red-detuned driving regime. It also comments on the unstable blue-detuned regime.
Section IV discusses the parametric driving of the mechanical oscillator and evaluates the influence of a finite laser linewidth on the heating of the out-of-phase quadrature. Finally, Section V is a summary and conclusion. Model system ============ We consider a generic cavity optomechanical system modeled as a Fabry-P[é]{}rot cavity with one fixed input mirror and a harmonically bound movable end mirror connected to a support, see Fig. 1. An incident laser beam of carrier frequency $\omega_\ell$, classical field amplitude $E(t)$ and power $P$ provides the desired radiation pressure to achieve cooling, an instability, or squeezing of the center-of-mass motion of the mirror. At the simplest level we describe the optical field inside the Fabry-P[é]{}rot as a single-mode field, coupled to the center-of-mass (COM) mode of motion of the moving mirror of oscillating frequency $\Omega$ and effective mass $M$ by the usual cavity optomechanical coupling. This system is described by the Hamiltonian [@Law1995] $$\begin{aligned} \hat{H}&=& \hbar\Omega \hat{a}^\dagger \hat{a} +\hbar\omega_c \hat{b}^\dagger \hat{b} - \hbar g_0 \left(\hat{b}^\dagger \hat{b}-\langle\hat{b}^\dagger \hat{b}\rangle\right) \left(\hat{a}^\dagger+\hat{a}\right) \nonumber \\ &+&i\hbar\left[\hat{\eta}^\dagger(t) \hat{b}-\hat{b}^\dagger\hat{\eta}(t)\right] + \hat{H}_{\Gamma}+\hat{H}_{\kappa} \label{H}\end{aligned}$$ where $\hat{H}_{\Gamma}$ and $\hat{H}_{\kappa}$ describe the coupling of the mirror COM mode and the cavity field to reservoirs and account for dissipation at rates $\Gamma$ and $\kappa$, respectively. The bosonic creation and annihilation operators $\hat{a}^\dagger$ and $\hat{a}$ describe the COM phononic mode and $\hat{b}^\dagger$ and $\hat{b}$ describe the cavity field mode of frequency $\omega_c$.
The optomechanical coupling coefficient is $g_0 = (\omega_c/L) x_{\rm zpt}$, where $x_{\rm zpt}= [\hbar/2M\Omega]^{1/2}$ is the ground state position uncertainty of the mechanical oscillator and $L$ is the equilibrium length of the Fabry-P[é]{}rot. The optical driving rate $\hat{\eta}(t)$ of the intracavity field is given by [@Giovannetti2001] $$\hat{\eta}(t)= \sqrt{\frac{c \epsilon_0 \sigma \kappa}{\hbar \omega_\ell}} E(t) e^{-i \omega_\ell t+i\phi(t)}+\sqrt{\kappa} \hat{d}_{\rm in}(t)e^{-i\omega_\ell t},$$ where $\sigma$ is the area of the incident beam and $\kappa$ the intrinsic cavity loss rate. Laser phase noise can be accounted for by a random phase $\phi(t)$ characterized in the case of a Lorentzian linewidth by the two-time correlation function $$\label{noise} \langle\dot{\phi}(t)\rangle_{\rm av} = 0, \mbox{ } \langle\dot{\phi}(t)\dot{\phi}(s)\rangle_{\rm av} = \sqrt{2 \gamma} \delta(t-s),$$ where $\langle \rangle_{\rm av}$ denotes the classical ensemble average. The bosonic noise operator $\hat{d}_{\rm in}(t)$, which accounts for quantum fluctuations of the classical laser field, satisfies the two-time correlations functions $$\langle\hat{d}_{\rm in}^\dagger (t) \hat{d}_{\rm in}(s)\rangle = 0, \mbox{ } \langle\hat{d}_{\rm in} (t) \hat{d}_{\rm in}^\dagger(s)\rangle = \delta(t-s). \label{corr_din}$$ From Eq. (\[H\]) one readily obtains the Langevin equations of motion for the cavity field ($\hat{b} \rightarrow \hat{b}e^{i\omega_\ell t}$) and COM operators $$\begin{aligned} \dot{\hat{a}} &=& \left[- i \Omega-\frac{\Gamma}{2}\right] \hat{a}-\sqrt{\Gamma}\hat{a}_{\rm in}(t)+i g_{\rm 0} \left(\hat{b}^\dagger \hat{b} -\langle \hat{b}^\dagger \hat{b}\rangle \right) \\ \dot{\hat{b}} &=&\left[ i\Delta-\frac{\kappa}{2}\right] \hat{b}-\hat{\eta}(t)e^{i \omega_\ell t}+i g_{\rm 0} \hat{b} \left(\hat{a}^\dagger+\hat{a}\right),\end{aligned}$$ where $\Delta = \omega_\ell-\omega_c$ is the detuning from cavity resonance. 
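Taken at face value, the correlator of Eq. (\[noise\]) makes $\phi(t)$ a Wiener process with variance growing as $\sqrt{2\gamma}\,t$ in these units, so the Gaussian phase diffusion gives a field coherence $\langle e^{i\phi(t)}\rangle_{\rm av} = e^{-\sqrt{2\gamma}\,t/2}$. A short Monte-Carlo sketch (illustrative only; the parameter values are arbitrary) checks both statements:

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 0.5                      # linewidth parameter, arbitrary units
dt, n_steps, n_traj = 1e-3, 1000, 20_000
T = n_steps * dt

# Eq. (noise): <phidot(t) phidot(s)> = sqrt(2*gamma) * delta(t - s),
# so phi(t) is a random walk with Var[phi(T)] = sqrt(2*gamma) * T
dphi = np.sqrt(np.sqrt(2.0 * gamma) * dt) * rng.standard_normal((n_traj, n_steps))
phi_T = dphi.sum(axis=1)

var_est = phi_T.var()
var_th = np.sqrt(2.0 * gamma) * T             # = 1.0 for these parameters

# Gaussian phase diffusion: |<exp(i phi(T))>| = exp(-Var[phi(T)]/2)
coh_est = np.abs(np.exp(1j * phi_T).mean())
coh_th = np.exp(-var_th / 2.0)
```

The sampled variance and coherence reproduce the analytic values to within the statistical error of the ensemble.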
From standard input-output formalism [@Walls1994], the thermal input term $\hat{a}_{\rm in}(t)$ obeys the two-point correlations $$\begin{aligned} \langle\hat{a}_{\rm in}^\dagger (t) \hat{a}_{\rm in}(s)\rangle &=& n_M \delta(t-s) \nonumber \\ \langle\hat{a}_{\rm in}(t) \hat{a}_{\rm in}^\dagger(s)\rangle &=& (n_M+1)\delta(t-s), \label{corr_ain}\end{aligned}$$ where $n_M = k_b T_{\rm eff}/\hbar \Omega$ is the thermal occupation number of an oscillator of mechanical frequency $\Omega$ coupled to a thermal reservoir at temperature $T_{\rm eff}$. With $\hat{b} = \bar{b}+\hat{d}$, where $\bar{b}$ is the classical part of the cavity field and $|\bar{b}|^2 \gg \langle \hat{d}^\dagger \hat{d} \rangle$, we easily obtain the linearized Langevin equations of motion $$\begin{aligned} \dot{\hat{a}} &\approx& \left[- i \Omega-\frac{\Gamma}{2}\right] \hat{a}-\sqrt{\Gamma}\hat{a}_{\rm in}+i g_{\rm 0} \left(\bar{b} \hat{d}^\dagger+\bar{b}^* \hat{d} \right) \label{LinearA} \\ \dot{\hat{d}} &\approx& \left[ i\Delta-\frac{\kappa}{2}\right] \hat{d}-\sqrt{\kappa}\hat{d}_{\rm in}(t)+i g_{\rm 0} \bar{b} \left(\hat{a}^\dagger+\hat{a}\right), \label{LinearD}\end{aligned}$$ where the classical amplitude $\bar{b}$ obeys the equation of motion $$\dot{\bar{b}} = \left[i \Delta -\frac{\kappa}{2}\right] \bar{b}-\sqrt{\frac{c \epsilon_0 \sigma \kappa}{\hbar \omega_\ell}} E(t) e^{i\phi(t)}. \label{LinearB}$$ Single frequency driving ======================== Back-action cooling and optical spring effect --------------------------------------------- It is well known that a driving laser tuned to the red side of the cavity resonance results in an increase in cavity damping and a concomitant cooling of the COM motion, while at the same time reducing the mirror oscillator frequency [@KippenbergVahala2007], the optical spring effect. 
To derive the radiation-pressure induced corrections to the damping and frequency shift in the presence of a finite laser linewidth we introduce the COM position and momentum operators in the familiar way as $$\begin{aligned} \hat{x} &=& x_{\rm zpt}(\hat{a}+\hat{a}^\dagger),\nonumber \\ \hat p&=&[i\hbar/(2 x_{\rm zpt})](\hat a^\dagger - \hat a )\end{aligned}$$ and the scaled field mode operator $$\hat \beta = \sqrt{\hbar \omega_c} \hat b.$$ We consider first the situation where the mirror motion and the intracavity light field can both be described classically, $\hat{x} \rightarrow x$, $\hat p \rightarrow p$, $\hat \beta \rightarrow \beta$. With Eqs. (5) and (6), the Langevin equations of motion describing the mirror motion and the intracavity field are then $$\begin{aligned} \label{classical_langevin} &&\ddot{x}+\Gamma \dot{x}+\Omega^2 x = \frac{\left|\beta\left(t\right)\right|^2}{M L}-\frac{\left|\beta_0\right|^2}{M L}+\sqrt{\frac{2 k_b T_{\rm eff} \Gamma}{M}} \nu\left(t\right), \nonumber \\ && \dot{\beta} = \left[i [\Delta+g_0 (x/x_{\rm zpt})]-\frac{1}{2}\kappa\right]\beta+\sqrt{\kappa P} e^{i \phi\left(t\right)},\end{aligned}$$ where $P= c \epsilon_0 \sigma |E|^2$ is the driving laser power, $\nu(t)$ is a Gaussian noise process of zero mean, $\langle \nu(t)\nu(s)\rangle_{\rm av} = \delta(t-s)$, and $|\beta_0|^2$ is the mean intracavity field energy, given by $$\left |\beta_0\right |^2 = P \frac{4(2\gamma+\kappa)}{(2\gamma+\kappa)^2+4\Delta^2}. \label{beta0}$$ For a finite laser linewidth, $\gamma$, the optical damping coefficient becomes $$\label{Gamma opt} \Gamma_{\rm opt} = P \left(\frac{\omega_c\kappa}{\Omega M L^2}\right) \frac{8\left[A_--A_+\right]}{\left[\left(2\gamma+\kappa\right)^2+4\Delta^2\right]},$$ where we have assumed $\Gamma+\Gamma_{\rm opt} \ll \kappa$. 
Here, $$\label{A} A_{\pm} = \frac{\left(\gamma +\kappa\right)\left(2\gamma +\kappa\right)^2 +2\gamma \left(\left(\Delta\mp\Omega\right)^2 +\Delta^2\right)+\kappa\Omega^2}{\left[\left(2\gamma +\kappa\right)^2 +4\left(\Delta\mp\Omega\right)^2\right]\left(\kappa^2+\Omega^2\right)}.$$ The details of the derivation are given in Appendix A. The expression (\[Gamma opt\]) reduces to the results of Ref. [@KippenbergVahala2007] for $\gamma \rightarrow 0$, as it should. Similarly, the optically induced shift in the mirror COM frequency becomes $$\label{omega} \Delta \Omega_{\rm opt} = -P \left(\frac{\omega_{c}\kappa}{\Omega^2 M L^2}\right) \frac{2\left[B_+-B_-\right]}{\left[\left(2\gamma+\kappa\right)^2 +4\Delta^2\right]\left(\kappa^2+\Omega^2\right)},$$ where $$\label{B} B_\pm = \frac{\kappa\left(2\gamma+\kappa\right)^3+\kappa^2\left(2\Delta\pm\Omega\right)^2 +\left(8\gamma\Delta\kappa+4\Delta\Omega^2\right)\left(\Delta\pm\Omega\right) -4\gamma^2\Omega^2}{\left(2\gamma+\kappa\right)^2+4\left(\Delta\pm\Omega\right)^2}.$$ It is well known that for $\gamma = 0$ back-action damping is optimized for a laser red-detuned from the cavity resonance by the COM oscillation frequency, $\Delta = -\Omega$. As would be intuitively expected, a finite laser linewidth decreases $\Gamma_{\rm opt}$ for this optimal detuning. Somewhat surprisingly, though, an increase in laser linewidth can also result in an [*increased*]{} cooling for a small range of detunings $\Delta \neq -\Omega$, see Fig. 2. One can gain an intuitive feeling for this unexpected behavior by first considering the coefficients $A_\pm(\Delta,\gamma)$ and recalling how they contribute to either cold damping or to a possible instability. Figure 3 shows $A_-(\Delta,\gamma)$ as a function of $\Delta$ for increasing values of $\gamma$. It is always positive, but its peak value, at $\Delta = -4\kappa$ for the parameters of the figure, decreases with increasing $\gamma$. 
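The competition between $A_-$ and $A_+$ is easy to explore numerically. The following sketch evaluates Eqs. (\[Gamma opt\]) and (\[A\]) with the overall prefactor $P\omega_c\kappa/(\Omega M L^2)$ set to unity (all parameter values are illustrative), and checks the symmetry $A_+(\Delta,\gamma) = A_-(-\Delta,\gamma)$ together with the sign of the optical damping on the red and blue sides:

```python
# Parameters in units of kappa (illustrative values only).
kappa, Omega = 1.0, 10.0

def A_pm(Delta, gamma, sign):
    """A_pm of Eq. (A): sign=+1 gives A_+ (uses Delta - Omega), sign=-1 gives A_- (Delta + Omega)."""
    d = Delta - sign * Omega
    num = ((gamma + kappa) * (2 * gamma + kappa) ** 2
           + 2 * gamma * (d ** 2 + Delta ** 2) + kappa * Omega ** 2)
    den = ((2 * gamma + kappa) ** 2 + 4 * d ** 2) * (kappa ** 2 + Omega ** 2)
    return num / den

def gamma_opt(Delta, gamma):
    """Gamma_opt of Eq. (Gamma opt), prefactor P*omega_c*kappa/(Omega*M*L^2) set to 1."""
    return 8 * (A_pm(Delta, gamma, -1) - A_pm(Delta, gamma, +1)) / (
        (2 * gamma + kappa) ** 2 + 4 * Delta ** 2)
```

Red detuning ($\Delta = -\Omega$) gives $\Gamma_{\rm opt} > 0$ (cooling); by the antisymmetry of $A_- - A_+$ under $\Delta \to -\Delta$, the blue side gives anti-damping.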
Since $A_+(\Delta,\gamma) = A_-(-\Delta,\gamma)$, $A_+(\Delta,\gamma)$ has the same behavior for $\Delta \rightarrow -\Delta$. For a red-detuned driving laser at the peak detuning $\Delta=-4 \kappa$ the $A_-$ contribution to Eq. (\[Gamma opt\]) dominates over the $A_+$ contribution, leading to an increase in the damping rate of the mirror and in cooling. For a blue-detuned laser, on the other hand, the $A_+$ contribution dominates, leading to decreased mirror damping and to the onset of an instability for appropriate parameters. \[aplus\] The complex behavior of back-action damping as a function of the laser linewidth $\gamma$ can then be understood as a result of a delicate balance between the dependence of $A_\pm(\Delta,\gamma)$ on $\Delta$ and the dependence of the intensity of the relevant spectral components of the intracavity field on $\gamma$ (see Appendix A). Several examples of this dependence are shown in Fig. 4. For large detunings, $|\beta_0(\Delta, \gamma)|^2$ increases with $\gamma$, but this increase is not quite linear, and, of course, neither is the dependence of $A_\pm(\Delta,\gamma)$ on $\Delta$. A finite laser linewidth will therefore result in increased back-action damping for $$\begin{aligned} G(\Delta,\gamma) &>& G(\Delta,0), \\ G(\Delta,\gamma) &=& \left|\int_{- \infty}^{\infty} (A_-(\nu)-A_+(\nu)) |\beta_0(\nu-\Delta, \gamma)|^2 d\nu\right|. \nonumber\end{aligned}$$ For the reversed inequality, the laser linewidth results in a decrease in back-action damping. The situation at resonance is slightly different. Here the decrease in cold damping is simply a result of the decrease in the intensity of the spectral components about $\Delta =0$ for increasing $\gamma$, see Fig. 4. Similar arguments can be invoked to understand the behavior of the mechanical frequency shift. We observe an increase in $\Delta\Omega_{\rm opt}$ for $\Delta < -\Omega$, but a decrease for $-\Omega < \Delta < 0$ for a finite laser linewidth. This is illustrated in Fig. 
5, which shows the radiation pressure induced mechanical frequency shift in the presence of laser linewidth as a function of relative detuning, $\Delta/\kappa$, and for various laser linewidths, $\gamma/\kappa$. For most of the relevant parameter range we have $|\Delta\Omega_{\rm opt}(\gamma)| < |\Delta\Omega_{\rm opt}(\gamma = 0)|$, a result of the increase in the “effective” cavity linewidth from $\kappa$ to $\kappa + 2 \gamma$ due to the finite laser linewidth. This increase is equivalent to the softening of the radiation pressure-induced potential. In the good cavity limit $\Omega \gg \kappa$, and for $\Delta = -\Omega$, there is always an increase in the mechanical frequency shift. Specifically, for small laser linewidths ($\gamma \ll \kappa$) the mechanical frequency shift increase is given by $$\Delta\Omega_{\rm opt}(\gamma,\Delta = -\Omega) \approx \Delta\Omega_{\rm opt}(0,\Delta=-\Omega)+\left(\frac{2 P \omega_{c}\kappa}{\Omega^2 M L^2}\right) \frac{\gamma\kappa}{\Omega^2}.$$ The increase in $\Delta\Omega_{\rm opt}$ has its origin in a change in the position of its zero for negative detunings, again see Fig. 5. Similar effects occur on the heating side ($\Delta > 0$). The heating rate of the mirror is reduced near $\Delta = \Omega$ for finite laser linewidth, see Fig. 2, but in analogy to the situation on the cooling side, we note an increase in the heating rate for a range of detunings $\Delta > \Omega$. As expected, we also observe a decrease in the mechanical frequency shift for finite laser linewidths. This indicates that the detuning required to maximize the optical spring effect depends on both the mechanical frequency $\Omega$ and on the linewidth of the input laser. These considerations may play a role in the optimization of the operation of optomechanical phonon lasers. 
Minimum phonon occupation number -------------------------------- The minimum phonon occupation number for the case of an ideal, monochromatic driving laser has been discussed in several publications [@WisonRae2007; @Marquardt2007]. It is limited by the cavity decay rate, $\kappa$, and the mechanical frequency of the movable end mirror, $\Omega$. Ground state cooling can be achieved when $\kappa \ll \Omega$ and $\Delta \approx -\Omega$. In practice, a more severe limitation arises from the clamping losses associated with the mechanical support of the movable end mirror. Proposals to reduce or eliminate clamping noise include the optical levitation of the end mirror, see Ref. [@Swati2010]. In contrast to the preceding discussion, a derivation of the minimum phonon occupation number $\langle n\rangle_{\rm min}$ clearly requires a quantum mechanical description of the mirror motion. It is given by $$\langle n\rangle_{\rm min} = \frac{1}{2\pi}\int_{-\infty}^\infty d\omega S_N[\omega],$$ where the noise spectral density is given by $$\begin{aligned} S_N[\omega] &\approx& \int_{-\infty}^\infty dt e^{i\omega t} \langle \langle \hat{a}^\dagger(t)\hat{a}(0)\rangle\rangle_{\rm av} \nonumber \\ &=& \frac{ (2 \gamma+\kappa) \sigma_{\rm opt}(\omega)+\Gamma \sigma_{\rm th}(\omega)}{ |\Lambda(\omega)|^2}, \label{Spectrum}\end{aligned}$$ see Appendix B, and $\langle\rangle_{\rm av}$ is an average over the classical noise. 
Here $$\begin{aligned} \sigma_{\rm opt}(\omega) &=& \frac{4 g_0^2 |B_0|^2}{(2 \gamma+\kappa)^2+4(\omega+\Delta)^2} \left| \chi_M^{-1}(\omega)\right|^2,\nonumber \\ \sigma_{\rm th}(\omega) &=& n_M \left| \chi_M^{-1}(\omega)+\sigma^*(\omega)\right|^2+(n_M+1) |\sigma(\omega)|^2, \nonumber \\ \Lambda(\omega)&=& \chi_M^{-1}(\omega) \chi_M^{-1*}(-\omega)-2 i \Omega \sigma(\omega), \nonumber \\ \sigma(\omega) &=& g_0^2 |B_0|^2 \left[ \chi_R(\omega)-\chi_R^*(-\omega)\right] \nonumber \\ B_0 &=& \sqrt{P \kappa/ \hbar \omega_c} \chi_R(0),\end{aligned}$$ and we have introduced the mechanical and optical response functions $$\chi_M(\omega) = \frac{1}{\Gamma/2-i(\omega-\Omega)},\,\,\,\, \chi_R(\omega) = \frac{1}{\kappa/2-i(\omega+\Delta)}.$$ From the cantilever occupation number spectrum, it is a simple matter to find in the weak coupling limit ($\Gamma_{\rm opt} \ll \kappa$), $$\langle n\rangle_{\rm min} = - \frac{ (2\gamma+\kappa)(\kappa^2+4(\Delta-\Omega)^2)(\kappa^2+4(\Delta+\Omega)^2)}{16 \Delta \Omega \kappa \left(\left(2 \gamma+\kappa\right)^2+4 \left(\Delta-\Omega\right)^2\right)}.$$ This expression reduces to the result of Ref. [@Marquardt2007] for $\gamma = 0$. In the good cavity limit, $\gamma \ll \kappa \ll \Omega$ and for $\Delta =-\Omega$, $\langle n\rangle_{\rm min}$ becomes $$\langle n\rangle_{\rm min} =\frac{(2\gamma +\kappa)\kappa}{16 \Omega^2}.$$ \[nmin\] For $\Omega = 40 \kappa$ and $\gamma = 0.1\kappa$ this yields a 20 percent increase in the minimum occupation number. In other words, for a laser with narrow linewidth compared to the cavity decay rate $\kappa$ there is no significant increase in the minimum occupation number and phase noise does not pose a significant problem for ground state cooling, in agreement with the conclusions of Ref. [@Rabl2009]. 
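The quoted 20 percent figure follows directly from the good-cavity expression above; a minimal numerical check (parameter values taken from the text):

```python
def n_min(gamma, kappa, Omega):
    """Good-cavity minimum phonon number: <n>_min = (2*gamma + kappa)*kappa / (16*Omega**2)."""
    return (2 * gamma + kappa) * kappa / (16 * Omega ** 2)

kappa = 1.0
Omega = 40 * kappa
ratio = n_min(0.1 * kappa, kappa, Omega) / n_min(0.0, kappa, Omega)  # 1.2, i.e. a 20% increase
```

For $\gamma = 0$ the expression reduces to $\kappa^2/16\Omega^2$, the monochromatic-laser result.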
In the strong cooling regime, ($g_0^2 \left|B_0\right|^2 \gg \Gamma \kappa$), the effects of a relatively narrow laser linewidth are even less dramatic and likewise do not pose a serious problem for ground state cooling. This is illustrated in Fig. 6, which shows the minimum occupation number compared to the zero-linewidth case as a function of $\gamma$ and detuning $\Delta$. We observe that the effects of laser linewidth are suppressed near $\Delta = -\Omega$. For very large $\gamma/\kappa$, though, the occupation number does increase significantly. Parametric driving ================== Following an original proposal by Braginsky, Vorontsov and Thorne [@Braginsky1980], Clerk [*et al.*]{} [@Clerk2008] have recently shown that by modulating the driving laser frequency at the mechanical frequency $\Omega$ and driving on cavity resonance $\omega_{\rm c}$, $$E(t) = \sqrt{\frac{P}{c \epsilon_0 \sigma}} \sin(\Omega t),$$ where $P$ is the maximum laser power and $\omega_\ell = \omega_{\rm c}$, it is possible to minimize back-action heating of one of the quadratures of COM mirror motion $$\begin{aligned} \hat{X} &=& \frac{1}{\sqrt{2}}\left[\hat{a}e^{i \Omega t}+\hat{a}^\dagger e^{-i \Omega t}\right] \nonumber \\ \hat{Y} &=& -\frac{i}{\sqrt{2}}\left[\hat{a} e^{i\Omega t}-\hat{a}^\dagger e^{-i\Omega t}\right]. \label{quadratures}\end{aligned}$$ Because laser phase noise places additional limitations on cooling, we expect that the phase noise will also increase the back-action heating of one of the quadratures. We consider a measurement of one of the quadratures in the weak coupling limit, $g_0^2|B_0|^2\ll \kappa^2$. With these constraints and following a similar method to that outlined in Appendix B, it is possible to find the time averaged variance of the cosine and sine quadratures. 
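The quadratures of Eq. (\[quadratures\]) are defined in a frame co-rotating at $\Omega$, so for a classical amplitude $a(t) = \alpha e^{-i\Omega t}$ they are constants of the free motion — which is what makes a single-quadrature back-action-evading measurement possible. A short check (the amplitude $\alpha$ is an arbitrary illustrative value):

```python
import cmath, math

Omega = 2 * math.pi
alpha = 0.8 - 0.3j  # hypothetical classical COM amplitude

def quadratures(t):
    """X and Y of Eq. (quadratures) evaluated on a(t) = alpha * exp(-i*Omega*t)."""
    a = alpha * cmath.exp(-1j * Omega * t)
    X = ((a * cmath.exp(1j * Omega * t) + a.conjugate() * cmath.exp(-1j * Omega * t)) / math.sqrt(2)).real
    Y = ((-1j) * (a * cmath.exp(1j * Omega * t) - a.conjugate() * cmath.exp(-1j * Omega * t)) / math.sqrt(2)).real
    return X, Y

samples = [quadratures(0.13 * k) for k in range(8)]
spreadX = max(s[0] for s in samples) - min(s[0] for s in samples)
spreadY = max(s[1] for s in samples) - min(s[1] for s in samples)
```

Both quadratures evaluate to the constants $\sqrt{2}\,{\rm Re}\,\alpha$ and $\sqrt{2}\,{\rm Im}\,\alpha$ at all times.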
For $\gamma,\Gamma \ll \kappa, \Omega$ we find $$\begin{aligned} \Delta \hat{X}^2&=& \frac{\Omega}{2 \pi}\int_0^{\frac{2\pi}{\Omega}} \Delta \hat{X}^2(t) dt = \frac{1}{2}\left(2 n_M +1\right)+ \nonumber \\ &+&48 \frac{|b_0|^2 g_0^2 \kappa}{\Gamma}\Big[\frac{\kappa (\kappa^2+12 \Omega^2)}{(\kappa^2+4\Omega^2)^2(\kappa^2+16 \Omega^2)} \nonumber \\ &+& 3 \gamma\frac{512 \Omega^8+352 \Omega^6 \kappa^2-104 \kappa^4 \Omega^4 -20\kappa^6 \Omega^2-\kappa^8}{(\kappa^2+\Omega^2)(\kappa^2+4\Omega^2)^3(\kappa^2+16 \Omega^2)^2} \Big]\nonumber \\ \Delta \hat{Y}^2&=&\frac{\Omega}{2 \pi}\int_0^{\frac{2\pi}{\Omega}} \Delta \hat{Y}^2(t) dt = \Delta \hat{X}^2+ \nonumber \\ &+&32 \frac{|b_0|^2 g_0^2 }{\Gamma}\Big[\frac{ (4 \Omega^2 -\kappa^2)}{(\kappa^2+4\Omega^2)^2} \nonumber \\ &-& \gamma\frac{32 \Omega^6+24 \kappa^2 \Omega^4+16 \kappa^4 \Omega^2-3 \kappa^6}{\kappa(\kappa^2+\Omega^2)(\kappa^2+4\Omega^2)^3} \Big], \label{TimeAvgVariance}\end{aligned}$$ where $|b_0|^2 = P/ \hbar \omega_c$, and we have taken a time average. In the good cavity limit $\kappa \ll \Omega$ these expressions reduce to $$\begin{aligned} \Delta \hat{X}^2 &\approx& \frac{1}{2}(n_M+1)+g_0^2 |b_0|^2 \frac{9 \kappa (\kappa+2 \gamma)}{4\Gamma \Omega^4},\nonumber \\ \Delta \hat{Y}^2 &\approx&\frac{1}{2}(n_M+1)+g_0^2 |b_0|^2 \frac{9 \kappa (\kappa+2 \gamma)}{4\Gamma \Omega^4}\nonumber \\ &+& 8 g_0^2 |b_0|^2 \frac{\kappa-2 \gamma}{\kappa \Gamma\Omega^2}+16 g_0^2 |b_0|^2 \frac{\gamma \kappa}{\Gamma \Omega^4}. \label{quad}\end{aligned}$$ Equations (\[TimeAvgVariance\]) and (\[quad\]) show that the laser phase noise results in an increase in fluctuations of the quadratures of COM motion. What may appear surprising is that a contribution proportional to $\gamma$ is preceded by a minus sign in $\Delta {\hat Y}^2$. 
Keeping in mind that these results are only valid in the limit $\gamma, \Gamma \ll \kappa, \Omega$, we emphasize that this [*does not*]{} imply that phase diffusion results in a reduction in fluctuations in the cosine quadrature, but merely that its variance increases more slowly than the variance of the sine quadrature. This can be understood intuitively from the fact that while a perfectly sinusoidal driving field provides an optimal back-action evasion method for the cosine quadrature [@Clerk2008], phase noise in the driving laser translates into intracavity intensity fluctuations about zero frequency. These fluctuations increasingly overwhelm the back-action evasion provided by the sinusoidal drive, resulting in the effect of the sinusoidal drive being reduced in relative importance, and additional heating in each quadrature due to laser phase noise. In the limit $\gamma \gg \kappa$ one would expect both quadratures to be heated equally, which means the variance of the sine quadrature must ‘catch up’ with that of the cosine quadrature. Because the back-action heating of the sine quadrature is proportional to the mean intra-cavity photon number, the effect of back-action can easily be limited to an acceptable level, even in the presence of laser phase noise. A comparison of the heating of the cosine quadrature to the sine quadrature due to phase diffusion is shown in Fig. \[Parametric\] as a function of mechanical frequency $\Omega$ and laser linewidth $\gamma$. We see that the cosine quadrature is heated by nearly a full order of magnitude for a laser linewidth of $\gamma = 0.3 \kappa$. This implies that larger laser linewidths can hinder this back-action evasion method and well stabilized lasers are necessary for employing this method. 
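These trends are easy to verify from the good-cavity expressions of Eq. (\[quad\]). The sketch below uses illustrative parameter values chosen deep in the regime $\gamma, \Gamma \ll \kappa \ll \Omega$ where the expressions hold:

```python
def var_X(nM, g2b2, kappa, Gamma, Omega, gamma):
    """Time-averaged cosine-quadrature variance, good-cavity limit of Eq. (quad)."""
    return 0.5 * (nM + 1) + g2b2 * 9 * kappa * (kappa + 2 * gamma) / (4 * Gamma * Omega ** 4)

def var_Y(nM, g2b2, kappa, Gamma, Omega, gamma):
    """Time-averaged sine-quadrature variance, good-cavity limit of Eq. (quad)."""
    return (var_X(nM, g2b2, kappa, Gamma, Omega, gamma)
            + 8 * g2b2 * (kappa - 2 * gamma) / (kappa * Gamma * Omega ** 2)
            + 16 * g2b2 * gamma * kappa / (Gamma * Omega ** 4))

# Illustrative parameters: nM thermal phonons, g2b2 = g0^2 |b0|^2.
nM, g2b2, kappa, Gamma, Omega = 10.0, 1.0, 1.0, 0.01, 20.0
dX0 = var_X(nM, g2b2, kappa, Gamma, Omega, 0.0)           # no phase noise
dX = var_X(nM, g2b2, kappa, Gamma, Omega, 0.1 * kappa)    # gamma = 0.1 kappa
dY = var_Y(nM, g2b2, kappa, Gamma, Omega, 0.1 * kappa)
dY0 = var_Y(nM, g2b2, kappa, Gamma, Omega, 0.0)
```

For $\gamma < \kappa/2$ both quadratures are heated by phase noise, with the sine quadrature remaining the more strongly heated of the two, consistent with the discussion above.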
\[parametricdrive\] Conclusions =========== We have analyzed the effects of laser phase noise in the dynamics of a generic optomechanical system, considering both single-frequency driving that can result either in back-action damping or mechanical amplification, and parametric driving useful for the generation of squeezing and back-action evading measurement schemes. We showed that laser phase noise reduces the effectiveness of backaction damping and softens the effects of a mechanical frequency shift. Additionally, we observed an increase in the minimum phononic occupation number of the mechanical element that remains however modest for $\gamma \ll \kappa$. It was concluded that ground state cooling can easily be achieved with a well stabilized laser. When extending the results of Clerk [*et al.*]{} [@Clerk2008] on back-action evasion to include the influence of laser phase noise we showed that the laser phase noise results in additional heating of both sine and cosine quadratures, as expected. Overall, though, we have shown that for narrow laser linewidths such that $\gamma \ll \kappa$ the contribution from this noise source remains small. Future work will extend these results to the preparation and detection of nonclassical mechanical states, including squeezed states, number states, and Schr[ö]{}dinger cat states, and the analysis of quantum state transfer between mechanical and electromagnetic degrees of freedom. We thank S. Singh, D. Goldbaum, S. Steinke and E. M. Wright for stimulating discussions, and P. Rabl for insightful comments on radiation pressure induced cooling rates. This work is supported by the US National Science Foundation, the DARPA ORCHID program, the US Army Research Office, US Office of Naval Research, and the University of Arizona/NASA Space Grant. Cold damping and mechanical frequency shift =========================================== The dynamics of the mirror and the intracavity light field are given by Eqs. (\[classical\_langevin\]). 
We consider in the following the simplified case of classical COM motion at frequency $\Omega$ in the absence of light field, $$\label{ansatz} x(t) = x_0 \sin(\Omega t).$$ This simplification is sufficient to determine the cooling rates and mechanical frequency shifts from an initially classical state. (Note that $x_0$ is bounded from below by the zero point motion $x_0 \geq \sqrt{\hbar/2 M \Omega}$.) We proceed by substituting the ansatz (\[ansatz\]) into Eq. (\[classical\_langevin\]) for the light field and solve for $\beta(t)$. Integrating that equation formally gives $$\beta\left(t\right) = \beta\left(0\right) e^{\left[i\Delta-\frac{1}{2}\kappa\right]t-i\epsilon [\cos(\Omega t)-1]}+\beta_P(t),$$ where $\epsilon = \omega_c x_0/ L \Omega$ and $\beta_P(t)$ is the contribution of the driving laser field, given explicitly by $$\beta_P(t) = \sqrt{\kappa P} e^{[i\Delta-\frac12\kappa]t-i \frac{\omega_c x_0}{\Omega L}\cos(\Omega t)} \sum_{n = -\infty}^{\infty} i^n J_n(\epsilon)\int_0^t e^{i (n \Omega -\Delta ) s +\frac{1}{2}\kappa s}e^{i\phi (s)} ds.$$ In deriving this expression we have used the Jacobi-Anger expansion on the $\exp[i \epsilon \cos(\Omega s)]$ term, and $J_n(z)$ is a Bessel function of the first kind. In the following we ignore the free transients compared to the relevant driven contribution to the intracavity field, resulting in the intracavity normalized intensity $$|\beta_P(t)|^2 = \kappa P e^{-\kappa t} \sum_{n_1,n_2 = -\infty}^{\infty} i^{n_1-n_2} J_{n_1}(\epsilon)J_{n_2}(\epsilon) \int_0^t \int_0^t e^{i (n_1\Omega-\Delta)s-i (n_2 \Omega-\Delta)s'+\frac{1}{2}\kappa (s+s')} e^{i (\phi(s)-\phi(s'))}ds'ds.$$ We include the effect of the Lorentzian spectrum of the driving laser via an ensemble average over the random phase noise (\[noise\]), $$\left\langle e^{i [\phi(s)-\phi(s')]}\right\rangle _{\rm av} = e^{-\gamma \left|s-s'\right|}$$ to find the ensemble-averaged intracavity normalized intensity $\langle |\beta_P(t)|^2\rangle_{\rm av}$. 
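The exponential decorrelation just used, $\langle e^{i[\phi(s)-\phi(s')]}\rangle_{\rm av} = e^{-\gamma|s-s'|}$, corresponds to Gaussian (Wiener) phase statistics: the increment $\phi(s)-\phi(s')$ is Gaussian with variance $2\gamma|s-s'|$, so the average is $e^{-\gamma|s-s'|}$. A quick Monte-Carlo sketch (illustrative values; fixed seed for reproducibility) confirms this:

```python
import math, random

random.seed(0)
gamma, tau, N = 0.5, 1.0, 200_000

# Phase increment over lag tau is Gaussian with variance 2*gamma*tau.
sigma = math.sqrt(2 * gamma * tau)
acc = 0.0
for _ in range(N):
    acc += math.cos(random.gauss(0.0, sigma))  # imaginary part averages to zero by symmetry
avg = acc / N
expected = math.exp(-gamma * tau)
```

With $2\times 10^5$ samples the Monte-Carlo average agrees with $e^{-\gamma\tau}$ to well within the statistical error.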
In most cases of practical interest in cavity optomechanics we have that $\epsilon \ll 1$. Keeping only terms linear in $\epsilon$, we find $$\left\langle |\beta_P(t)|^2\right\rangle_{\rm av} \approx |\beta_0|^2- M L\left[\Omega_{\rm opt}^2 x(t)+\Gamma_{\rm opt} \dot{x}(t)\right],$$ where $|\beta_0|^2$ is given explicitly in Eq. (\[beta0\]). Considering $|\Omega_{\rm opt}|^2 \ll \Omega^2$ for our ansatz, we have the effective mechanical frequency given by $$\Omega_{\rm eff} = \sqrt{\Omega^2+\Omega_{\rm opt}^2} \approx \Omega+\frac{1}{2\Omega}\Omega_{\rm opt}^2 = \Omega+\Delta\Omega_{\rm opt}.$$ The explicit form of the frequency shift $\Delta \Omega_{\rm opt}$ is given in Eq. (\[omega\]). This shift is due to the component of the light field that is in-phase with the mirror oscillations. The mechanical damping $\Gamma_{\rm opt}$ is given explicitly in Eq. (\[Gamma opt\]), and is due to the out-of-phase components of the light field. Cantilever occupation number spectrum ===================================== Our starting point is the linearized equations of motion Eqs. (\[LinearA\],\[LinearD\],\[LinearB\]) and the electric field $|E|^2 = P/(c \epsilon_0 \sigma)$. These equations of motion are conveniently manipulated in the Fourier domain. 
Introducing the Fourier transform of an arbitrary operator $\hat{c}(t)$ as $$\begin{aligned} C[\omega] &= \int_{-\infty}^\infty dt \hat{c}(t) e^{i\omega t}\\ C^\dagger[\omega] &= \int_{-\infty}^\infty dt \hat{c}^\dagger(t) e^{i\omega t},\end{aligned}$$ leads to the Fourier space coupled equations for the cantilever and light field $$A[\omega] = \chi_M(\omega)\left[-\sqrt{\Gamma} A_{in}[\omega]+i \frac{g_{\rm 0}}{2 \pi} F[\omega]\right], \nonumber$$ $$D[\omega] = \chi_R(\omega)\left[-\sqrt{\kappa} D_{\rm in}[\omega]+i \frac{g_{\rm 0}}{2 \pi} \bar{B}[\omega]\star\left(A^\dagger[\omega]+A[\omega]\right)\right], \nonumber$$ where $$\begin{aligned} F[\omega] &=& -\sqrt{\kappa}\Big[\bar{B}^\dagger[\omega]\star \left(\chi_R[\omega] D_{\rm in}[\omega]\right) + \bar{B}[\omega]\star(\chi_R^*(-\omega) D^\dagger_{\rm in}[\omega])\Big] + i\frac{g_{\rm 0}}{2 \pi}\Big\{ \bar{B}^\dagger[\omega]\star\left[\chi_R(\omega) \left(\bar{B}[\omega]\star(A[\omega]+A^\dagger[\omega])\right)\right]\nonumber \\ &-&\bar{B}[\omega]\star\left[\chi_R^*(-\omega) \left(\bar{B}^\dagger[\omega]\star(A[\omega]+A^\dagger[\omega])\right)\right]\Big\}. \nonumber\end{aligned}$$ Here the convolution of two arbitrary functions is as usual $$f(\omega)\star g(\omega) \equiv \int_{-\infty}^\infty dx f(x) g(\omega-x), \nonumber$$ and for small laser linewidths $\gamma \ll \kappa$ we have $$\bar{B}[\omega] \approx -2 \pi \chi_R[\omega]\sqrt{\kappa} b_0 \delta(\omega). \nonumber$$ With this explicit form, $F(\omega)$ reduces to $$F[\omega] \approx -\sqrt{\kappa}\Big[\bar{B}^\dagger[\omega]\star \left(\chi_R[\omega] D_{\rm in}[\omega]\right)+\bar{B}[\omega]\star(\chi_R^*(-\omega) D^\dagger_{\rm in}[\omega])\Big]\nonumber + i 2 \pi g_{\rm 0}\left|\bar{B_0}\right|^2\left( \chi_R(\omega) -\chi_R^*(-\omega)\right) (A[\omega]+A^\dagger[\omega]) , \nonumber$$ where $\bar{B}_0 = \sqrt{\kappa} b_0 \chi_R[0]$. 
In this approximation we can easily find a solution for $A[\omega]$ as $$A[\omega] \approx \frac{\chi_M(\omega)}{\Sigma[\omega]} \Big[ i g_0 F_0[\omega]-\sqrt{\Gamma} A_{\rm in}[\omega] +\sqrt{\Gamma} g_0^2 |B_0|^2 \chi_M^*(-\omega) \left[A_{\rm in}[\omega]+A_{\rm in}^\dagger[\omega] \right]\times \left\{\chi_R(\omega)-\chi_R^*(-\omega)\right\}\Big],$$ where $$\Sigma[\omega] = 1+g_0^2 |B_0|^2 \left(\chi_M(\omega)-\chi_M^*(-\omega)\right)\left(\chi_R(\omega)-\chi_R^*(-\omega)\right), \nonumber$$ and $$F_0[\omega] = -\sqrt{\kappa}\Big[\bar{B}^\dagger[\omega]\star \left(\chi_R(\omega) D_{\rm in}[\omega]\right) + \bar{B}[\omega]\star(\chi_R^*(-\omega) D^\dagger_{\rm in}[\omega])\Big]. \nonumber$$ With the two-frequency noise input correlations: $$\begin{aligned} \langle D_{\rm in}^\dagger (\omega)D_{\rm in}(\omega')\rangle &=& 0\nonumber \\ \langle D_{\rm in} (\omega) D_{\rm in}^\dagger(\omega')\rangle &=& 2\pi\delta(\omega+\omega')\nonumber \\ \langle A_{\rm in}^\dagger (\omega) A_{\rm in}(\omega')\rangle &=& 2 \pi n_{M}\delta(\omega+\omega')\nonumber \\ \langle A_{\rm in}(\omega) A_{\rm in}^\dagger(\omega')\rangle &=& 2\pi(n_{M}+1)\delta(\omega+\omega'),\nonumber\end{aligned}$$ which are equivalent to the two-time correlations of Eqs. (\[corr\_din\], \[corr\_ain\]), it is a simple matter to find $$S_N[\omega] = \int_{-\infty}^\infty \frac{d\omega'}{2\pi} \langle\langle A^\dagger[\omega]A[\omega']\rangle\rangle_{\rm av}. \label{averageN}$$ [10]{} For a recent pedagogical review see T. J. Kippenberg and K. J. Vahala, Optics Express [**15**]{}, 17173 (2007). A. D. O’Connell *et al.*, Nature [**464**]{}, 697 (2010). W. Marshall, C. Simon, R. Penrose, D. Bouwmeester, Phys. Rev. Lett. [**91**]{}, 130401 (2003). A. D. Armour, M. P. Blencowe, and K. C. Schwab, Phys. Rev. Lett [**88**]{}, 148301 (2002). K. Helmerson and W. D. Phillips, Riv. Nuovo Cimento [**31**]{}, 141 (2008). S. Singh, G. A. Phelps, D. S. Goldbaum, E. M. Wright and P. Meystre, arXiv:1005.3568 (2010). D. 
Chang, C. Regal, S. Papp, D. Wilson, J. Ye, O. Painter, H. Kimble, and P. Zoller, PNAS [**107**]{}, 1005 (2010). O. Romero-Isart, M. L. Juan, R. Quidant, and J. I. Cirac, New Journal of Physics [**12**]{}, 033015 (2010). P. Rabl, C. Genes, K. Hammerer, M. Aspelmeyer, Phys. Rev. A [**80**]{}, 063819 (2009). L. Diosi, Phys. Rev. A [**78**]{}, 021801 (2008). Z. Yin, Phys. Rev. A [**80**]{}, 033821 (2009). T. J. Kippenberg [*et al.*]{} Phys. Rev. Lett. [**95**]{}, 033901 (2005); H. Rokhsari [*et al*]{}, Optics Express [**13**]{}, 5293 (2005); T. Carmon [*et al.*]{} Physical Review Letters [**94**]{}, 223902 (2005). K. Vahala, M. Herrmann, S. Knünz, V. Batteiger, G. Saathoff, T. W. Hänsch, and Th. Udem, Nature Physics [**5**]{}, 682 (2009). I. Grudinin, H. Lee, O. Painter, K. J. Vahala, Phys. Rev. Lett [**104**]{}, 083901 (2010). V. Braginsky, Y. I. Vorontsov, and K. P. Thorne, Science [**209**]{}, 547 (1980). A. A. Clerk, F. Marquardt, and K. Jacobs, New J. Phys. [**10**]{} 095910 (2008). C. K. Law, Phys. Rev. A. [**51**]{}, 2537 (1995). V. Giovannetti and D. Vitali, Phys. Rev. A. [**63**]{}, 023812 (2001). G. Milburn and D.F. Walls, [*Quantum Optics*]{} (Springer-Verlag, 1994). I. Wilson-Rae, N. Nooshi, W. Zwerger, and T. J. Kippenberg, Phys. Rev. Lett [**99**]{}, 093901 (2007). F. Marquardt, J. P. Chen, A. A. Clerk, and S. M. Girvin, Phys. Rev. Lett. [**99**]{}, 093902 (2007).
--- abstract: | A Fibonacci heap is a deterministic data structure implementing a priority queue with optimal amortized operation costs. An unfortunate aspect of Fibonacci heaps is that they must maintain a “mark bit” which serves only to ensure efficiency of heap operations, not correctness. Karger proposed a simple randomized variant of Fibonacci heaps in which mark bits are replaced by coin flips. This variant still has expected amortized cost $O(1)$ for insert, decrease-key, and merge. Karger conjectured that this data structure has expected amortized cost $O(\log s)$ for delete-min, where $s$ is the number of heap operations. We give a tight analysis of Karger’s randomized Fibonacci heaps, resolving Karger’s conjecture. Specifically, we obtain matching upper and lower bounds of $\Theta(\log^2 s / \log \log s)$ for the runtime of delete-min. We also prove a tight lower bound of $\Omega(\sqrt{n})$ on delete-min in terms of the number of heap elements $n$. The request sequence used to prove this bound also solves an open problem of Fredman on whether cascading cuts are necessary. Finally, we give a simple additional modification to these heaps which yields a tight runtime $O(\log^2 n / \log \log n)$ for delete-min. author: - Jerry Li - John Peebles bibliography: - 'fib-heaps-paper.bib' title: Replacing Mark Bits with Randomness in Fibonacci Heaps --- Acknowledgments =============== We would like to thank David Karger for making us aware of this problem and for pointing out that our analysis actually gave us something tighter than we originally thought.
--- abstract: 'The causal interpretation of quantum mechanics is applied to a homogeneous and isotropic quantum universe, whose matter content is composed of non interacting dust and radiation. For wave functions which are eigenstates of the total dust mass operator, we find some bouncing quantum universes which reach the classical limit for scale factors much larger than their minimum size. However these wave functions do not have unitary evolution. For wave functions which are not eigenstates of the dust total mass operator but do have unitary evolution, we show that, for flat spatial sections, matter can be created as a quantum effect in such a way that the universe can undergo a transition from an exotic matter dominated era to a matter dominated one.' author: - 'N. Pinto-Neto' - 'E. Sergio Santini' - 'F. T. Falciano' title: 'Quantization of Friedmann cosmological models with two fluids: dust plus radiation' --- Introduction ============ The Bohm-de Broglie (BdB) interpretation [@bohm1][@bohm2][@hol] has been successfully applied to quantum minisuperspace models [@vink; @bola1; @kow; @hor; @bola2; @fab; @fab2], and to full superspace [@must] [@cons] [@tese]. In the first case, the classical limit, the singularity problem, the cosmological constant problem, and the time issue were discussed. It was shown in scalar field and radiation models for the matter content of the early universe that quantum effects driven by the quantum potential can avoid the formation of a singularity through a repulsive quantum force that counteracts the gravitational attraction. The quantum universe usually reaches the classical limit for large scale factors. However, it is possible to have small classical universes and large quantum ones: it depends on the state vector and on initial conditions [@fab]. It was also shown that the quantum evolution of homogeneous hypersurfaces forms the same four-geometry independently of the choice of the lapse function [@bola1]. 
In the present work we study the minisuperspace model given by a quantum Friedmann-Lemaître-Robertson-Walker (FLRW) universe filled with dust and radiation decoupled from each other. We write down the hamiltonian that comes from the velocity potential Schutz formalism [@schutz1]. After implementing a canonical transformation, the momenta associated to the radiation fluid $p_{T}$ and to the dust fluid $p_{\varphi}$ appear linearly in the superhamiltonian constraint. Both can be associated to time parameters, but physical reasons and mathematical simplicity led us to choose the coordinate $T$ associated with $p_{T}$ as the time parameter. This is equivalent to choosing the (reversed) conformal time. We quantize this system obtaining a Schrödinger-like equation. We analyze its time dependent solutions applying the BdB interpretation in order to study the scale factor quantum dynamics. We first consider an initial quantum state given by a gaussian superposition of the scale factor which is an eigenstate of the total dust mass operator (matter is neither created nor destroyed in such states), and we compute the solution at a general subsequent time by means of the propagator approach. We calculate the bohmian trajectories for the scale factor. For flat and negative curvature spatial sections, we find that the quantum solutions for the scale factor reach the classical behaviour for long times, but do not present any initial singularities due to quantum effects. In the same way, in the case of positive curvature spatial sections, the classical initial and final singularities are removed due to quantum effects, and the scale factor oscillates between a minimum and a maximum size. For large scale factor, the classical behaviour is recovered. However, such eigenfunctions of the total dust mass operator do not have unitary evolution. This led us to consider an initial state given by gaussian superpositions of the total dust matter content. 
In this situation, dust and radiation can be created and destroyed. We calculate general solutions for flat, negative and positive curvature spatial sections. In particular, for flat spatial sections, we construct a wave packet whose quantum trajectories represent universes which begin classically in an epoch where the dust matter has negative energy density (exotic dust matter), evolving unitarily to a configuration where quantum effects avoid the subsequent classical big crunch singularity, performing a graceful exit to an expanding classical model filled with conventional matter and radiation. There is thus a transition from an exotic matter era to a conventional matter one due to quantum effects. This paper is organized as follows. In section \[bdbs\] we summarize the basic features of the Bohm-de Broglie interpretation of quantum mechanics, which will be necessary to interpret the quantum model studied in the following sections. In section \[drs\], we briefly summarize the velocity potential Schutz formalism, and we apply it to construct the hamiltonian of the FLRW universe filled with two perfect fluids, namely dust and radiation. We then review and analyze the classical features of the two-fluid FLRW model in order to obtain the results to be contrasted with the quantum models of the following sections. In section \[1f\], we present some new results concerning the existence of singularities in the quantization of the one fluid case. We show that, when the fluid is radiation, no quantum solution presents a singularity. In section \[quantum\], we quantize the model with two fluids, and we compute the solutions of the Schrödinger-like equation for two different initial conditions: the first being an eigenstate of the total dust matter operator, and the second a gaussian superposition of total dust matter eigenstates. We interpret the solutions according to the BdB view and we develop the main results of the paper. 
Section \[conclu\] is for discussion and conclusions. The Bohm-de Broglie interpretation of quantum mechanics {#bdbs} ======================================================= In this section, we briefly review the basic principles of the Bohm-de Broglie (BdB) interpretation of quantum mechanics. According to this causal interpretation, an individual physical system comprises a wave $\Psi(x,t)$, which is a solution of the Schrödinger equation, and a point particle that follows a trajectory ${x}(t)$, independent of observations, which is a solution of the Bohm guidance equation $$\label{bohmg} p=m\dot{x}=\nabla S(x,t)|_{x=x(t)} ,$$ where $S(x,t)$ is the phase of $\Psi$. In order to solve Eq.(\[bohmg\]), we have to specify the initial condition $x(0)=x_0$. The uncertainty in the initial conditions defines an ensemble of possible motions [@bohm1][@bohm2][@hol]. It is sufficient for our purposes to analyze the Schrödinger equation for a non relativistic particle in a potential $V(x)$, which, in coordinate representation, is $$\label{s} i\frac{\partial\Psi(x,t)}{\partial t}= \biggl[-\frac{1}{2m}\nabla^2 +V(x)\biggr]\Psi(x,t) .$$ Substituting in (\[s\]) the wave function in polar form, $\Psi=A \exp (iS)$, and separating into real and imaginary parts, we obtain the following two equations for the fields $A$ and $S$ $$\label{equacaoHJ} \frac{\partial {S}}{\partial t}+\frac{\left(\nabla S\right)^2}{2m} + V-\frac{1}{2m}\frac{\nabla^2 A}{A}=0 ,$$ $$\frac{\partial A^2}{\partial t}+\nabla\cdot\left(A^2\frac{\nabla S}{m}\right)=0 .$$ Equation (\[equacaoHJ\]) can be interpreted as a Hamilton-Jacobi type equation for a particle subjected to a potential given by the classical potential $V(x)$ plus a [*quantum potential*]{} defined as $$\label{qpote} Q\equiv -\frac{1}{2m}\frac{\nabla^2 A}{A} .$$ It is possible to verify that the particle trajectory $x(t)$ satisfies the equation of motion $$m\frac{d^2 x}{dt^2}=-\nabla V - \nabla Q .$$ The classical limit is obtained when $Q=0$. 
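As an illustration of the guidance equation and the ensemble of motions it defines, the sketch below integrates the Bohmian trajectory of a freely spreading gaussian packet and compares it with the known closed form. This is a standard textbook example, not a model from this paper, and all parameter values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (not from the paper): Bohmian trajectories of a
# freely spreading gaussian packet, with hbar = 1. For an initial wave
# function Psi_0 ~ exp(-x^2/(4 sigma0^2)), the exact phase S(x,t) gives
# the velocity field v(x,t) = (1/m) dS/dx = x t / (tau^2 + t^2), with
# tau = 2 m sigma0^2, and the trajectories x(t) = x0 sqrt(1 + (t/tau)^2).
m, sigma0 = 1.0, 0.5          # assumed mass and initial width
tau = 2.0 * m * sigma0**2

def velocity(t, x):
    # Bohm guidance equation dx/dt = (1/m) dS/dx for this packet
    return x * t / (tau**2 + t**2)

x0 = 1.0                      # assumed initial condition x(0)
sol = solve_ivp(velocity, (0.0, 10.0), [x0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t_grid = np.linspace(0.0, 10.0, 50)
x_num = sol.sol(t_grid)[0]
x_exact = x0 * np.sqrt(1.0 + (t_grid / tau)**2)
max_err = np.max(np.abs(x_num - x_exact))
```

Different choices of $x_0$ trace out the non-crossing ensemble of trajectories mentioned above; each is carried outward as the packet spreads.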
The BdB interpretation does not need a classical domain outside the quantized system to generate the physical facts out of potentialities. In a real measurement, we do not see superpositions of the pointer apparatus because the measurement interaction causes the wave function to split up into a set of non overlapping packets. The particle will enter one of them, the rest being empty, and it will be influenced by the unique quantum potential coming from the sole non zero wave function of this region. The particle cannot pass to another branch of the superposition because the branches are separated by regions where $\Psi=0$, nodal regions. In section \[quantum\], the FLRW minisuperspace model containing dust and radiation as two perfect decoupled fluids will be quantized. A preferred time variable can be chosen as one of the velocity potentials associated with the fluids (radiation), yielding a Schrödinger-like equation. Then the BdB interpretation of our quantum model proceeds as in Ref. [@bola2], in close analogy to the non relativistic particle model described above. In the present case, however, the scale factor of the universe will not be the only degree of freedom: the velocity potential associated with the dust field and its canonical momentum, interpreted as the total dust mass, are also present. They satisfy a Hamilton-Jacobi equation modified by an extra term, the quantum potential, so that their time evolution will be different from the classical one. We describe the main features of this classical model in the following section. Classical dust plus radiation model in the velocity potential Schutz formalism {#drs} ============================================================================== We start by considering a perfect fluid in a FLRW universe model. 
The line element is given by $$\label{metric} ds^{2}=-N^{2}dt^{2}+a^{2}\left(t\right)\gamma_{ij}dx^{i}dx^{j}$$ where $N$ is the lapse function, $a$ is the scale factor, and $\gamma_{ij}$ is the metric of the three-dimensional homogeneous isotropic static spatial section of constant curvature $\kappa=1, 0,$ or $-1$. Following Schutz’s canonical formalism to describe the relativistic dynamics of a perfect fluid in interaction with the gravitational field [@schutz1], we introduce the five velocity potentials, $ \alpha, \beta, \theta, \varphi$ and $s$. The potentials $\alpha$ and $\beta$, which describe vortex motion, vanish in the FLRW model because of its symmetry. The potential $s$ is the specific entropy and $\theta$ can be related to the temperature of the fluid. For now, $\varphi$ works only as a mathematical tool. The four-velocity of the fluid is obtained from the velocity potentials as $$\mathit{U}_{\nu}=\frac{1}{\mu}\left(\varphi,_{\nu} +\theta\,s,_{\nu}\right),$$ where $\mu$ stands for the specific enthalpy. The four velocity is normalized as $$g_{\alpha\,\beta}\mathit{U}^{\alpha}\mathit{U}^{\beta}=-1 .$$ Using this equation, it is possible to write the specific enthalpy $\mu$ as a function of the velocity potentials. The action for a relativistic perfect fluid and the gravitational field in the natural units $c=\hbar=1$ is given by $$\label{A} I = -\frac{1}{6l_p^2}\int_{M}d^{4}x\sqrt{-g}\, ^{4}{\cal R}+ \int_{M}d^{4}x\sqrt{-g}\, p+ \frac{1}{3l_p^2}\int_{\partial M} d^{3}x\sqrt{h}h_{ij}K^{ij},$$ where $l_p\equiv(8\pi G/3)^{-1/2}$, $G$ is Newton’s constant (hence $l_p$ is the Planck length in the natural units), $^{4}{\cal R}$ is the scalar curvature of the spacetime, $p$ is the pressure of the fluid, $h_{ij}$ is the three metric on the boundary $\partial M$ of the 4-dimensional manifold $M$, and $K^{ij}$ its extrinsic curvature. The velocity potentials are supposed to be functions of $t$ only, in accordance with the homogeneity of spacetime. 
The perfect fluid follows the equation of state $p=\lambda \rho$. Substituting the metric (\[metric\]) into the action (\[A\]), using the formalism of Schutz [@schutz1] to write the pressure of the fluid as $$p= p_{0r}\left[ \frac{\dot{\varphi}+\theta\dot{s}} {N(\lambda+1)} \right]^{\frac{\lambda+1}{\lambda}} \exp{\left(-\frac{s}{s_{0r}\lambda} \right)},$$ with $p_{0r}$ and $s_{0r}$ constants, computing the canonical momenta $p_{\varphi},p_{s}, p_{\theta}$ for the fluid and $p_a$ for the gravitational field, using the two constraint equations $p_{\theta}=0, \,\,\, \theta p_{\varphi}=p_s $, and performing the canonical transformation $$T=-\frac{p_s}{6^{1-3\lambda}}\exp \left( -\frac{s}{s_{0r}}\right) p_\varphi^{-(\lambda+1)}\rho_{0r}^{\lambda}s_{0r}, \label{can1}$$ and $$\varphi_N=\varphi + (\lambda + 1)s_{0r} \frac{p_s}{p_\varphi}, \label{can3}$$ leading to the momenta $$p_{_T}=6^{1-3\lambda} \frac{p_\varphi^{(\lambda+1)}}{\rho_{0r}^{\lambda}} \exp\left(\frac{s}{s_{0r}}\right), \label{can2}$$ and $$p_{\varphi_N}=p_{\varphi}, \label{can4}$$ we obtain for the final Hamiltonian (see Ref. [@Lapshinskii] for details), $$\label{superh} H\equiv N{\cal H}=N\biggl(-\frac{p_{a}^2}{24a}-6\kappa a+ \frac{p_T}{a^{3\lambda}}\biggr),$$ where $N$ plays the role of a Lagrange multiplier whose variation yields the constraint equation $$\label{constr} {\cal H}\approx 0,$$ where $\approx$ means ‘weakly zero’ (this phase space function is constrained to be zero, but its Poisson bracket with other quantities is not). We have redefined $\tilde{a}=\sqrt{V/(16\pi l_p^2)}\; a$ in order for $\tilde{a}$ to be dimensionless, and $\tilde{N}=\sqrt{6}N$, where $V$ is the total comoving volume of the spatial sections. The tildes are omitted from now on. 
Considering now two decoupled fluids, one being radiation ($\lambda_r=1/3$), and the other dust matter ($\lambda_d=0$), the Hamiltonian reads: $$\label{hrm} H \equiv N{\cal H}=N\biggl(-\frac{p_{a}^2}{24a}-6\kappa a+ \frac{p_{T}}{a}+p_{\varphi}\biggr)$$ The classical Hamilton equations are: $$\label{aponto} \dot{a}=\left\{a,H\right\}=-\frac{N}{12a}p_{a} \Rightarrow p_{a}=-\frac{12a \dot{a}}{N} ,$$ $$\label{13} \dot{p_{a}}=\left\{p_{a},H\right\}= N\biggl(-\frac{p_{a}^2}{24 a^2}+6\kappa+ \frac{p_{T}}{a^2}\biggr), \label{pr14}$$ $$\label{conf} \dot{T}=\frac{N}{a} ,$$ $$\label{cosm} \dot{\varphi}=\left\{\varphi,H\right\}=N ,$$ $$\label{15} \dot{p}_{T}=\dot{p}_{\varphi}=0 \Rightarrow \mbox{$p_{T}$, $p_{\varphi}$ are constants}.$$ The superhamiltonian is constrained to vanish due to variation of the Hamiltonian with respect to the lapse function $N$, ${\cal H}\approx0$, $$\label{ham0} -\frac{p_{a}^2}{24a}-6\kappa a + \frac{p_{T}}{a} + p_{\varphi} = 0 .$$ The constraint (\[ham0\]) combined with Eqs. (\[aponto\]) and (\[15\]) yield the Friedmann equation $$\label{friedmann} \left(\frac{\dot{a}}{a}\right)^{2}=N^{2}\left[-\frac{\kappa}{a^{2}}+\frac{1}{6} \left(\frac{p_{T}}{a^{4}}+ \frac{p_{\varphi}}{a^{3}}\right)\right]$$ Note that the conjugate momenta $p_T$ and $p_{\varphi}$, classical constants of motion, can be identified with the total content of dust and radiation in the universe: $$p_{\varphi}=16\pi Ga^{3} \rho_{m} ,$$ $$p_{T}=16\pi Ga^{4} \rho_{r}.$$ Note also that Eq.(\[cosm\]) implies that $d\varphi=Ndt$, hence $\varphi$ is cosmic time, while Eq.(\[conf\]) yields $adT=Ndt$ so $T$ is conformal time. Consequently, choosing $N=1$ means taking coordinate time $t$ as cosmic time $\varphi$, while choosing $N=a$ imposes coordinate time to be conformal time $T$. Explicit analytic solutions of Eqs.(\[aponto\],\[pr14\],\[15\],\[friedmann\]) can be obtained only in the gauge $N=a$. 
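The equivalence of the two time choices can be illustrated numerically. The sketch below (with assumed values for $p_T$ and $p_{\varphi}$, and $\kappa=0$) integrates the Friedmann equation in cosmic time ($N=1$) and checks that it reproduces the flat-section conformal-time solution $a(\eta)=a_{eq}\left[2\eta/\eta_{eq}+(\eta/\eta_{eq})^{2}\right]$ once the two parameters are related by $dt = a\, d\eta$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch with assumed values of p_T and p_phi (kappa = 0): integrating
# the Friedmann equation in cosmic time (N = 1) must reproduce the
# conformal-time solution a(eta) = a_eq [2 eta/eta_eq + (eta/eta_eq)^2]
# once the time parameters are related by dt = a d(eta).
p_T, p_phi = 2.0, 1.0
a_eq = p_T / p_phi                       # rho_r = rho_m at a = a_eq
eta_eq = np.sqrt(24.0 * a_eq / p_phi)

a_conf = lambda eta: a_eq * (2*eta/eta_eq + (eta/eta_eq)**2)
t_of_eta = lambda eta: a_eq * (eta**2/eta_eq + eta**3/(3*eta_eq**2))  # int a d(eta)

def adot(t, y):
    # Friedmann equation with N = 1, kappa = 0
    return [np.sqrt((p_T / y[0]**2 + p_phi / y[0]) / 6.0)]

eta0, eta1 = 0.1, 5.0
sol = solve_ivp(adot, (t_of_eta(eta0), t_of_eta(eta1)), [a_conf(eta0)],
                rtol=1e-10, atol=1e-12)
rel_err = abs(sol.y[0, -1] - a_conf(eta1)) / a_conf(eta1)
```

The agreement confirms that the $N=1$ and $N=a$ gauges describe the same four-geometry with differently labelled time.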
In this gauge, besides the constraint (\[friedmann\]) with $N=a$, we obtain the simple second order equation, $$\label{2order} a''+\kappa a=\frac{p_{\varphi}}{12},$$ where a prime means differentiation with respect to conformal time, which we denote $\eta$ from now on. The solutions read: $$\label{cdr} a = \left\{ \begin{array}{ll} \left(\frac{2a_{eq}}{\eta_{eq}^{2}}\right)\left[1-\cos( \eta )+ \eta_{eq}\sin( \eta)\right] & \;\; ; \kappa=1 ,\\ \\ a_{eq}\left[2\frac{\eta }{\eta_{eq}}+\left(\frac{\eta }{\eta_{eq}}\right)^{2}\right] & \;\; ; \kappa=0 ,\\ \\ \left(\frac{2a_{eq}}{\eta_{eq}^{2}}\right)\left[\cosh( \eta )+ \eta_{eq}\sinh( \eta)-1\right] & \;\; ; \kappa=-1 . \end{array} \right.$$ The quantity $a_{eq}$ is defined to be the value of the scale factor at the equilibrium time where $\rho_{m}=\rho_{r}$, and $\eta_{eq}^{2}=3/\left(2\pi\, G \, \rho_{r}a^4\right)=24\, a_{eq}/\mid p_{\varphi}\mid $. As we will see in section \[quantum\], the presence of quantum effects can create exotic dust matter content. Hence, for comparison, we analyze a classical universe filled with exotic dust, which means $\rho_m<0$, i.e. $p_{\varphi}<0$. For simplicity, let us focus on the flat spatial case. In the presence of exotic dust, the behaviour of the scale factor is drastically different. From the Friedmann Eq.(\[friedmann\]), since $p_{\varphi}<0$, the radiation density must always be greater than or equal to the dust density, otherwise the Friedmann equation $$\left(\frac{\dot{a}}{a}\right)^{2}=\frac{1}{6}\left(\frac{p_{\eta}}{a^{2}}- \frac{\mid p_{\varphi}\mid }{a}\right)$$ has no solution. For small values of the scale factor, the radiation term dominates. As the scale factor grows, the exotic dust term begins to be comparable to the radiation term up to the critical point where both are equal and $\dot{a}=0$. From this point, the scale factor decreases until the universe recollapses. 
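Both statements can be confirmed with a short finite-difference sketch, using assumed parameter values (not from the paper): the three closed forms solve $a''+\kappa a=p_{\varphi}/12$, and in the flat exotic-dust case the quadratic solution of the same equation recollapses at a finite conformal time.

```python
import numpy as np

# (i) The closed forms above satisfy a'' + kappa a = p_phi/12.
# (ii) With exotic dust (p_phi < 0, flat case), the quadratic solution of
#      the same equation, fixed by a(0) = 0 and the Friedmann constraint
#      a'(0) = sqrt(p_eta/6), recollapses to a = 0 at the finite
#      conformal time 2*eta_max. All parameter values are assumed.
a_eq, eta_eq = 1.0, 2.0
p_phi = 24.0 * a_eq / eta_eq**2          # from eta_eq^2 = 24 a_eq / p_phi

def a_of(eta, kappa):
    c = 2.0 * a_eq / eta_eq**2
    if kappa == 1:
        return c * (1 - np.cos(eta) + eta_eq * np.sin(eta))
    if kappa == 0:
        return a_eq * (2*eta/eta_eq + (eta/eta_eq)**2)
    return c * (np.cosh(eta) + eta_eq * np.sinh(eta) - 1)

h, eta = 1e-4, np.linspace(0.5, 3.0, 101)
residual = {}
for kappa in (1, 0, -1):
    d2 = (a_of(eta + h, kappa) - 2*a_of(eta, kappa) + a_of(eta - h, kappa)) / h**2
    residual[kappa] = np.max(np.abs(d2 + kappa * a_of(eta, kappa) - p_phi / 12.0))

# exotic dust, kappa = 0: a'' = p_exo/12 < 0 at all times
p_eta, p_exo = 6.0, -2.0                 # assumed radiation / exotic dust content
a_x  = lambda eta: np.sqrt(p_eta/6)*eta + (p_exo/24)*eta**2
ap_x = lambda eta: np.sqrt(p_eta/6) + (p_exo/12)*eta
eta_max = -12.0 * np.sqrt(p_eta/6) / p_exo   # turning point: a'(eta_max) = 0
```

The turning point $\eta_{max}$ is where radiation and exotic dust contributions balance; the parabola then returns to $a=0$ at $2\eta_{max}$, the big crunch.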
Note that Eq.(\[cdr\]) for $\kappa=0$ and $p_{\varphi}<0$ implies that $a''<0$ at all times. Hence, contrary to the normal dust matter case where after the big bang the universe expands forever \[see Eq.(\[cdr\]) for $\kappa=0$\], in the exotic case the universe recollapses in a big crunch. The qualitative evolution of the scale factor is plotted in figure 1. The deceleration parameter in conformal time is given by $$\label{q} q=-\frac{{a''}a}{{a'}^{2}}+1 .$$ It diverges when the scale factor reaches its maximum value (${a'}=0$ and $a''<0$). FLRW Quantum Model with Radiation {#1f} ================================= In this section, we present a general result concerning the presence of singularities in the quantization of a FLRW model with radiation. The hamiltonian constraint in this case is $$\label{hconstr} {\cal H}=-\frac{p_{a}^2}{24a}-6\kappa a+\frac{p_\eta}{a} \approx 0,$$ and $\eta$ is conformal time, as discussed above. Using the Dirac quantization procedure, the hamiltonian constraint phase space function $\cal{H}$ becomes an operator which must annihilate the quantum wave function: $\hat{\cal{H}}\Psi=0$. One then obtains in natural units the Wheeler-De Witt equation for the minisuperspace FLRW metric with radiation: $$\label{sch27} i \frac{\partial }{\partial \eta}\Psi\left(a,\eta\right)= \left(-\frac{1}{24} \frac{\partial^{2}}{\partial a^{2}}+6\kappa a^{2}\right)\Psi \left(a,\eta\right).$$ Note that a particular factor ordering has been chosen and, because $p_\eta$ appears linearly in Eq.(\[hconstr\]), $\eta\rightarrow -\eta$ is chosen to be the time label in which the wave function evolves (the sign reversal was done in order to express this quantum equation in a familiar Schrödinger form [@Lapshinskii]). The scale factor is defined only in the half line $[0,\infty)$, which means that the superhamiltonian (\[hconstr\]) is not in general hermitian. 
Hence, if one requires unitary evolution, the Hilbert space is restricted to functions in $L^{2}(0,\infty)$ satisfying the condition $$\label{cond1} \frac{\partial\Psi}{\partial a}(0,\eta)=\alpha\Psi(0,\eta),$$ where $\alpha$ is a real parameter [@JMP371449]. We will now show that condition (\[cond1\]), together with the assumption that $\Psi(a,\eta)$ is analytic in $\eta$ at $a=0$, implies that general quantum solutions of Eq.(\[sch27\]), when interpreted using the BdB interpretation, yield quantum cosmological models without any singularity. We can rearrange Eq.(\[sch27\]) in order to isolate the second spatial derivative: $$\label{d2psi} \frac{\partial^{2}}{\partial a^{2}}\Psi \left(a,\eta\right)= 24\left[-i\frac{\partial }{\partial \eta}\Psi\left(a,\eta \right)+ 6 \, \kappa \, a^{2}\Psi \left(a,\eta\right)\right].$$ Using the BdB interpretation, the scale factor equation of motion is given by the gradient of the phase $S\left(a,\eta\right)$ of the wave function $$a'=\frac{1}{12}S_a\left(a,\eta\right)=-\frac{i}{24}\frac{(\Psi \Psi_a^{ \ast }-\Psi_a \Psi^{\ast })}{\Psi \Psi^{\ast }}\equiv f\left(a,\eta\right),$$ where the index $a$ means derivative with respect to $a$. Since $\alpha$ is real, the boundary condition (\[cond1\]) implies that the velocity function $f\left(0,\eta\right)$ vanishes at $a=0$. Hence, if there is a time $\eta_0$ where $a(\eta_0)=0$, then $a'(\eta_0)=0$. For $a''$ one has: $${a''}=\frac{\partial f}{\partial a}a'+ \frac{\partial f}{\partial \eta}=\frac{\partial f}{\partial a}f+ \frac{\partial f}{\partial \eta}.$$ This is also zero at $a=0$ unless $\partial f/\partial a$ diverges there. 
However, $$\begin{aligned} \frac{\partial f}{\partial a} & = & -\frac{i}{24}\frac{(\Psi \Psi_{aa}^{ \ast }- \Psi_{aa} \Psi^{\ast })}{\Psi \Psi^{\ast }}+\frac{i}{24} \frac{\biggl[\left(\Psi \Psi_a^{ \ast }\right)^{2}- \left(\Psi_a \Psi^{\ast }\right)^{2}\biggr]} {\left(\Psi \Psi^{\ast }\right)^{2}}\\ & = & \frac{\Psi \frac{\partial \Psi^{ \ast }}{\partial \eta}+ \frac{\partial \Psi}{\partial \eta}\Psi^{\ast }}{\Psi \Psi^{\ast }} +\frac{i}{24}\frac{\biggl[\left(\Psi \Psi_a^{ \ast }\right)^{2}- \left(\Psi_a \Psi^{\ast }\right)^{2}\biggr]}{\left(\Psi \Psi^{\ast }\right)^{2}},\end{aligned}$$ where Eq.(\[d2psi\]) was used in the second equality (the $\kappa$ terms cancel out). This is obviously finite if condition (\[cond1\]) and analyticity of $\Psi$ in $\eta$ are satisfied at $a=0$. The case when $\left(\Psi \Psi^{\ast }\right)^{2}=0$ does not need to be analyzed because bohmian trajectories cannot pass through nodal regions of the wave function. The same reasoning can be used for all higher derivatives $d^n a/d\eta^n$ at $a=0$ to show that they are all zero: one just has to use equation (\[d2psi\]) to substitute $\partial \Psi/\partial \eta$ for $\partial ^2\Psi/\partial a^2$ and condition (\[cond1\]) to substitute $\alpha\partial \Psi/\partial \eta$ for $\partial ^2\Psi/\partial a\partial \eta$ at $a=0$ whenever they appear, and then use analyticity of $\Psi$ in $\eta$ at $a=0$. With these results, if there is a time $\eta_0$ where $a(\eta_0)=0$, expanding $a(\eta)$ in Taylor series around $\eta_0$ shows that $a(\eta)\equiv 0$. This means that the only singular bohmian trajectory is the trivial one of not having a universe at all! All non trivial quantum solutions have to be non singular. 
Quantum behaviour of a FLRW Model With Dust and Radiation {#quantum} ========================================================= As we have seen in section \[drs\], the superhamiltonian constraint for a FLRW model with non interacting dust and radiation is given by Eq.(\[ham0\]): $$\label{ham27} {\cal{H}}\equiv -\frac{p_{a}^2}{24a}-6\kappa a + \frac{p_{\eta}}{a} + p_{\varphi}\approx 0.$$ We see that both $p_\eta$ and $p_{\varphi}$ appear linearly in $\cal{H}$, and their canonical coordinates $\eta$ and $\varphi$ are, respectively, conformal and cosmic time. As in the preceding section, from the Dirac quantization procedure one obtains the quantum equation $\hat{\cal{H}}\Psi=0$, which reads $$\label{hamo} \left(\frac{1}{24a}\frac{\partial^{2}}{\partial a^2}-6\kappa a -\frac{i}{a}\frac{\partial }{\partial \eta}- i\frac{\partial }{\partial \varphi}\right)\Psi(a,\varphi , \eta)=0,$$ where we have used the usual coordinate representation $\hat{p}=-i\partial/\partial q$. Either $\eta$ or $\varphi$ can be chosen as the time parameter on which $\Psi$ evolves. However, the classical solutions can be expressed explicitly only in conformal time $\eta$ \[see Eq.(\[cdr\])\]. Furthermore, cosmic time $\varphi$ depends on the constants characterizing each particular solution through $\varphi=\int d\eta a(\eta)$, and it is not the same parameter for all classical solutions (see Ref.[@Tipler] for details). Hence, we will take $\eta$ (in fact $-\eta$, for the reasons mentioned in the previous section) as the time parameter of the quantum theory[^1]. 
With this choice, and for a particular factor ordering, Eq.(\[hamo\]) can be written as: $$\label{hamo2} i\frac{\partial }{\partial \eta}\Psi(a,\varphi , \eta)= \left(-\frac{1}{24}\frac{\partial^{2}}{\partial a^2}+6\kappa a^2 +i a\frac{\partial }{\partial \varphi}\right)\Psi(a,\varphi, \eta).$$ Eigenstates of the total dust matter content {#the} ----------------------------------- In this subsection we only consider initial states $|\Psi(\eta_0)\rangle$ which are eigenstates of the total dust matter operator $\hat{p}_{\varphi}$. It follows that the states at a later time $\eta$, $|\Psi(\eta)\rangle$, will also be eigenstates of $\hat{p}_{\varphi}$ with the same eigenvalue because $[\hat{H},\hat{p}_{\varphi}]=0$. In other words, we consider that dust matter is not created nor destroyed. In such a way, we have $\hat{p}_{\varphi}|\Psi(\eta)\rangle=p_{\varphi}|\Psi(\eta)\rangle$ and the wave function in the $a$, $\varphi$ representation, $\langle a,\varphi|\Psi(\eta)\rangle=\Psi(a,\varphi,\eta)$, is given by $$\label{mom} \Psi(a,\varphi,\eta)=\Psi(a,\eta)e^{ip_{\varphi} \varphi}.$$ From the Schrödinger equation (\[hamo2\]), we have for $\Psi(a,\eta)$ $$\label{hamoeig} i\frac{\partial }{\partial \eta}\Psi(a,\eta)=\left(-\frac{1}{24} \frac{\partial^{2}}{\partial a^2}+ 6\kappa a^2-p_{\varphi}a\right)\Psi(a,\eta),$$ which is the Schrödinger equation for a particle of mass $m=12$ in a one-dimensional forced oscillator with frequency $w=\sqrt{\kappa}$ and constant force $p_{\varphi}$, which we write as $$\label{hamof} i\frac{\partial}{\partial \eta}\Psi(a,\eta)=\left(-\frac{1}{2m} \frac{\partial^{2}}{\partial a^2}+\frac{mw^2}{2}a^2-p_{\varphi}a\right)\Psi(a,\eta).$$ The scale factor is defined only in the half line $[0,\infty)$, which means that the hamiltonian (\[ham27\]) is not in general hermitian. 
Hence, if one requires unitary evolution, the Hilbert space is restricted to functions in $L^{2}(0,\infty;-\infty,\infty)$ satisfying the condition: $$\label{condition} \int_{-\infty}^{\infty}{d\varphi \left[\frac{\partial \Psi^{\ast }_{2} \left(a,\varphi , \eta\right)}{\partial a}\, \Psi_{1}\left(a,\varphi , \eta\right)\right]_{a=0}}=\int_{-\infty}^{\infty}{d\varphi \left[\frac{\partial \Psi_{1}\left(a,\varphi , \eta\right)}{\partial a}\, \Psi^{\ast }_{2}\left(a,\varphi , \eta\right)\right]_{a=0}}$$ for any $\Psi_{1}(a,\varphi , \eta), \Psi_2(a,\varphi , \eta) \in L^{2}(0,\infty;-\infty,\infty)$. In the special case considered in this section, this condition is reduced to $$\label{cond} \frac{\partial\Psi}{\partial a}(0,\eta)=\alpha\Psi(0,\eta),$$ where $\alpha$ is a real parameter [@JMP371449]. We will analyze the two extreme cases, $\alpha=0$ and $\alpha=\infty$, which are the simplest and the most commonly studied in the quantum cosmology literature [@bola2; @Lapshinskii; @alvarenga; @dewitt; @gotay; @JMP371449]. For the case $\alpha=0$ we have that $$\label{alfa0} \frac{\partial\Psi}{\partial a}(0,\eta)=0,$$ which is satisfied for even functions of $a$. For the case $\alpha=\infty$, we have $$\Psi(0,\eta)=0 ,$$ which is satisfied for odd functions of $a$. Both of them address the boundary conditions of the wave packet near the singularity at $a=0$. In order to develop the BdB interpretation, we substitute the wave function in polar form, $\Psi=Ae^{iS}$, into the Schrödinger equation (\[hamof\]), obtaining for the real part $$\label{equacaoH-J1} \frac{\partial S}{\partial \eta}+\frac{1}{2m}\left(\frac{\partial S}{\partial a}\right)^{2}-a\, p_{\varphi}+ \frac{m\,w^{2}}{2}\,a^{2}+Q=0,$$ where $$Q\equiv -\frac{1}{2m\,{A}}\frac{\partial^{2}{A}}{\partial a^{2}}$$ is the quantum potential. 
The Bohm guidance equation reads $$\label{bgr} ma'=\frac{\partial S}{\partial a}.$$ A solution $\Psi(a,\eta)$ of Eq.(\[hamof\]) can be obtained from an initial wave function $\Psi_{0}(a)$ using the propagator of a forced harmonic oscillator. Let us do it for the two boundary conditions just presented. ### **The case of boundary condition $\alpha=0$** {#case1} We denote the propagator $K^{\alpha=0}(2,1)\equiv K^{\alpha=0}(\eta_2,a_2;\eta_1,a_1)$, where $1$ stands for the initial time and initial scale factor $\eta_1, a_1$ respectively, and $2$ stands for their final values. The propagator when the Hilbert space is restricted to $a>0$ can be obtained from the usual one (i.e., with coordinate $-\infty<a<\infty$), which is associated with a particle in a forced oscillator, $K(2,1)\equiv K(\eta_2,a_2;\eta_1,a_1)$, by means of $$\label{Kpar} K^{\alpha=0}(2,1)=K(\eta_2,a_2;\eta_1,a_1)+K(\eta_2,a_2;\eta_1,-a_1)$$ This symmetry condition is necessary to consistently eliminate the contribution of the negative values of the scale factor [@IJMPA53029]. The usual propagator associated with a particle in a forced oscillator is [@feynman]: $$K(2,1)=\sqrt{\frac{mw}{2 \pi i \sin(w\eta)}} \exp(i \, S_{cl})$$ where $\eta\equiv \eta_2 - \eta_1$ . The classical action $S_{cl}$ is given by $$\begin{aligned} S_{cl}&=&\frac{mw}{2\sin(w\eta)} \biggl\{\cos(w\eta)(a_{2}^2+a_{1}^2)-2a_{2}a_{1}+(a_{2}+ a_{1})\frac{2p_{\varphi}}{m w^2}[1-\cos(w\eta)]- \nonumber \\ &&[1-\cos(w\eta)]\frac{2 p_{\varphi}^2}{m^2 w^4}+\frac{p_{\varphi}^2}{m^2 w^4}\sin(w\eta) w\eta \biggr\} .\end{aligned}$$ We assume that, for $\eta_1=0$, the initial wave function is given by $$\label{initwave} \Psi_0(a)=\biggl(\frac{8\sigma}{\pi}\biggr)^{1/4}\exp(-\sigma a^2),$$ where $\sigma>0$. 
The wave function in a future time $\eta_2$ is $$\Psi(a_2, \eta_2)=\int_{0}^{\infty} K^{\alpha=0}(2,1)\Psi_0(a_{1})da_{1}= \int_{-\infty}^{\infty} K(2,1)\Psi_0(a_{1})da_{1},$$ where the even character of $\Psi(a,0)$ has been taken into account to extend the integral. Integrating and renaming $\eta\equiv\eta_2,\,a\equiv a_2$ we have $$\begin{aligned} \label{psi1t} \Psi^{\alpha=0}(a,\eta)=\biggl(\frac{8 \sigma}{\pi}\biggr)^{1/4} \sqrt{\frac{mw}{i\cos(w\eta) [2 \sigma \tan(w\eta)-imw]}} \exp\biggr\{\frac{imw}{2 \tan(w\eta)} \biggl[a^2 && + \nonumber \\ i\frac{mw}{\cos^2(w\eta)[2 \sigma \tan(w\eta)-imw]} \biggl(-a+\frac{p_{\varphi}}{m w^2}[1-\cos(w\eta)]\biggr)^2+\frac{2ap_{\varphi}}{m} \frac{[1-\cos(w\eta)]}{w^2 \cos(w\eta)}+\nonumber \\ \frac{2 p_{\varphi}^2}{m^2} \biggl(\frac{[\cos(w\eta)-1]}{w^4 \cos(w\eta)} + \eta\frac{\tan(w\eta)}{w^3}\biggr)\biggr]\biggr\} .\end{aligned}$$ We consider the case $\kappa=0$, which is obtained by taking the limit of the wave function given by Eq. (\[psi1t\]) for $w\rightarrow 0$. We compute its phase $S$ from $\Psi\equiv {\it A} e^{iS}$ and calculate the derivative $\partial S/\partial a$. In this way we have, for the Bohm guidance equation Eq.(\[bgr\]), $$a'-\frac{4\sigma^2 \eta}{4\sigma^2 \eta^2+m^2}a= \frac{1}{m}\frac{(2\sigma^2 \eta^2+m^2)}{(4\sigma^2 \eta^2+m^2)}p_{\varphi} \eta .$$ Comparing with the radiation case studied in [@bola2], we see that a term proportional to $p_{\varphi}$ appears here on the RHS of the Bohm equation. The general solution is: $$\label{gk0} a(\eta)=C_0 \sqrt{4\sigma^2 \eta^2+m^2}+\frac{p_{\varphi}}{2m}\eta^2 ,$$ where $C_0$ is a positive integration constant. We can see that, contrary to the classical solution (\[cdr\]), there is no singularity at $\eta=0$. The quantum effects avoid it. Furthermore, for long times $\eta\gg m/2\sigma$, Eq. (\[gk0\]) reproduces the classical behaviour (\[cdr\]) for the scale factor. 
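The properties claimed for Eq. (\[gk0\]) — that it solves the Bohm guidance equation above, never reaches $a=0$, and recovers the classical acceleration $a''\to p_{\varphi}/12$ at late times — can be checked numerically. The sketch below uses assumed values of $\sigma$, $p_{\varphi}$ and $C_0$.

```python
import numpy as np

# Numerical check of Eq. (gk0) with assumed parameters: the trajectory
# solves the Bohm guidance equation, never reaches a = 0 (its minimum is
# a(0) = C0*m), and its late-time acceleration tends to p_phi/12.
m, sigma, p_phi, C0 = 12.0, 1.5, 2.0, 0.7
a = lambda eta: C0*np.sqrt(4*sigma**2*eta**2 + m**2) + p_phi*eta**2/(2*m)

h, eta = 1e-5, np.linspace(0.0, 50.0, 400)
ap = (a(eta + h) - a(eta - h)) / (2*h)           # central-difference a'
lhs = ap - 4*sigma**2*eta / (4*sigma**2*eta**2 + m**2) * a(eta)
rhs = (2*sigma**2*eta**2 + m**2) / (m*(4*sigma**2*eta**2 + m**2)) * p_phi * eta
residual = np.max(np.abs(lhs - rhs))

a_min = a(eta).min()                              # bounce value at eta = 0
h2 = 1e-3
app_late = (a(50 + h2) - 2*a(50) + a(50 - h2)) / h2**2   # ~ p_phi/m = p_phi/12
```

The late-time second derivative approaching $p_{\varphi}/12$ is exactly the classical equation $a''=p_{\varphi}/12$ for $\kappa=0$, confirming the quoted classical limit.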
For the case in which the evolution starts from a shifted gaussian wave function $$\Psi_0(a)=\biggl(\frac{8\sigma}{\pi}\biggr)^{1/4}\exp\left[-\sigma (a-a_0)^2\right],$$ the Bohm guidance relation contains an additional term yielding the general solution $$a(\eta)=C_0 \sqrt{4\sigma^2 \eta^2+m^2}+\frac{p_{\varphi}}{2m}\eta^2+\frac{a_0}{2},$$ which has exactly the same behaviour, apart from the shift of the minimal value of the scale factor by the $a_0/2$ term. Setting $w=\sqrt{\kappa}=1$ in the wave function given by Eq. (\[psi1t\]) and computing its phase $S$, we obtain for the bohmian trajectories $$\label{gk1} a(\eta)=C_0 \sqrt{4\sigma^2\sin ^2(\eta)+m^2\cos ^2(\eta)}+ \frac{p_{\varphi}}{2m}[1-\cos(\eta)],$$ where $C_0$ is a positive integration constant. This is a non singular cyclic universe (see figure 2), which presents classical behaviour for $\eta$ such that $\mid \tan (\eta)\mid\gg m/2$ \[see Eq. (\[cdr\])\]. Quantum effects avoid the classical big bang and big crunch. Setting now $w=\sqrt{\kappa}=i$ in the wave function (\[psi1t\]) yields the bohmian trajectories: $$\label{gk-1} a(\eta)=C_0 \sqrt{4\sigma^2\sinh ^2(\eta)+m^2\cosh ^2(\eta)}+ \frac{p_{\varphi}}{2m}[\cosh(\eta)-1].$$ Again, $C_0$ is a positive integration constant. This is a non singular ever expanding universe which presents classical behaviour for $\eta$ such that $\mid \tanh (\eta)\mid\gg m/2$ \[see Eq. (\[cdr\])\]. Quantum effects avoid the classical big bang. As in the $\kappa =0$ case, a shift in the center of the initial gaussian will not modify these solutions qualitatively. 
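A similar sketch (again with assumed parameters) checks that the $\kappa=1$ trajectory of Eq. (\[gk1\]) is indeed a bounded, $2\pi$-periodic cyclic universe with a strictly positive minimum size.

```python
import numpy as np

# Check (assumed parameters) that the kappa = 1 trajectory of Eq. (gk1)
# is a 2*pi-periodic cyclic universe bounded away from a = 0: the bounce
# value is governed by C0*min(2*sigma, m), so quantum effects replace
# the classical big bang / big crunch by a finite minimum size.
m, sigma, p_phi, C0 = 12.0, 1.5, 2.0, 0.7
a = lambda eta: (C0*np.sqrt(4*sigma**2*np.sin(eta)**2 + m**2*np.cos(eta)**2)
                 + p_phi/(2*m)*(1 - np.cos(eta)))

eta = np.linspace(0.0, 2*np.pi, 2001)
a_min, a_max = a(eta).min(), a(eta).max()
period_gap = np.max(np.abs(a(eta) - a(eta + 2*np.pi)))
```

With these parameters the scale factor oscillates between roughly $2\sigma C_0$ and $C_0 m + p_{\varphi}/m$, never touching the classical singularities.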
For the boundary condition $\alpha =\infty$, or $\Psi(0,\eta)=0$, the propagator $K^{\alpha=\infty}(2,1)$ can be obtained from the usual (i.e., with coordinate $-\infty<a<\infty$) propagator associated with a particle in a forced oscillator, $K(2,1)$, by means of $$\label{Kimpar} K^{\alpha=\infty}(2,1)=K(\eta_2,a_2;\eta_1,a_1)-K(\eta_2,a_2;\eta_1,-a_1) .$$ In order to satisfy the condition $\Psi(0,\eta)=0$, we take as the initial wave function a wave packet given by $$\Psi_0(a)=\biggl(\frac{128 \sigma^3}{\pi}\biggr)^{1/4}a \exp(-\sigma a^2) ,$$ where $\sigma>0$. Following a procedure similar to that of the case $\alpha=0$, we calculate the wave function by propagating the initial wave function as $$\Psi(a_{2}, \eta_{2})=\int_{0}^{\infty} K^{\alpha=\infty}(2,1)\Psi_0(a_{1})da_{1}= \int_{-\infty}^{\infty} K(2,1)\Psi_0(a_1)da_{1} ,$$ where the odd character of $\Psi$ has been used in order to extend the integral. Integrating this expression and renaming $a\equiv a_{2}$ and $\eta \equiv \eta_{2}$ with $\eta_1=0$, we have $$\Psi^{\alpha=\infty}(a,\eta)=\biggl(\frac{-C}{2 D}\biggr)\Psi^{\alpha=0}(a,\eta)$$ where $$C\equiv \frac{imw}{\sin(w\eta)}\biggl[-a+\frac{p_{\varphi}}{mw^2}(1-\cos(w\eta))\biggr]$$ and $$D\equiv \frac{imw}{2\tan(w\eta)}-\sigma$$ The phase of $\Psi^{\alpha=\infty}(a,\eta)$ can be expressed as the sum: $${\rm phase}[\Psi^{\alpha=\infty}(a,\eta)]= {\rm phase}\biggl(\frac{-C}{2 D}\biggr) + {\rm phase} [\Psi^{\alpha=0}(a,\eta)],$$ and it is easy to see that the phase of $(-C/2 D)$ is independent of $a$. Then, $[\partial {\rm phase}(\Psi^{\alpha=\infty}(a,\eta))]/\partial a= [\partial{\rm phase} [\Psi^{\alpha=0}(a,\eta)]]/\partial a$, and the Bohm guidance relations are the same as in the previous cases. Therefore, the solutions are the same. The quantum cosmological models obtained in this subsection have the nice properties of being non singular and presenting classical behaviour for large $a$. 
However, they suffer from a fundamental problem: the wave function (\[psi1t\]) from which they are obtained does not have a unitary evolution. The reason is that propagators constructed from Eqs. (\[Kpar\]) and (\[Kimpar\]) do not in general preserve the hermiticity condition (\[cond\]) imposed on the wave functions: it depends on the classical potential. In Ref.[@IJMPA53029], the potentials which allow propagators on the half line ($a>0$) to preserve unitary evolution are obtained. The potentials of the previous section are among them, but those of the present one are not. Hence, even though the initial wave function Eq.(\[initwave\]) satisfies the hermiticity condition, the wave function (\[psi1t\]) does not. Let us then explore the more general case of initial superpositions of the total dust mass operator eigenstates. Analysis of wave packets given by superpositions of total dust mass eigenstates ------------------------------------------------------------------------------- In this subsection we consider the case of a general solution of Eq.(\[hamo2\]) which is not necessarily one of the eigenstates of $\hat{p}_{\varphi}$, the total dust mass operator. Following the BdB interpretation of quantum mechanics, we substitute in Eq.(\[hamo2\]) the wave function in polar form: $\Psi = A\left(a,\varphi,\eta \right) \exp\left\{{i}S\left(a,\varphi,\eta \right)\right\}$. The dynamical equation splits into two coupled real equations for the two real functions $S$ and $A$ (recall that $w=\sqrt{\kappa}$ and $m=12$). 
$$\label{equacaoH-J} \frac{\partial S}{\partial \eta}+\frac{1}{2m}\left(\frac{\partial S}{\partial a}\right)^{2}-a\frac{\partial S}{\partial \varphi}+ \frac{m\, w^{2}}{2} a^{2}+ Q = 0 ,$$ $$\label{equacaocontinuidade} \frac{\partial A^{2}}{\partial \eta}+\frac{\partial}{\partial \varphi}\left(a\, A^{2}\right)+\frac{\partial}{\partial a}\left( A^{2}\frac{1}{m}\frac{\partial S}{\partial a}\right)=0 ,$$ where $$Q\equiv -\frac{1}{2m\, A}\frac{\partial^{2}{A}}{\partial a^{2}} .$$ Equation (\[equacaoH-J\]) is the modified Hamilton-Jacobi equation, where $Q\left(a,\varphi,\eta\right)$ is the quantum potential responsible for all the peculiar non-classical behaviours. When the quantum potential is zero, the equation is exactly the classical Hamilton-Jacobi equation. The momenta are given by Bohm’s guidance equations $$\begin{aligned} p_{a}&\equiv & \frac{\partial S\left(a,\varphi,\eta\right)}{\partial a},\\ p_{\varphi}&\equiv& \frac{\partial S\left(a,\varphi,\eta\right)}{\partial \varphi} . \label{bphi}\end{aligned}$$ Note also that $$\label{rad} p_\eta=\frac{\partial S\left(a,\varphi,\eta\right)}{\partial \eta}$$ is the total ‘energy’ of the system, which is interpreted, from its classical meaning, as the total amount of radiation in the universe model. In the causal interpretation, equation (\[equacaocontinuidade\]) is a continuity equation where ${A}^{2}$ is a probability density. The generalised velocities can easily be identified as $$\begin{aligned} {a'} &\equiv & \frac{1}{m}\frac{\partial S\left(a,\varphi,\eta\right)}{\partial a} , \label{ba}\\ {\varphi'}&\equiv & a .\label{velocphi}\end{aligned}$$ Consider now the classical limit ($Q=0$). Then Hamilton’s principal function is just $S=W\left(a\right)-E\eta+p_{\varphi}\varphi$, where $E$ and $p_{\varphi}$ are constants. 
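For completeness, substituting this separable form into Eq.(\[equacaoH-J\]) with $Q=0$ determines $W(a)$ by quadrature, $$\frac{1}{2m}\left(\frac{dW}{da}\right)^{2}=E+a\,p_{\varphi}-\frac{m\, w^{2}}{2}a^{2} \quad\Rightarrow\quad W(a)=\pm\int^{a} da\,\sqrt{2m\left(E+a\,p_{\varphi}-\frac{m\, w^{2}}{2}a^{2}\right)} ,$$ and the classical trajectory then follows from the guidance relations (\[ba\]) and (\[velocphi\]).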
Since $p_{\varphi}$ is proportional to the total amount of dust matter in the universe, and $E$ to the total amount of radiation, there is no creation or annihilation of either dust matter or radiation. However, in the presence of a quantum potential, this solution is no longer valid, opening the possibility of non-conservation of matter and radiation due to quantum effects. ### Formal Solutions We now turn to the problem of solving Schrödinger’s equation (\[hamo2\]). For the case of flat spatial section ($\kappa=0$), equation (\[hamo2\]) simplifies to $$i\frac{\partial \Psi \left(a,\varphi ,\eta\right)}{\partial \eta}= -\frac{1}{2m}\frac{\partial^{2} \Psi \left(a,\varphi ,\eta\right)}{\partial a^{2}} +i a\frac{\partial \Psi \left(a,\varphi ,\eta\right)}{\partial \varphi} . \label{eqkzero}$$ To solve this equation we make the ansatz $$\Psi \left(a,\varphi ,\eta\right)=\chi\left(a\right) \exp \left(-\frac{i}{2m}\beta \, \eta\right) \exp\left(\frac{i}{2m}\upsilon \, \varphi\right) ,$$ where $\chi\left(a\right)$ must satisfy the differential equation[^2] $$\frac{\partial^{2} \chi\left(a\right)}{\partial a^{2}}+\upsilon a \chi\left(a\right)+\beta\chi\left(a\right)=0.$$ This is essentially an Airy equation with solution given by $$\chi\left(a\right)=\sqrt{a+\frac{\beta}{\upsilon}}\left\{A\,Z_{\frac{1}{3}} \left[\frac{2\sqrt{\upsilon}}{3}\left(a+\frac{\beta}{\upsilon} \right)^{\frac{3}{2}}\right]+ B\,Z_{-\frac{1}{3}} \left[\frac{2\sqrt{\upsilon}}{3} \left(a+\frac{\beta}{\upsilon}\right)^{\frac{3}{2}}\right]\right\} .$$ Here $Z_{\frac{1}{3}}$ is the Bessel function of the first kind of order $\frac{1}{3}$, and $A$ and $B$ can be any functions of $\upsilon$ and $\beta$. 
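As an illustrative numerical check (a sketch, not part of the original derivation), one can verify that the Bessel-function combination above solves the differential equation for $\chi(a)$; the parameter values below are arbitrary assumptions.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_nu

def chi(a, beta, upsilon, A=1.0, B=0.0):
    """Solution of chi'' + upsilon*a*chi + beta*chi = 0 built from
    J_{+1/3} and J_{-1/3}, as in the text."""
    z = a + beta / upsilon
    arg = (2.0 * np.sqrt(upsilon) / 3.0) * z ** 1.5
    return np.sqrt(z) * (A * jv(1.0 / 3.0, arg) + B * jv(-1.0 / 3.0, arg))

# Check the ODE with a centered second difference (illustrative parameters).
beta, upsilon = 0.7, 1.3
a = np.linspace(0.5, 5.0, 4001)
h = a[1] - a[0]
c = chi(a, beta, upsilon)
resid = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / h**2 + (upsilon * a[1:-1] + beta) * c[1:-1]
print(np.max(np.abs(resid)))  # small: only the discretization error remains
```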
The general solution is a superposition given by $$\begin{aligned} \Psi \left(a,\varphi,\eta\right)= \int{d\beta\, d\upsilon \exp\left\{-\frac{i}{2m}\beta\,\eta\right\}\exp\left\{\frac{i}{2m}\upsilon\, \varphi\right\}\sqrt{a+\frac{\beta}{\upsilon}}} \times \\ \times \left\{ A\left(\beta,\upsilon\right) \, Z_{\frac{1}{3}} \left[ \frac{2\sqrt{\upsilon}}{3} \left(a+\frac{\beta}{\upsilon}\right)^{\frac{3}{2}}\right] +B\left(\beta,\upsilon\right) \,Z_{-\frac{1}{3}}\left[\frac{2\sqrt{\upsilon}}{3} \left(a+\frac{\beta}{\upsilon}\right)^{\frac{3}{2}}\right]\right\}\\\end{aligned}$$ In the positive curvature case ($\kappa=1$), Eq.(\[hamo2\]) reads $$\label{wheeler-dewittk1} i\frac{\partial \Psi \left(a,\varphi ,\eta\right)}{\partial \eta}= -\frac{1}{2m}\frac{\partial^{2} \Psi \left(a,\varphi ,\eta\right)}{\partial a^{2}} +\frac{m}{2}a^{2} \Psi \left(a,\varphi,\eta\right)+i a\frac{\partial \Psi \left(a,\varphi ,\eta\right)}{\partial\varphi} .$$ There is a canonical transformation which simplifies the problem. Let us define new variables given by $$\begin{aligned} \xi \equiv \sqrt{m}\, a - \frac{p_{\varphi} }{\sqrt{m}} &; & \sigma \equiv -\sqrt{m}\, \varphi + \frac{p_{a} }{\sqrt{m w}} ,\\ p_{\xi} \equiv \frac{p_{a}}{\sqrt{m}} &; & p_{\sigma} \equiv -\frac{p_{\varphi}}{\sqrt{m}} .\end{aligned}$$ Using these new variables, the Hamiltonian decouples into two parts, one describing a harmonic oscillator and the other a free particle: $$\label{schk1} \hat{H}=\underbrace{\frac{1}{2}\left(\hat{p}_{\xi}^{2}+ \hat{\xi}^{2}\right)}_{\mbox{\it harmonic oscillator}}-\underbrace{\frac{1}{2}\hat{p}_{\sigma}^{2}}_{\mbox{ \it free particle}} .$$ Decomposing the wave function as $$\Psi\left(\xi,\sigma,\eta \right)= \chi\left(\xi\right)\exp\left\{-i\left(\epsilon\,\eta+\sqrt{2\,k}\,\sigma\right)\right\},$$ we immediately recognize that $\chi\left(\xi\right)= \exp\left\{-\frac{\xi^{2}}{2}\right\}h_{n}\left(\xi\right)$, where $h_{n}$ are the Hermite polynomials of degree $n$. 
Just as for the harmonic oscillator, the index $\epsilon$ is constrained to take the values $$\label{landau} \epsilon_{n}=k+\left(n+\frac{1}{2}\right) ,$$ where $k$ can take any real positive value while $n$ is a non-negative integer. Eq. (\[landau\]) determines a set of [*Landau levels*]{} for the cosmological model [@landaulevels]. The most general solution is a superposition given by $$\begin{aligned} \label{solucaogeral} \Psi \left( \xi ,\sigma ,\eta \right) &=& \sum_{n=0}^{\infty}{ \int{dk\,\chi_{n}\left(\xi\right)\left[D_{n}\left(k\right)\exp\left\{i\sigma\, \sqrt{2\,k}\right\}+ \right. }} \nonumber \\ && \left. G_{n}\left(k\right)\exp\left\{-i\sigma\,\sqrt{2\,k}\right\}\right]\times \exp\left\{-i\, \epsilon_{n}\, \eta\right\}.\end{aligned}$$ The quantities $D_{n}\left(k\right)$ and $G_{n}\left(k\right)$ are arbitrary coefficients that can depend on the parameter $k$. Recall that we have performed a canonical transformation that mixes coordinates and momenta, and these are not the proper variables in which to apply the causal interpretation. Instead, it is imperative to apply the inverse transformation back to the coordinate basis before using the guidance relations. This is a necessary requirement to maintain the consistency of the causal interpretation of quantum mechanics [@CQG141993]-[@PR89319B].\ For the negative curvature spatial section ($\kappa=-1$), the general solutions are hypergeometric functions whose asymptotic behaviours are rather complicated to study in order to obtain reasonable boundary conditions. Hence, we will not treat this case here. We proceed to the analysis of an interesting particular solution. ### Transition from exotic dust to dust in the flat case The quantum states of the matter and radiation FLRW universe studied in section \[0\] are eigenstates of the total dust matter operator ${\hat{p}}_{\varphi}$. The total wave function is given by $\Psi(a,\varphi,\eta)= \Psi(a,\eta) \exp(i\varphi p_{\varphi})$ where $\Psi(a,\eta)$ is given by Eq.(\[psi1t\]). 
Taking the limit $w \rightarrow 0$ in that equation, we obtain the wave function $\Psi(a,\eta)$ for the case of flat spatial section, $\kappa=0$, which, after renaming the eigenvalues of total mass by $\upsilon\equiv p_{\varphi}$, is given by $$\begin{aligned} \Psi_{\upsilon}\left(a,\eta\right)= \left(\frac{8\sigma m^{2}}{\pi\,\mu} \right)^{\frac{1}{4}}\exp\left\{ -\frac{m^{2}\sigma}{\mu}\left(a-\frac{\upsilon\,\eta^{2}}{2m}\right)^{2}- i\frac{\upsilon^{2}\eta^{3}}{6m}-i\frac{\theta}{2}+\right.& \nonumber \\ \left.+i\frac{m}{2 \eta}\left[ \left(a+\frac{\upsilon\,\eta^{2}}{2m}\right)^{2}-\frac{m^{2}}{\mu} \left(a-\frac{\upsilon\,\eta^{2}}{2m}\right)^{2}\right] \right\} & ,\end{aligned}$$ where $$\begin{aligned} &&\mu= 4\sigma^{2}\eta^{2}+m^{2} ,\\ &&\theta= \arctan \left(\frac{2\sigma \eta}{m}\right) .\end{aligned}$$ Now we consider a more general situation than in section \[0\]. We suppose an initial state at $\eta=0$ which is given by a Gaussian superposition of eigenstates of total matter $$\label{superposicaoguassiana0} \Psi \left(a,\varphi,0\right)=\int_{-\infty}^{\infty}{d \upsilon\, e^{-\gamma\left(\upsilon-\upsilon_{0}\right)^{2}}\Psi_{\upsilon}\left(a,0\right)\, \exp\{-i\,\varphi\,\upsilon\}} .$$ Then, the state at time $\eta$ is given by $$\label{superposicaoguassiana} \Psi \left(a,\varphi,\eta \right)=\int^{\infty}_{-\infty}{d \upsilon \, e^{-\gamma\left(\upsilon-\upsilon_{0}\right)^{2}}\Psi_{\upsilon}\left(a,\eta \right)\, \exp\{-i\,\varphi\,\upsilon\}} .$$ In this way, we have a square-integrable wave function. We find $$\begin{aligned} \Psi \left(a,\varphi,\eta\right)= \left(\frac{8\sigma \pi m^{2}}{\mu\,\nu}\right)^{\frac{1}{4}}\exp\left\{\left(\frac{\Re\left(F\right)}{4\nu}-\frac{\sigma m^{2}}{\mu}\right)a^{2}+\frac{\Re\left(G\right)}{4\nu}\,a\,\varphi +\frac{\Re\left(J\right)}{4\nu}\varphi^{2}+\frac{\Re\left(L\right)}{4\nu}\,a+\right. & \\ \left. 
+\frac{\Re\left(M\right)}{4\nu}\varphi +\frac{\Re\left(P\right)}{4\nu}\; + i\left[ \, \left(\frac{\Im\left(F\right)}{4\nu}+\frac{m}{2\mu \eta}\left(\mu-m^{2}\right) \right)a^{2}+\frac{\Im\left(G\right)}{4\nu}a\, \varphi +\right. \right. & \\ \left.\left. +\frac{\Im\left(J\right)}{4\nu}\varphi^{2}+\frac{\Im\left(L\right)}{4\nu}a +\frac{\Im\left(M\right)}{4\nu}\varphi+\frac{\Im\left(P\right)}{4\nu} \right]- i\frac{\theta+\tau}{2} \right\} &\end{aligned}$$ where we defined $$\begin{aligned} &&\nu= \left(\gamma+\frac{\sigma \eta^{4}}{4\mu}\right)^{2}+\frac{\eta^{6}}{\left(24m \mu \right)^{2}}\left(\mu +3m^{2}\right)^{2}\\ &&\tau= \arctan \left[\frac{\eta^{3}(\mu+3m^{2})}{24m(\gamma \mu+\sigma \eta^4)}\right]\\ &&F= \left[\frac{m\sigma \eta^{2}}{\mu}+i\frac{\eta}{2\mu }\left(\mu+m^{2}\right) \right]^{2}\left[\gamma+\frac{\sigma \eta^{4}}{4\mu}-i\frac{\eta^{3}}{24m\mu }\left(\mu+3m^{2}\right)\right]\\ &&G= -2\, i\left[\frac{m\sigma \eta^{2}}{\mu}+i\frac{\eta}{2\mu } \left(\mu+m^{2}\right) \right]\left[\gamma+\frac{\sigma \eta^{4}}{4\mu}-i\frac{\eta^{3}}{24m\mu }\left(\mu+3m^{2}\right)\right]\\ &&J= -\left[\gamma+\frac{\sigma \eta^{4}}{4\mu}-i\frac{\eta^{3}}{24m\mu }\left(\mu+3m^{2}\right)\right]\\ &&L= -2\,i\,\gamma\upsilon_{0}\,G \\ &&M= 4\,i\,\gamma\upsilon_{0} \,J \\ &&P= -4\gamma^{2}\upsilon_{0}^{2} \,J\end{aligned}$$ and $\Re$ and $\Im$ stand for the real and imaginary part, respectively. If one calculates the squared norm of the wave function, one obtains $$\int_{0}^{\infty}{da}\int_{-\infty}^{\infty}{d\varphi}\left\|\Psi\right\|^{2}= \sqrt{\frac{8\pi^{3}}{\gamma}}\left[1+\frac{1}{\sqrt{\pi}} {\rm{erf}} \left(\frac{\upsilon_{0} \eta^{2}}{2m}\right)\right],$$ where ${\rm{erf}}(x)$ is the error function. The only time dependence can be eliminated by choosing the Gaussian to be centered at $\upsilon_{0}=0$. With this choice we guarantee unitary evolution of the total wave function. 
From equations (\[bphi\])-(\[velocphi\]), the trajectories can be computed by solving the following system of equations $$\begin{aligned} & &{a'}= \frac{2}{m} \left[\frac{\Im\left(F\right)}{4\nu}+\frac{m}{2\mu \eta}\left(\mu-m^{2}\right)\right]\,a\,+ \frac{\Im\left(G\right)}{4m\nu}\, \varphi \label{aevol}\\ & &{\varphi'}= a \\ & &p_{\varphi} = \left[ 2\frac{\Im\left(J\right)}{4\nu}\, \varphi +\frac{\Im\left(G\right)}{4\nu}\,a\right]\end{aligned}$$ Note that $p_{\varphi}$ is no longer constant. We integrated these equations numerically with the renormalisation condition $a\left(0\right)=1$. The quantum potential $Q\equiv -\frac{1}{2m\, A}\frac{\partial^{2} A}{\partial a^{2}}$ is non-zero only close to the origin, as shown in figure 3. Hence, we expect quantum effects to be relevant only in this region. Far from the origin, the scale factor must behave classically. The behaviour of $p_{\varphi}$ is plotted in figure 4. From this plot we can see that far from the origin $p_{\varphi}$ is constant. This is in accordance with classical behaviour, as the quantum potential is zero in this region. The surprising feature is that in the far past the universe was filled with a classical exotic dust ($p_{\varphi} < 0$). From Eq.(\[rad\]), one can also compute the amount of radiation. Figure 5 shows the result. Again, far from the origin, radiation is conserved, while near the origin, due to quantum effects, it is not. For the evolution of the scale factor, numerical integration of equation (\[aevol\]) yields the plot of figure 6. In the far positive region, the scale factor behaves classically, as expected, and matter is conserved. On the other hand, in the far negative region the scale factor also behaves classically but with a universe filled with exotic dust, and here again matter is conserved (compare this region with figure 1). Both regions have a consistent classical behaviour. 
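The structure of this numerical integration can be sketched as follows. This is a schematic illustration, assuming generic user-supplied coefficient functions in place of the full expressions involving $\Im(F)$, $\Im(G)$ and $\nu$ given above; the constant-coefficient check only validates the integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def trajectories(coef_a, coef_phi, a0=1.0, phi0=0.0, eta_span=(0.0, 2.0)):
    """Integrate the guidance system
         a'   = coef_a(eta) * a + coef_phi(eta) * phi
         phi' = a
    with the 'renormalisation' condition a(eta0) = a0.  The coefficient
    functions stand in for the full time-dependent expressions of the text."""
    def rhs(eta, y):
        a, phi = y
        return [coef_a(eta) * a + coef_phi(eta) * phi, a]
    return solve_ivp(rhs, eta_span, [a0, phi0], rtol=1e-10, atol=1e-12)

# Validation with constant coefficients, where the exact solution is a
# matrix exponential acting on the initial condition.
M = np.array([[0.3, -0.1], [1.0, 0.0]])
sol = trajectories(lambda e: 0.3, lambda e: -0.1)
exact = expm(2.0 * M) @ np.array([1.0, 0.0])
print(np.allclose(sol.y[:, -1], exact))  # True
```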
Hence, the universe begins classically from a big bang filled with exotic dust and conventional radiation. It evolves until it reaches a configuration where quantum effects avert the classical big crunch while transforming exotic dust into normal dust. From this point on, the universe expands classically, filled with conventional dust and radiation. Conclusions {#conclu} =========== In the present work we studied some features of the minisuperspace quantization of FLRW universes with one and two fluids. For the one-fluid case (radiation), we have generalized results in the literature by showing that all Bohmian trajectories coming from reasonable general solutions of the wave equation, obtained through the assumptions of unitarity and analyticity at the origin, do not present any singularity. Hence, this quantum minisuperspace theory is free of singularities. For the two-fluid case (non-interacting radiation and dust), we first obtained Bohmian quantum universes free of singularities, reaching the classical limit for large scale factors. However, these trajectories arise from eigenfunctions of the total dust mass operator whose time evolution is not unitary. When considering the general case, we managed to obtain a wave solution presenting unitary evolution with some surprising effects. Now dust and radiation can be created, but the new feature is the possibility of creation of exotic fluids. We have shown that dust matter can be created as a quantum effect in such a way that the universe can undergo a transition from an exotic dust matter era to a conventional dust matter one. In this transition, one can see from figure 5 that radiation also becomes exotic due to quantum effects, helping the formation of the bounce. 
The fluid approach is not fundamental, but we expect that it can be quite accurate in describing quantum aspects of the Universe, in the same way the Landau description of superfluids in terms of fluid quantization was capable of showing many quantum features of this system [@landau2]. After all, creation and annihilation of particles as well as quantum states with negative energy are usual in quantum field theory. The formalism developed in the present paper seems to be a simple and calculable way to grasp these features of quantum field theory. Their physical applications may be important: exotic fluids are relevant not only in causing cosmological bounces and avoiding cosmological singularities [@peter], but also for the formation of wormholes [@thorn; @matt] and for superluminal travels [@27]. These are some developments of the present paper we want to explore in future works. ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ================ We would like to thank [*Conselho Nacional de Desenvolvimento Científico e Tecnológico*]{} (CNPq) of Brazil and [*Centro Latinoamericano de Física*]{} (CLAF) for financial support. We would also like to thank ‘Pequeno Seminario’ of CBPF’s Cosmology Group for useful discussions. [99]{} D. Bohm, Phys. Rev. [**85**]{} (1952) 166. D. Bohm, Phys. Rev. [**85**]{} (1952) 180. P. R. Holland, The Quantum Theory of Motion: An Account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics, (Cambridge University Press, Cambridge, 1993). J. C. Vink, Nucl. Phys. [**B369**]{} (1992) 707. J. A. de Barros and N. Pinto-Neto, Int. J. of Mod. Phys. [**D7**]{} (1998) 201. J. Kowalski-Glikman and J. C. Vink, Class. Quantum Grav. [**7**]{} (1990) 901. E. J. Squires, Phys. Lett. [**A162**]{}, (1992) 35. J. A. de Barros, N. Pinto-Neto and M. A. Sagioro-Leal, Phys. Lett. [**A241**]{} (1998) 229. R. Colistete Jr., J. C. Fabris and N. Pinto-Neto, Phys. Rev. [**D57**]{} (1998) 4707. R. Colistete Jr., J. C. Fabris and N. Pinto-Neto, Phys. 
Rev. [**D62**]{} (2000) 83507. N. Pinto-Neto and E. Sergio Santini, Phys. Rev. [**D59**]{} (1999) 123517. N. Pinto-Neto and E. Sergio Santini, Gen. Relativ. Gravit. 34 (2002) 505. E. Sergio Santini, PhD Thesis, CBPF, Rio de Janeiro, May 2000, gr-qc/0005092. B.F. Schutz, Phys. Rev. [**D2**]{} (1970) 2762 ; [**4**]{} (1971) 3559. C. W. Misner, in: Magic Without Magic: John Archibald Wheeler, ed. J.R. Klauder, (Freeman, San Fransisco, CA, 1972). V. G. Lapshinskii and V. A. Rubakov, Theor. Math. Phys. [**33**]{} (1977) 1076. F. J. Tipler, Phys. Rep. [**137**]{} (1986) 231. F. G. Alvarenga, J. C. Fabris, N. A. Lemos and G. A. Monerat, gr-qc/0106051. Bryce S. DeWitt, Phys. Rev.[**D**]{} [**160**]{} 5 (1967) 1113. R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals, New York , 1965. M. J. Gotay and J. Demaret, Phys. Rev. [**D28**]{} (1983) 2402. L. D. Landau, Z. Phys. [**64**]{} (1930) 629, Reprinted and translated in [*Collected papers of Landau*]{}, Paper 4, Edited by D. Ter Haar, (Pergamon Press Ltd and Gordon and Breach, Science Publishers, Oxford, 1965). M.Abramowitz, I.A. Stegun- “Handbook of Mathematical Functions”, (National Bureau of Standards, Washington D.C., 1964). N. Pinto-Neto, in: Cosmology and Gravitation II, Proceedings of the VIII Brazilian School of Cosmology and Gravitation, Edited by Mário Novello, (Editions Frontieres 1995). N.A. Lemos, J. Math. Phys. [**37**]{} (1996) 1449. E. Farhi, Int. J. Mod. Phy. A [**5**]{} (1990) 3029. J. Acacio de Barros, N. Pinto-Neto; Class. Quantum Grav. [**14**]{} (1997) 1993. D. Bohm; Phys. Rev. [**89**]{} (1952) 319. I. M. Khalatnikov; An Introduction to the Theory of Super-fluidity (W. A. Benjamin, New York, 1965). P. Peter and N. Pinto-Neto, Phys. Rev. [**D65**]{} (2001) 023513. M. S. Morris and K. S. Thorne, Am. J. Phys. [**56**]{} (1988) 395. D. Hochberg, M. Visser, Phys. Rev. Lett. [**81**]{} (1998) 746. K. D. Olum, Phys. Rev. Lett [**81**]{} (1998) 3567. 
[^1]: The choice of $\varphi$ will probably yield a different theory, with a different Hilbert space. The kinetic term is more complicated and the measure is not the trivial one. We will not study this possibility here. [^2]: As in the following we will make superpositions of eigenfunctions of the total dust matter operator, we will use from now on the letter $\upsilon$ so as not to confuse it with the beable $p_{\varphi}=\partial S/\partial\varphi$. We did not make this distinction before because they coincide for eigenfunctions of the total dust matter operator.
--- abstract: 'We explore the effect of a (non) magnetic impurity on the thermal transport of the spin-$1/2$ Heisenberg chain model. This unique system allows one to probe Kondo-type phenomena in a prototype strongly correlated system. Using numerical diagonalization techniques we study the scaling of the frequency dependent thermal conductivity with system size and host-impurity coupling strength as well as the dependence on temperature. We focus in particular on the analysis of “cutting-healing" of weak links or a magnetic impurity by the host chain via Kondo-like screening as the temperature is lowered.' author: - 'A. Metavitsiadis$^1$ and X. Zotos$^1$' - 'O. S. Barišić$^{2,3}$ and P. Prelovšek$^{3,4}$' title: | Thermal transport in a spin-1/2 Heisenberg chain\ coupled to a (non) magnetic impurity --- Introduction ============ A (non) magnetic impurity coupled to a spin-$1/2$ Heisenberg chain is a prototype system that exemplifies “Kondo"-type effects in a correlated system. Starting with the proposal of Kane-Fisher,[@kf] a weak link in a repulsive (attractive) Luttinger liquid was shown to lead to an insulating (transmitting) ground state. The cutting or healing of spin chains by a variety of (non) magnetic defects has also been established,[@affleck; @eggert; @sorensen; @laflorencie] as has the effect of a magnetic impurity on the ground state of the anisotropic easy-plane Heisenberg chain.[@furusaki] Generically, a weak link or coupling to a magnetic impurity in a Heisenberg antiferromagnetic chain leads to a ground state corresponding to two open chains. In the exceptional case of two adjacent links or a ferromagnetic (attractive in the fermionic language) easy-axis anisotropy, a healing of the defect is conjectured.[@furusaki] This screening effect is characterized by a Kondo-like temperature and screening length.[@eggert; @sorensen] These phenomena have so far mostly been studied either as they are reflected in ground-state properties, e.g. 
finite size gaps, entanglement or, somewhat indirectly, as a temperature dependent induced staggered susceptibility.[@eggert] In this work we use an exceptional physical probe for the study of these effects, namely the thermal transport of the spin-$1/2$ Heisenberg chain, which is truly singular. Although the Heisenberg model describes a strongly correlated system, the thermal conductivity is purely ballistic as the energy current commutes with the Hamiltonian,[@znp] a result that is related to the integrability of this model.[@higherspin] Thus the only scattering present is due to the defect, and its frequency/temperature/coupling-strength dependence can be isolated and clearly analyzed. In this context it was already found that a single potential impurity renders the thermal transport incoherent[@static] with the frequency dependence of the thermal conductivity well described by a Lorentzian, at least for a weak impurity. This is in sharp contrast to the case of a non-interacting system where, in spite of the impurity, the transport remains coherent, described within the Landauer formalism by a finite transmission coefficient through the impurity. Hence a single static impurity materializes the many-body character of scattering states. Besides its theoretical interest, the effect of (non)magnetic impurities on the thermal transport of quasi-one-dimensional materials such as SrCuO$_2$, Sr$_2$CuO$_3$ and the ladder compound La$_5$Ca$_9$Cu$_{24}$O$_{41}$ has recently become experimentally accessible.[@hess] In this work we use numerical diagonalization techniques - (full) exact diagonalization (ED), the Finite-Temperature Lanczos method (FTLM)[@ftlm] and the Microcanonical Lanczos method (MCLM)[@mclm] - to study the thermal transport in the Heisenberg chain model either coupled to a magnetic impurity or perturbed by single and double weak links. 
These state-of-the-art techniques are crucial in the search for the subtle low-temperature many-body effects associated with Kondo screening. Model ===== We consider the one-dimensional anisotropic spin-$1/2$ Heisenberg model in the presence of a magnetic impurity out of the chain or weak links, $$\begin{aligned} H&=&\sum_{l=0}^{L-1} J_{l,l+1} h_{l,l+1}+J' (s^x_0S^x+s^y_0S^y+\Delta' s^z_0S^z), \nonumber\\ h_{l,l+1}&=&s^x_ls^x_{l+1}+s^y_ls^y_{l+1}+\Delta s^z_ls^z_{l+1}, \label{ham}\end{aligned}$$ where $s^{\alpha}, \alpha=x,y,z$ are spin-$1/2$ operators, $J_{l,l+1}>0 $ the in-chain magnetic exchange coupling, which we take to be antiferromagnetic, $J'$ the chain-impurity coupling, $\Delta, \Delta'$ anisotropy parameters and [**S**]{} a spin-S magnetic-impurity operator ($\hbar=1$). In this work we mostly consider a spin-$1/2$ impurity. We assume periodic boundary conditions, ${\bf s}_{L}={\bf s}_{0}$, and uniform couplings $J_{l,l+1}=J$, except in the study of weak links (see below). We vary the anisotropy parameters $\Delta, \Delta'$, with $\Delta=\Delta'$, in order to look for the (healing) cutting of the chain effects mentioned above. In our study, based on standard linear response theory, the frequency $\omega$ dependence of the real part of the thermal conductivity (regular component) is given by $$\kappa(\omega)=-\frac{\beta}{\omega}\chi''(\omega),~~ \chi(\omega)= \frac{i}{L}\int_0^{+\infty} dt e^{i\omega t} \langle[j^{\epsilon}(t),j^{\epsilon}]\rangle,$$ where $\beta=1/T$, $T$ is the temperature and $k_B=1$. 
We determine the energy current from the hydrodynamic ($q\rightarrow 0$) limit of the energy continuity equation $\partial H_q/\partial t\sim q j^{\epsilon}$ with $H_q=\sum_l e^{iql} h_{l,l+1}$ as $$\begin{aligned} j^{\epsilon}&=& \sum_{l=0}^{L-1} J_{l-1,l}J_{l,l+1}\: {\bf s}_l\cdot( {\bf s}_{l+1}\times{\bf s}_{l-1})\nonumber\\ &+&\frac{JJ'}{2}{\bf s}_0\cdot( {\bf S}\times{\bf s}_{L-1}+ {\bf s}_1\times{\bf S}), \label{je}\end{aligned}$$ showing for simplicity the case $\Delta=1$ ($\Delta\ne 1$ is obtained by $s_l^z \rightarrow \Delta s_l^z$ in the cross-product terms). When $J'=0$ and all $J_{l,l+1}=J$ the energy current commutes with the Hamiltonian, the transport is purely ballistic and the thermal conductivity consists of only a $\delta(\omega)$-peak proportional to the thermal Drude weight. High temperature limit ====================== Starting from the high temperature ($\beta\rightarrow 0$) limit we can obtain a first impression on the behavior of the frequency dependence of $\kappa(\omega)$ from the 0th and 2nd moments, $\mu_n=\int d\omega \omega^n \kappa(\omega)$ which are equal to (for the isotropic point, $\Delta=1$), $$\begin{aligned} \mu_0&=&\text{const.}\times\frac{6}{T^2} \big(J^2+\frac{2}{L}\mathcal{B} ^2\big),\quad \big(\text{const.}=\pi\frac{J^2}{64}\big)\label{mu}\\ \mu_2&=& \text{const.}\times \frac{\mathcal{B}^2}{LT^2} \left(39J^2-12JJ'+3J'^2+36\mathcal{B}^2\right)\:,\nonumber\end{aligned}$$ where ${\cal B}^2=(J'^{\:2}/3)S(S+1)$ is the characteristic impurity spin dependence. One could expect the 2nd moment to reflect the width of $\kappa(\omega)$ and thus to be related to the inverse scattering time $1/\tau$. We note that for this impurity problem an assumption of a Gaussian form $\kappa(\omega)=\kappa_{dc}e^{-(\omega\tau)^2}$ would imply from the $L$ dependence of $\mu_{0,2}$ that $\kappa_{dc}=\kappa(0)$ would scale as $\sqrt{L}$ and $1/\tau \sim 1/\sqrt{L}$. 
This is, however, incorrect as is also evident from the disagreement with higher moments, $n>2$, which all behave as $\mu_n \propto 1/L$. For weak-coupling cases, such as a single impurity weakly coupled to the host chain, we should therefore rather expect a Lorentzian-like frequency dependence with a static $\kappa(0) \propto L$ and a characteristic frequency width $1/\tau \propto 1/L$. ![Frequency-dependent thermal conductivity in the high-$T$ limit scaled as $\kappa(\omega L)/L$ for ($\Delta=1$): (a) weak coupling $J'=0.5J$, (b) strong coupling $J'=2J$ (curves are normalized to unity).[]{data-label="l_scale"}](l_scale.pdf){width=".9\linewidth"} In Fig. \[l\_scale\] we show the frequency dependence of the thermal conductivity, normalized and appropriately scaled with system size. Note that in the high-$T$ ($\beta \rightarrow 0$) limit the relevant (but still nontrivial) quantity is $T^2 \kappa(\omega)$ which is implicitly extracted by the normalization. We thus present results of the normalized $\kappa(\omega L)/L$ for a weak ($J'=0.5J$) and a strong ($J'=2J$) coupling case, respectively. The data up to $L=16$ were obtained by full ED while for $L=18-22$ the MCLM was used.[@mclm] The $\delta$-peaks at the excitation frequencies are binned in windows $\delta\omega=0.01$, which also gives the frequency resolution of the spectra. For $J'=0.5J$ we find a simple Lorentzian form while in the strong coupling case the behavior is nonmonotonic with a maximum at a finite frequency $O(1/L)$. In both cases the proposed $L$ scaling is indeed realized. 
![Frequency dependence of the normalized thermal conductivity $\kappa(\omega L)/L$ in the high-$T$ limit for a variety of impurity spin values $S=1/2,1,3/2,2$ and for: (a) $J'/J=0.5,0.3,0.22,0.18$ corresponding to the weak coupling ${\cal B}^2=(J'^{\:2}/3)S(S+1)\simeq0.06$, (b) $J'/J=1.5,0.92,0.67,0.53$ corresponding to the stronger coupling ${\cal B}^2\simeq 0.57$.[]{data-label="s_scale"}](s_scale.pdf){width=".9\linewidth"} As for the scaling with impurity spin $S$ suggested by the proportionality of the 2nd moment to ${\cal B}^2=(J'^{\:2}/3)S(S+1)$, we show in Fig. \[s\_scale\] MCLM results for $\kappa(\omega L)/L$ for a series of S-values and couplings $J'$ so that the effective perturbation strength ${\cal B}^2$ retains its value. We find indeed that the scaling is well obeyed at both weak and strong coupling, which extends the applicability of our results to a range of impurity spin values and makes them directly relevant for the interpretation of experiments. ![Memory function $\tilde N''(\omega)$ for a strong coupling $J'=2J$ and for various lattice sizes $L=12-24$, using both ED and FTLM. Inset: the scaled function $\tilde N''(\omega L)$ is shown at low frequencies.[]{data-label="w_scale"}](w_scale){width=".9\linewidth"} Now let us address the generic $L\rightarrow \infty$ behavior. We can discuss it by considering the memory function $N(\omega)$ representation defined via the general complex function $\bar{\kappa}(\omega)$, $$\begin{aligned} \bar{\kappa}(\omega)&=&i\beta\frac{\chi_0} {\omega+N(\omega)},~~~ \chi_0= \chi(\omega\rightarrow 0),\end{aligned}$$ where the real $\kappa(\omega)=\bar{\kappa}'(\omega)$ and $N''(\omega) \sim 1/\tau$ plays the role of the (frequency dependent) thermal-current relaxation rate. The lowest moments $\mu_n$ can be evaluated (in principle) exactly in the high-$T$ limit[@ins] on a finite size lattice of $L$ sites. 
Involving only local quantities, at least for $0<n<L/2$, they should behave as $\mu_n = \tilde \mu_n/L$, where $\tilde \mu_n$ is size independent for $n<L/2$. It is plausible that the higher moments, $n>L/2$, also behave as $\mu_n \propto 1/L$. If $\tilde \mu_n$ for $n>L/2$ were also size independent, this would imply the scaling $N(\omega)=\frac{1}{L}\tilde N(\omega)$, with a universal (size independent) $\tilde N(\omega)$. Consequently $$\bar{\kappa}(\omega)=\frac{i\beta\chi_0 L}{(\omega L) + \tilde N (\omega)},$$ with the real part $\kappa(\omega)$ for $L\rightarrow \infty$ and $\omega \to 0$ obeying the Lorentzian scaling relation, $$\frac{\kappa(\omega L)}{L}=\frac{\beta\chi_0 \tilde N''(\omega\rightarrow 0)} {(\omega L)^2 + \tilde N''(\omega\rightarrow 0)^2},$$ provided that $N''(\omega\rightarrow 0)$ is finite. This is, however, clearly not what we observe in Fig. \[l\_scale\], where from the non-Lorentzian shape we must conclude that the memory function also scales as $\tilde N(\omega L)$ and thus, $$\frac{\kappa(\omega L)}{L}=\frac{\beta\chi_0 \tilde N''(\omega L)} {(\omega L + \tilde N' (\omega L) )^2 + \tilde N''(\omega L)^2}.$$ This is not in contradiction with the moments argument, since the higher moments, $n>L/2$, determine the low frequency behavior. So we can argue that at high frequencies $\tilde N(\omega)$ scales as $\omega$ while at low frequencies as $\omega L$. This scenario is indeed verified in Fig. \[w\_scale\] at the low/high frequency regimes, where $N(\omega)$ is extracted from the $\kappa(\omega)$ data. The FTLM method is used for lattice sizes $L\ge16$ with $M_L=500$ Lanczos steps and smoothed with an additional frequency broadening $\delta\omega=0.03$. On the other hand, we can also explain the observed general $\kappa(\omega L)/L$ scaling by analogy with a noninteracting system containing an impurity. In the latter case, the characteristic scaling $L \omega$ is a signature of “free” oscillations in the system. 
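The inversion behind extracting $N(\omega)$ from the conductivity data can be sketched numerically. This is a hedged illustration with arbitrary model parameters, not the actual FTLM pipeline: it simply checks that the memory-function representation can be inverted consistently once the full complex $\bar\kappa(\omega)$ is known.

```python
import numpy as np

beta, chi0 = 1.0, 1.0  # illustrative values

def kappa_bar(omega, N):
    """Complex conductivity in the memory-function representation."""
    return 1j * beta * chi0 / (omega + N(omega))

def extract_N(omega, kbar):
    """Invert the representation to recover N(omega) from kappa_bar."""
    return 1j * beta * chi0 / kbar - omega

# Round trip with a model relaxation function N = N' + i N''.
N_model = lambda w: 0.1 * w + 0.3j
w = np.linspace(0.01, 2.0, 50)
kb = kappa_bar(w, N_model)
print(np.allclose(extract_N(w, kb), N_model(w)))  # True
```

In practice only the real part $\kappa(\omega)$ is computed, so $\bar\kappa(\omega)$ must first be completed, e.g. via a Kramers-Kronig transform, before such an inversion.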
![Impurity coupling $J'$ dependence of scaled $\tilde N''(L \omega)/J'^2$ and the comparison with the perturbative result. Results are obtained for $\Delta=1$ and $L=16$ via ED.[]{data-label="j_scale"}](j_scale){width=".9\linewidth"}

To study the crossover from the weak to the strong coupling regime, we show in Fig. \[j\_scale\] the evolution of the relaxation-rate function $\tilde N''(\omega L)$ with impurity coupling $J'$, along with a perturbative evaluation $\tilde N_0''(\omega L)$ using the eigenstates of the Hamiltonian without the impurity.[@gw] It is interesting that the memory function shows an increasingly pronounced structure with minima at approximately the same frequencies, multiples of $2\pi/L$ [*independently of $J'$*]{}, which are not present in the perturbative calculation. In particular, the characteristic frequency of the minima decreases as the anisotropy parameter $\Delta$ decreases, and it is thus apparently related to the velocity of elementary excitations (spinons) in the system. We can conjecture that this peak structure is due to a resonant mode, created by multiple forward/backward scattering on the impurity, characteristic of the noninteracting system. It is remarkable that this happens even in the high temperature limit. This effect has already been seen in integrable systems, where a perturbation seems to affect the totality of the energy spectrum.[@ins] Now the picture is clear: $\tilde N''(\omega)$ increases as $J'^2$, scales as $\omega L$ at low frequencies, and at the same time develops a structure that dominates the behavior of $\kappa(\omega L)$, turning the Lorentzian weak-coupling shape into a nontrivial one at strong coupling.

Weak links - finite $T$
=======================

Next we examine the behavior of the thermal conductivity $\kappa(\omega)$ as we lower the temperature, starting with the influence of static weak exchange links.
Kane-Fisher[@kf] for a Luttinger liquid, and Eggert and Affleck[@affleck] (EA) for the isotropic spin-$1/2$ Heisenberg chain, proposed that a weak link leads to an open chain (cutting) in the low energy limit. In contrast, a defect of two adjacent weak links is “healed”, leading to a uniform chain at $T=0$.

![Frequency dependence of: (a) the normalized thermal conductivity $\kappa(\omega L)/L$, (b) the extracted memory function $\tilde N''(\omega L)$, for a chain of $L=22$ sites with one weak link $\tilde J=0.7J$ and various $T/J=0.3 - 2.0$. (c) Temperature dependence of $\kappa_{dc}(T)/L$.[]{data-label="wl1"}](1wl){width=".9\linewidth"}

To analyze this effect we consider a chain with only one weak link, that is, a single altered bond with coupling $J_{0,1}=\tilde J$ in an otherwise uniform chain ($J'=0$, i.e. no spin impurity). The characteristic Kane-Fisher temperature is given in the weak coupling limit by $T_{KF}\sim (J-\tilde J)^2/J$. In Fig. \[wl1\]a we show the corresponding $\kappa(\omega L)/L$ for $\tilde J=0.7J$ and a series of temperatures. The data are obtained using the FTLM method for a chain of $L=22$ spins, with $M_L=2000$ Lanczos steps, and smoothed by an additional frequency broadening $\delta \omega = 0.007$. From Fig. \[wl1\]a we notice that $\kappa(\omega L)/L$ develops a strongly nonmonotonic frequency dependence upon lowering the temperature, with a maximum at a finite frequency that suggests a flow to the strong coupling limit, similar to the one discussed before for increasing $J'$. In Fig. \[wl1\]b, the extracted $\tilde N''(\omega L)$ for various $T$ is presented, with the development of a characteristic structure that explains the nonmonotonic behavior of $\kappa(\omega)$. The increasing value of $\tilde N''(0) \sim 1/\tau$ with decreasing temperature indeed corresponds to the effect of “cutting” of the chain.
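The weak-coupling Kane-Fisher scale above is a pure order-of-magnitude relation; a minimal sketch (our own, with the $O(1)$ prefactor omitted):

```python
def t_kane_fisher(J, J_tilde):
    """Weak-coupling Kane-Fisher crossover scale, T_KF ~ (J - J~)^2 / J.
    Order-of-magnitude estimate only; the O(1) prefactor is omitted."""
    return (J - J_tilde) ** 2 / J

# For the weak link J~ = 0.7 J used in the text (units of J = 1), the
# crossover scale lies below the lowest temperature T/J = 0.3 shown.
t_kf = t_kane_fisher(1.0, 0.7)
assert t_kf < 0.3
```

This is consistent with the cutting behavior only gradually setting in over the explored temperature window.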
The frequency dependence of $\kappa(\omega L)/L$ is also nonmonotonic in the case of two adjacent equal weaker links, $J_{L-1,0}=J_{0,1}=\tilde J= 0.7 J$, as shown in Fig. \[wl2\]a. However, in this case we observe in Fig. \[wl2\]b the opposite behavior of $\tilde N''(\omega)$: namely, “healing” of the double defect, deduced from the decrease of $\tilde N''(0)$ as the temperature is lowered, in agreement with the theoretical prediction.[@affleck] We should note that both cutting and healing are low-frequency effects, occurring at frequencies $\omega L \sim O(1)$.

![Frequency dependence of: (a) the normalized thermal conductivity $\kappa(\omega L)/L$, (b) the extracted memory function $\tilde N''(\omega L)$ for a chain of $L=22$ sites with two adjacent weak links $\tilde J=0.7 J$ and various $T/J=0.3 - 2.0$. (c) Temperature dependence of $\kappa_{dc}(T)/L$.[]{data-label="wl2"}](2wl){width=".9\linewidth"}

![Temperature dependence of $\tilde N''(0)$ for $\tilde{J}=0.5,0.7J$ showing cutting/healing behavior for one and two weak links.[]{data-label="wl12n0"}](wl12n0){width=".9\linewidth"}

To summarize the observed behavior we show in Fig. \[wl12n0\] the $T$-dependence of the relaxation rate $\tilde N''(0)$ for two different couplings $\tilde{J}/J=0.5,0.7$, for one and two weak links, respectively. The presented results confirm the existence of the cutting behavior at low $T$ for a single link, as well as the healing upon lowering $T$ for two adjacent and equal links. As expected, both effects appear only at low $T/J < 1$, while the dependence of the characteristic $T_{KF}$ on $\tilde J/J$ is less pronounced.

Spin coupled to the chain - finite $T$
======================================

Finally we can study the effect of lowering the temperature on the scattering by a magnetic impurity. According to EA it leads to cutting the chain at $T=0$ irrespective of the sign of $J'$.
This proposal was extended by Furusaki and Hikihara[@furusaki] to the anisotropic spin chain $-1 < \Delta \le 1$, where they furthermore proposed that for $-1 < \Delta < 0$ (the attractive case in the fermionic language) there is “healing” of the impurity, in analogy to the case of two adjacent weak links. In the Kondo problem the characteristic temperature in the weak coupling limit is given by $T_K\sim v \exp(-c/J')$, with $c$ a constant, $v$ the velocity of spin excitations, and $J'$ the Kondo coupling. In the case of a spin-$1/2$ chain it was shown[@frahm] that the exponential dependence is replaced by $T_K\sim \exp (-\pi \sqrt {1/J' -(S'+1/2)^2})$, and a next-nearest neighbor coupling $J_2\simeq 0.2412$ is needed to recover the traditional Kondo case. We should note that in the model studied the impurity spin is attached only at the end of the chain, in contrast to our model, but plausibly the behavior is qualitatively similar. To get a qualitative idea of the orders of magnitude for our problem:[@laflorencie] for $J'=0.3J$, $T_K\sim 0.014$ and $\xi_K\sim 40$; for $J'=0.6J$, $T_K\sim 0.388$ and $\xi_K\sim 4$; and for $J'=J$, $\xi_K=0.65$. As in our study we are limited to $T \ge 0.4$ in order to see a “Kondo” crossover, we must consider a coupling $J' \ge 0.5J$, and thus we are in the relatively strong coupling regime, with a typical screening length of the order $\xi_K \sim 1$.

![Frequency dependent normalized thermal conductivity $\kappa(\omega L)/L$ for strong coupling $J'=2J$, $\Delta=\pm0.5$ and three $T/J=50,2,0.4$. Inset: $T$-dependence of $N''(0)$ for $\Delta=\pm0.5$ and $\Delta=+1$.[]{data-label="cutheal"}](cutheal.pdf){width=".9\linewidth"}

In Fig. \[cutheal\] we show $\kappa(\omega L)/L$ for a chain of $L=22$ sites at strong coupling $J'=2J$ and two representative cases $\Delta=\pm 0.5$ as we lower the temperature.
Indeed we find at low frequencies the gradual development of the corresponding “cutting/healing” behavior, which we exemplify in the inset by $\tilde N''(0)$ as a function of temperature, both for $\Delta=\pm 0.5$ and for the most typical isotropic case $\Delta=+1.0$. It is remarkable that the tendency to increase or decrease the scattering time is already evident at high $T$, presumably due to the local character of the effect because of the strong $J'$ coupling. We note in passing that the $\omega L$ scaling is found not just at high $T$ but at all $T$ (not shown). Next, in Fig. \[memjp\] we show $\tilde N''(0)$ as a function of $T$ for a series of increasing $J'$ couplings. The “cutting” effect for the repulsive case $\Delta=+0.5$ is present for all values of $J'$, with no easily distinguishable “Kondo” temperature. We are always dealing with screening lengths well below the system size, where presumably no subtle many-body effects come into play. On the other hand, in the attractive case $\Delta=-0.5$, we do not observe “healing” for the weakest coupling $J'=+0.5$, where the screening length is expected to be several lattice sites.

![$\tilde N''(0)$ vs. $T$ for the repulsive (attractive) case $\Delta=+0.5 (-0.5)$ for different $J'/J=0.5,1.0,1.5$.[]{data-label="memjp"}](memjp){width=".9\linewidth"}

Finally, in Fig. \[kdc\] we summarize the $T$-dependence of $\kappa_{dc}/L$ for a variety of coupling strengths $J'/J$ and $\Delta=\pm 0.5$. The experimentally most interesting case $\Delta=+1$, corresponding to isotropic antiferromagnetic as well as ferromagnetic impurity coupling, is shown in Fig. \[kdciso\]. For $\Delta>0$ we observe in Fig. \[kdc\]a and Fig. \[kdciso\] a continuous decrease of $\kappa_{dc}$ with increasing $J'$. This can be explained by the formation of a local singlet, at least for $T<J'$, which blocks the current through the impurity region. On the other hand, the $\Delta<0$ case in Fig.
\[kdc\]b reveals a saturation of $\kappa_{dc}$ with $J'$, at least for intermediately large $J'$. However, for severe perturbations ($J'\gg J$) the impurity cannot be healed by the chain, leading inevitably to a further decrease of $\kappa_{dc}$.

![Temperature dependence of $\kappa_{dc}/L$ for a variety of impurity couplings $J'$ and for: (a) repulsive $\Delta=+0.5$, (b) attractive $\Delta=-0.5$.[]{data-label="kdc"}](kdpm){width=".9\linewidth"}

![Temperature dependence of $\kappa_{dc}/L$ for a variety of impurity couplings $J'$, $\Delta=1$ and for: anti-ferromagnetic couplings (top), ferromagnetic couplings (bottom).[]{data-label="kdciso"}](kfaf-iso){width=".9\linewidth"}

Conclusions
===========

In conclusion, by analysing the unique behavior of the thermal conductivity of the spin-1/2 Heisenberg model, several effects of local static and dynamical impurities have been established:\ (*a*) A single local impurity, either static, such as the local field[@static] or a weak link, or dynamical, such as a spin coupled to the chain, turns the dissipationless thermal conductivity into an incoherent one. Numerical results for the dynamical conductivity, best studied at high $T$, reveal that a single impurity in a system of $L$ sites shows a universal scaling form $\kappa(\omega L)/L$, at least in the low-$\omega$ regime. For a weak perturbation, such as a spin weakly coupled outside the chain, the scaling form is of the simple Lorentzian type. On the contrary, a large local perturbation can lead to a nontrivial form with the maximum response at $\omega >0$.\ (*b*) Furthermore, universal oscillations in the dynamical relaxation rate $N''(\omega)$ become visible already in the weak coupling regime, with the period $\omega \propto 1/L$ being a remnant of the impurity multiple-scattering phenomena in a noninteracting system.\ (*c*) Our results confirm the existence of the Kondo-type effects of impurities on lowering the temperature.
In the case of weak links and for the isotropic Heisenberg model, cutting and healing effects are observed at lower $T$ for a single weak link and for a pair of identical weaker links, respectively, in accordance with theoretical predictions.[@kf; @eggert] In the case of a spin coupled to the chain, the cutting/healing effects at low $T$ depend on the sign of the anisotropy $\Delta$. For ferromagnetic anisotropy ($\Delta<0$), the chain screens the impurity and the system enters the weak coupling regime as the temperature is decreased. The opposite behavior is obtained for antiferromagnetic anisotropy ($\Delta>0$), where the system flows to the strong coupling limit at lower temperatures.\ (*d*) The obtained data can be used to model the behavior observed in experiments on materials with spin chains doped with magnetic and nonmagnetic impurities.[@hess]

acknowledgments
===============

This work was supported by the FP6-032980-2 NOVMAG project and by the Slovenian Agency grant No. P1-0044.

Open chain
==========

![ Frequency dependence of the thermal conductivity $\kappa(\omega L)/L$ in the high temperature limit for various values of the coupling $J'/J=0.8-4.0$ and $\Delta=1.0$.[]{data-label="cutchain"}](kcut){width=".9\linewidth"}

Throughout the article the term “cutting” is used to describe the behavior of the system in the strong coupling limit. In order to justify this term, we present in Fig. \[cutchain\] results for the thermal conductivity of a chain of $L=16$ sites, obtained by ED in the high temperature limit for various couplings $J'$, together with the thermal conductivity of a uniform chain with open boundary conditions. Fig. \[cutchain\] illustrates the flow of the system from a Drude-like behavior (weak coupling) to a chain with open boundary conditions (strong coupling), which was already proposed for a single non-magnetic impurity (a local field) from the level statistics analysis.[@static] We choose to present the jagged results, i.e.
without implementing any smoothing procedure, in order not to wash out the development of the narrow peaks corresponding to the excitations of the open chain. For the strong coupling cases there is some rather significant structure at frequencies $\sim J'$, which corresponds to local excitations of the impurity. However, these excitations are irrelevant for the effect of the impurity on the chain which is studied here.

C.L. Kane and M.P.A. Fisher, Phys. Rev. Lett. [**68**]{}, 1220 (1992).

S. Eggert and I. Affleck, Phys. Rev. B [**46**]{}, 10866 (1992).

S. Rommer and S. Eggert, Phys. Rev. B [**62**]{}, 4370 (2000).

E.S. Sorensen, M. Chang, N. Laflorencie, and I. Affleck, J. Stat. Mech.: Th. and Exp., L01001 (2007).

N. Laflorencie, E.S. Sorensen, and I. Affleck, J. Stat. Mech.: Th. and Exp., P02007 (2008).

A. Furusaki and T. Hikihara, Phys. Rev. B [**58**]{}, 5529 (1998).

X. Zotos, F. Naef, and P. Prelovšek, Phys. Rev. B [**55**]{}, 11029 (1997).

It should be noted that a tower of integrable Hamiltonians exists for every value of spin, where the energy current is a conserved quantity, but these Hamiltonians have no obvious physical realizations.

O.S. Barišić, P. Prelovšek, A. Metavitsiadis, and X. Zotos, Phys. Rev. B [**80**]{}, 125118 (2009).

C. Hess, Eur. Phys. J. Special Topics [**151**]{}, 73 (2007); private communication.

J. Jaklič and P. Prelovšek, Adv. Phys. [**49**]{}, 1 (2000).

M. W. Long, P. Prelovšek, S. El Shawish, J. Karadamoglou, and X. Zotos, Phys. Rev. B [**68**]{}, 235106 (2003).

P. Prelovšek, S. El Shawish, X. Zotos, and M. Long, Phys. Rev. B [**70**]{}, 205129 (2004).

W. Götze and P. Wölfle, Phys. Rev. B [**6**]{}, 1226 (1972).

H. Frahm and A.A. Zvyagin, J. Phys. C [**9**]{}, 9939 (1997).
[**APPENDIX**]{}

Outer Warp + Flat Extension
===========================

Adding $\alpha$ Dependence to Outer Warp
----------------------------------------

TB96 describe how to calculate the SED for an outer warp seen at various inclination angles. Their general method for calculating the SED also includes a dependence on azimuthal viewing angle, although their detailed treatment of various occultation effects (the star blocking the far side of the disk, the disk blocking the star, etc.) does not include this dependence. Since these disks are non-axisymmetric, the SED can depend substantially on the azimuthal viewing angle, $\alpha$, of the observer. In this section we describe how we have added this dependence into the equations of TB96. We do not include a detailed description of the derivation of the equations, but merely state most of them and some of the geometric logic behind their modification. The first modification is to equation (6) of TB96, which describes the calculation of the flux from the disk based on the temperature of the disk. This equation assumes that the disk is viewed along $\alpha=0$, so that the upper disk from 0 to $\pi/2$ looks the same as the upper disk from $3\pi/2$ to $2\pi$. When $\alpha\neq0$ this symmetry is broken and the individual components of the disk must be considered. While the temperature of a concave or convex piece of the disk does not change with viewing angle, the orientation of each piece does, and each must be treated separately.
For the outer warp this becomes: $$\begin{aligned} \textstyle F_{\nu,{\bf u}} = \int^{R_{disk}}_{r_{min}}\left[\int_0^{\pi/2}B(T_{concave})f_{up}+\int_0^{\pi/2}B(T_{convex})f_{low}+\int_{\pi/2}^{\pi}B(T_{concave})f_{low}+\int_{\pi/2}^{\pi}B(T_{convex})f_{up}\right]\nonumber\\ \textstyle +\int^{R_{disk}}_{r_{min}}\left[\int_{\pi}^{3\pi/2}B(T_{convex})f_{up}+\int_{\pi}^{3\pi/2}B(T_{concave})f_{low}+\int_{3\pi/2}^{2\pi}B(T_{concave})f_{up}+\int_{3\pi/2}^{2\pi}B(T_{convex})f_{low}\right]\end{aligned}$$ The next change comes in the appendix to the functional form of the parameter $C$. The function $C$ is used to define the points that are along the line of sight with the star. If the line intersects the star then we need to worry about whether the disk blocks the star or the star blocks the disk. If this line does not intersect the star then the disk cannot block the star and the star cannot block the disk. The definition of $C$ changes from $C=r\sin\theta$ to $C=r\sin (\theta-\alpha )$. Also the radial part of the deformation used in equation (A6) of TB96 is taken to be $H(r)=gR_{disk}\left(\frac{r}{R_{disk}}\right)^n\cos\alpha$. For $\alpha=\pi/2$ the disk along the line of sight is flat and the radial part of the height will remain at zero, while along $\alpha=\pi$ the disk curves below the midplane as expected. The final change comes when calculating the stellar flux. In equation (A12) of TB96 we take $h(r,\theta)$ to be $h(r,\alpha)$ since this represents the part of the disk that will block the star. As the azimuthal angle increases the disk blocks less of the star because the height of the disk is smaller. We make more changes to how the stellar flux is calculated, which are described below, but when it comes to the occultation of the star by the disk this is the only change. In the end we are able to run our models from $0<i<\pi$ and $-\pi/2<\alpha<\pi/2$. 
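These geometric modifications can be collected into a short numerical sketch (our own illustration; the parameter values are arbitrary and not from the models of the text):

```python
import math

def line_of_sight_hits_star(r, theta, alpha, R_star):
    """TB96 occultation pre-test generalized to azimuthal angle alpha:
    the sight line through P(r, theta) can intersect the star only when
    C = r sin(theta - alpha) is smaller than the stellar radius."""
    return abs(r * math.sin(theta - alpha)) < R_star

def warp_height(r, alpha, g, R_disk, n):
    """Radial part of the outer-warp height, modulated by cos(alpha)."""
    return g * R_disk * (r / R_disk) ** n * math.cos(alpha)

# Along alpha = pi/2 the disk on the line of sight is flat...
assert abs(warp_height(50.0, math.pi / 2, 0.1, 100.0, 2)) < 1e-12
# ...while along alpha = pi it curves below the midplane.
assert warp_height(50.0, math.pi, 0.1, 100.0, 2) < 0.0
```

Only points passing the pre-test require the full disk-blocks-star/star-blocks-disk bookkeeping.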
Due to the symmetry of the disk, this range covers all possible viewing angles, allowing us to accurately model the precession of the warp, as well as to observe the warp from an arbitrary angle.

Flat extension of Outer Warp
----------------------------

We have taken the outer warp model and added a flat extension beyond it in order to treat disks where the warp is not at the outer edge of the disk. The warp will shadow the outer disk, changing its temperature structure. For simplicity we assume that the outer disk is a flat blackbody. The temperature can be derived using the same formula as with the warped disk (Equation 6), but with different definitions for the integration boundaries. Half of the flat extension will be shadowed while half will not be. For the side of the flat extension that is beyond the part of the warp that goes below the midplane there is no shadowing of the disk and the integration ranges over: $$\begin{aligned} \varepsilon_{min}=0\nonumber\\ \varepsilon_{max}=\pi/2\nonumber\\ \\ \delta_{min}=0\nonumber\\ \delta_{max}=\arcsin(R_*/d)\nonumber\\ d^2=r^2+h^2\nonumber\\\end{aligned}$$ For the part of the flat extension that lies behind the part of the warp that stretches above the midplane, the definition of $\delta_{min}$ changes. In this case $\delta_{min}$ is set by the angle between the warp and the point $P(r,\theta)$ in the disk. $$\delta_{min}=\arctan\left(\frac{gR_{warp}\cos(\theta)}{(r-R_{warp})}\right)$$ This takes into account shadowing of the flat extension of the disk due to the warp. Once the temperature structure has been determined, the flux can be derived using the equation for the flux from a disk (Equation 5).

Inner Warp
==========

Temperature Profile of the Inner Warp
-------------------------------------

In this section we describe the method for calculating the SED of a disk with an inner warp. In the text we laid out the basic equations from TB96 that are needed to calculate the temperature structure.
As mentioned in the text the essential difference between the inner warp and outer warp comes in calculating $\delta_{max},\delta_{min},\varepsilon_{min},\varepsilon_{max}$ for each point $P(r,\theta)$, which are used in equation 6. From here the disk is split into two sides that are treated separately. The convex side is the side of the disk that faces the star on the inner edge and receives the most direct heating from the star. For this side, the integration ranges over: $$\begin{aligned} \varepsilon_{min}=0\nonumber\\ \varepsilon_{max}=\pi/2\nonumber\\ \\ \delta_{min}=-\arctan(\partial h/\partial r)\nonumber\\ \delta_{max}=\arcsin(R_*/d)\nonumber\\ d^2=r^2+h^2\nonumber\\\end{aligned}$$ Figure \[del\_vex\] demonstrates the limits on $\delta$ for the convex side of the disk. The definition of $\delta_{min}$, demonstrated in figure \[delblock\_vex\], comes from the inner disk blocking light from the top of the star. The inner disk will limit the field of view of the point $P(r,\theta)$ as it looks toward the star. Traveling out from the star, less of the star will be seen by the disk because the shallower slope of the disk will cause more of the star to be blocked. In the limit of a flat disk far from the star $\delta_{min}$ approaches zero and the disk can only see half of the star. If the point $P(r,\theta)$ on the disk is close enough to the star then the disk can see all of the star and $\delta_{min}=-\delta_{max}$. The limits on $\varepsilon$ assume that the scale height of the disk does not change across the face of the disk, which will be an accurate approximation far from the star. The concave side of the disk is the side that does not directly face the star. Since it does not face the star much of the inner parts of the disk will be blocked by the warp and will only be heated by viscous dissipation. 
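The convex-side integration limits above can be gathered into a small routine (an illustrative sketch in arbitrary units; the local slope $\partial h/\partial r$ is passed in by hand rather than computed from a warp model):

```python
import math

def delta_limits_convex(r, h, dh_dr, R_star):
    """Integration limits on delta for a point on the convex (star-facing)
    side of the inner warp: the inner disk hides the top of the star, so
    delta_min = -arctan(dh/dr), while delta_max = arcsin(R*/d) with
    d^2 = r^2 + h^2.  Close to the star the whole stellar disk is seen
    and delta_min is clamped to -delta_max."""
    d = math.hypot(r, h)
    delta_max = math.asin(R_star / d)
    delta_min = -math.atan(dh_dr)
    if -delta_min > delta_max:
        delta_min = -delta_max
    return delta_min, delta_max

# Far out on a nearly flat disk delta_min -> 0: only half the star is seen.
lo, hi = delta_limits_convex(r=100.0, h=0.01, dh_dr=1e-4, R_star=1.0)
assert -hi < lo <= 0.0
```

The clamp reproduces the limiting case mentioned in the text, where a point close enough to the star sees the entire stellar disk.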
The condition for the point $P(r,\theta)$ on the concave part of the disk to see any of the star is: $$\frac{h(r_{min},\theta)}{r-r_{min}}<\frac{R_*}{r}$$ If the point $P(r,\theta)$ meets this condition then this point can see some of the star and $\delta_{min}$ becomes (fig \[delmin\_cave\]) $$\delta_{min}=\arctan(\frac{h(r_{min},\theta)}{r-r_{min}})$$ The rest of the limits stay the same. In the limit of a perfectly flat disk $\delta_{min}$ approaches 0 and the point $P(r,\theta)$ is irradiated by only half of the star. For a large warp the only heating by this side of the disk will be from viscous dissipation because the warp will block the star over most of the disk.

Calculating the SED for an Inner Warp
-------------------------------------

In this section we describe the procedure for converting the temperature structure into a spectral energy distribution (SED). From TB96, the flux emitted by the disk is $$F_{\nu,{\bf u}}= \int\int_{disk surface} B_{\nu}(T(r,\theta))dS{\bf n_d}\cdot{\bf u}$$ In this case [**u**]{} is the vector along the line of sight to the observer from the center of the star and ${\bf n_d}$ is the normal to the disk at the point $P(r,\theta)$. The vector ${\bf u}$ can be defined in terms of the azimuthal and polar angles to the line of sight, $\alpha$ and $i$ respectively. The area of the disk along the line of sight is given by $$\textstyle dS {\bf n_d}\cdot{\bf u} = r\left[\left(\frac1{r}\frac{\partial h}{\partial \theta}\sin\theta-\frac{\partial h}{\partial r}\cos\theta\right)\cos\alpha\sin i-\left(\frac1{r}\frac{\partial h}{\partial\theta}\cos\theta+\frac{\partial h}{\partial r}\sin\theta\right)\sin\alpha\sin i+\cos i\right]drd\theta$$ The angle $\alpha$ ranges from $-\pi$/2 to $\pi$/2 while the inclination $i$ ranges from 0 to $\pi$. This covers all possible viewing angles of the disk, since the symmetry of the disk makes some viewing angles redundant.
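A sketch of the projected area element (our own transcription of the expression above; the partial derivatives of $h$ are supplied numerically rather than from a specific warp):

```python
import math

def area_projection(r, theta, dh_dr, dh_dtheta, alpha, i):
    """Projected area element dS n_d . u per unit dr dtheta for a warped
    surface h(r, theta), following the expression in the text."""
    t1 = (dh_dtheta / r * math.sin(theta)
          - dh_dr * math.cos(theta)) * math.cos(alpha) * math.sin(i)
    t2 = (dh_dtheta / r * math.cos(theta)
          + dh_dr * math.sin(theta)) * math.sin(alpha) * math.sin(i)
    return r * (t1 - t2 + math.cos(i))

# Sanity check: a flat element (dh/dr = dh/dtheta = 0) seen face-on (i = 0)
# projects its full area r dr dtheta, independent of alpha.
assert area_projection(2.0, 0.3, 0.0, 0.0, 0.1, 0.0) == 2.0
```

A negative return value signals that the line of sight faces the lower surface of that element, which is exactly the sign test used for the visibility function $p(r,\theta)$ below.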
Splitting up the equation for the flux from the disk helps to make the problem simpler to understand and more tractable. It also fits with the fact that we do not need to calculate the temperature structure of the entire disk. The symmetry of the disk allows us to calculate the temperature of the convex and concave sides from $0<\theta<\pi/2$ and then apply this temperature profile to the rest of the disk. The integral is split into eight parts: $$\begin{aligned} \textstyle F_{\nu,{\bf u}} = \int^{R_{disk}}_{r_{min}}\left[\int_0^{\pi/2}B(T_{concave})f_{up}+\int_0^{\pi/2}B(T_{convex})f_{low}+\int_{\pi/2}^{\pi}B(T_{concave})f_{low}+\int_{\pi/2}^{\pi}B(T_{convex})f_{up}\right]\nonumber\\ \textstyle +\int^{R_{disk}}_{r_{min}}\left[\int_{\pi}^{3\pi/2}B(T_{convex})f_{up}+\int_{\pi}^{3\pi/2}B(T_{concave})f_{low}+\int_{3\pi/2}^{2\pi}B(T_{concave})f_{up}+\int_{3\pi/2}^{2\pi}B(T_{convex})f_{low}\right]\label{eqn_flux}\end{aligned}$$ where $f(r,\theta)=dS {\bf n_d}\cdot{\bf u}p(r,\theta)$, and $p(r,\theta)$ is a binary function used to determine if the point $P(r,\theta)$ is visible to the observer. The integration is done over both the upper and lower sides of the disk in order to account for inclination angles greater than $90^{\circ}$, where the lower half of the disk is visible. If the inclination is 0 then the observer is face on to the upper half of the disk, which has both a concave and a convex side. If the inclination is $180^{\circ}$ then the observer is face on to the lower half of the disk, which also includes both a concave and a convex side. Treating each quarter of the disk separately allows us to use the symmetry of the temperature profile but still treat general azimuthal viewing angles.

### Calculating the value of p

The above description sets out the basics for how to calculate the temperature structure and SED for a warped inner disk. Most of this is derived from TB96, which treated these situations generally enough to apply to any type of warp.
The main differences between this inner warp and the outer warp from TB96 come from the calculation of $p(r,\theta)$. This section describes the conditions used to calculate $p(r,\theta)$ for the particular warp used here. The first condition is that the observer is facing the point $P(r,\theta)$. For inclinations less than $90^{\circ}$ the observer will see mostly the upper half of the disk, while at inclinations greater than $90^{\circ}$ the observer will see mostly the lower half of the disk. At select inclinations close to edge-on, however, an observer at $i<90^{\circ}$ can also see some of the lower disk. For example, if figure \[delblock\_vex\] had an observer in the upper left viewing the disk close to edge on, they would be able to see some of the lower convex side that is illustrated in the figure. In general, whether the observer is facing the point $P(r,\theta)$ can be determined from the dot product ${\bf n_d}\cdot{\bf u}$. The normal is defined as extending on the upper side of the disk, and the dot product will be greater than zero if ${\bf n_d}$ and ${\bf u}$ lie along the same direction. Therefore, the upper part of the disk can be seen if the dot product is greater than 0, while the lower part of the disk can be seen when the dot product is negative. Now we determine if the point $P(r,\theta)$ is blocked by either the star or the disk. First we consider whether the star blocks the far side of the disk. This applies for inclinations less than $90^{\circ}$, where the star may block part of the upper convex side, as is demonstrated in figure \[inclim\].
The limit at which this condition applies is given by $$\tan i_{lim}=\frac{r_{min}-R_*\cos i_{lim}}{gr_{min}\cos\alpha+R_*\sin i_{lim}}$$

$$\begin{array}{cccc} g & \multicolumn{3}{c}{i_{lim}\ \mathrm{(degrees)}}\\ \hline 0.005 & 59.7 & 78.2 & 83.9\\ 0.01 & 59.4 & 77.9 & 83.7\\ 0.03 & 59.3 & 76.7 & 82.5\\ 0.05 & 57.2 & 75.6 & 81.4\\ 0.07 & 56.1 & 74.5 & 80.3\\ 0.1 & 54.5 & 72.8 & 78.6\\ 0.3 & 44.7 & 62.3 & 67.8\\ 0.5 & 36.7 & 53.1 & 58.3 \end{array}$$

If the inclination is greater than $i_{lim}$ then part of the disk is blocked by the star. In this case we can use the discussion of TB96 to determine if the point $P(r,\theta)$ is blocked by the star (figure \[disk\_block\] and table \[ilim\]). Defining $C=r\sin(\theta-\alpha)$, the only time the star can block the disk is when $C<R_*$; otherwise $p(r,\theta)=1$. If $C<R_*$ then $p(r,\theta)=1$ only if $r\cos\theta<r_D\cos\theta_D$, where $\theta_D=\pi-\arcsin(C/r_D)$ and $r_D$ is the positive root of the following equation: $$\left(-H(r_D)\sqrt{1-\left(\frac{C}{r_D}\right)^2}-z_M\right)\sin i+\left(r_D\sqrt{1-\left(\frac{C}{r_D}\right)^2}+x_M\right)\cos i=0$$ where $H(r)=gr_{min}\left(\frac r{r_{min}}\right)^{-n}\cos\alpha$ and M is the point on the northern hemisphere of the star in the plane (P,[**u**]{},z) such that [**u**]{} is tangent to the star at this point: $$\begin{aligned} x_M=-\sqrt{R_*^2-C^2}\cos i\\ y_M=C\\ z_M=\sqrt{R_*^2-C^2}\sin i\end{aligned}$$ Now consider inclinations greater than $90^{\circ}$. In this case the disk can still be blocked by the star if the warp is small enough and the inclination angle is close enough to 90 degrees (figure \[disk\_block2\]). This affects both the upper and lower part of the disk, but only from $\theta\in[\alpha+\pi/2,\alpha+3\pi/2]$. The condition for the point P to be hidden by the star is: $$|h(r,\theta)|<\frac{R_*\cos(i-\pi/2)\tan(\pi-i)-r}{\tan(\pi-i)}$$ As above, this only applies when $C=r\sin(\theta-\alpha)<R_*$.
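The transcendental equation for $i_{lim}$ can be solved by simple bisection; a sketch (our own, with illustrative values of $r_{min}$, $R_*$, and $\alpha$ rather than the parameters behind table \[ilim\]):

```python
import math

def i_lim(g, r_min, R_star, alpha=0.0):
    """Solve tan(i) = (r_min - R* cos i) / (g r_min cos(alpha) + R* sin i)
    for the limiting inclination by bisection on (0, pi/2).  The left-hand
    side minus the right-hand side is monotonically increasing in i there,
    so a single sign change brackets the root."""
    def f(i):
        lhs = math.tan(i) * (g * r_min * math.cos(alpha) + R_star * math.sin(i))
        return lhs - (r_min - R_star * math.cos(i))
    lo, hi = 1e-6, math.pi / 2 - 1e-6  # f(lo) < 0 < f(hi) for R_star < r_min
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Larger warp amplitudes g shadow part of the disk at smaller inclinations,
# reproducing the trend (though not the exact entries) of table [ilim]:
assert i_lim(0.5, r_min=1.0, R_star=0.1) < i_lim(0.005, r_min=1.0, R_star=0.1)
```

The same root is needed only once per $(g,\alpha)$ pair, so the cost is negligible compared with the flux integration.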
Another possibility to consider when the inclination is greater than $90^{\circ}$ is that the warp on the lower concave side is steep enough that it blocks part of this side of the disk (figure \[disk\_block3\]). This condition only applies to parts of the lower concave side, from $\theta\in[\alpha+\pi/2,\alpha+3\pi/2]$. The condition is that $\gamma_1<\gamma_2$ where $\gamma_1=i-\pi/2$ and $$\gamma_2=\arcsin\left(\frac{h_m-h(r,\theta)}{PM}\right)$$ In this case M is the highest point on the disk on the line of sight to point P. For $r\sin(\pi-\theta)>r_{min}$ the disk does block itself, but for $r\sin(\pi-\theta)<r_{min}$ we have $r_m=r_{min}$ and $\theta_m=\arcsin(r\sin(\theta-\alpha)/r_m)$. The quantity PM is the distance between the point P and the point M ($\sqrt{(h_m-h)^2+(r_m-r)^2}$). Once all of the conditions have been considered, the flux from the disk can be calculated using equation \[eqn\_flux\]. These conditions would apply to any type of warp whose maximum height above the midplane occurs at the inner edge of the disk, as opposed to at the outer edge, regardless of the exact functional form of the warp (i.e. power law vs. exponential).

Spiral Wave
===========

Temperature Profile of Spiral Wave
----------------------------------

The third type of disk that we attempt to model contains a spiral wave. As with the warped disks, we follow the derivation of TB96 to derive the temperature structure and SED for this disk. The derivation for the temperature structure is very similar to that of the inner warp, only with slightly different definitions of the boundaries. For the part of the disk inside the wave, the disk is not blocked by the wave but the amount of the star seen can change. For points far from the wave, the disk is like a flat disk and $\delta_{min}=0$. For points on the wave, as it rises above the midplane, more of the lower half of the star will become visible.
How much of the lower half of the star is visible depends on the location and height of the point on the wave. In this case the lower limit on $\delta$ is: $$\delta_{min}=-\arctan(h/(r-r_{min}))\\$$ This limit grows in magnitude until the point on the wave can see the entire star, and then $\delta_{min}=-\delta_{max}$. The other limits stay the same as in the previous models: $$\begin{aligned} \varepsilon_{min}=0\nonumber\\ \varepsilon_{max}=\pi/2\nonumber\\ \\ \delta_{max}=\arcsin(R_*/d)\nonumber\\ d^2=r^2+h^2\nonumber\\\end{aligned}$$ For the parts of the disk behind the wave, some of the star may be obscured. In this case: $$\begin{aligned} \delta_{min}=\arctan(h_{sw}/(r-r_{sw}))\\ h_{sw}=gr_{min-sw}(1-m\theta/2\pi)\nonumber\\ r_{sw}=r_{min-sw}(1+n\theta)\nonumber\\\end{aligned}$$ This is similar to the concave side of the inner warp, where the warp can obscure part of the star. The only difference is that the maximum height of the disk does not occur at the inner edge of the disk, but instead at the location of the spiral wave. When $\delta_{min}>\delta_{max}$ the entire star is blocked and that point on the disk is only heated by viscous dissipation. With these definitions and equation (6) we can calculate the temperature of the disk.

Calculating SED of Spiral Wave
------------------------------

The flux from the disk is given by: $$F_{\nu,{\bf u}}=\int^{R_{disk}}_{r_{min}}\int^{2\pi}_{0}B(T_{disk})f_{up}$$ There is no symmetry in the disk that allows us to split it into different parts, as with the concave and convex pieces of the inner warp. We also only consider inclinations less than $90^{\circ}$, where we only see the upper disk, since the lower disk will look the same as the upper disk. The one occultation effect we include is the blocking of the disk by the wave. For $\alpha-\pi/2<\theta<\alpha+\pi/2$, the near side of the disk, the wave can block the part of the disk that is at smaller radius than the wave.
For $\alpha+\pi/2<\theta<\alpha+3\pi/2$, the side of the disk on the other side of the star from the observer, the outer disk can be blocked by the wave. This effect can become important for modest inclinations, given the typical wave heights we consider here. To determine if a point on the disk is blocked by the wave we first need to determine where the line of sight intersects the wave. This is illustrated in figure \[wave\_block1\] and is given by: $$x = r\sin\theta = r_{min-sw}(1+n\theta_M)\sin(\theta_M)$$ Here $\theta_M$ is the azimuthal coordinate of the point where the wave intersects the line of sight. We assume that the point M lies at the peak of the spiral wave. This is only an approximation, although the narrowness of the wave makes it an accurate one. The angle between the line connecting the points P and M and the midplane is $\gamma$ (Fig. \[wave\_block\]). When $\gamma>\pi/2-i$ then point $P(r,\theta)$ is blocked. $$\tan\gamma=\left(\frac{h_M-h_P}{r_M-r_P}\right)$$ We ignore occultation effects due to the star blocking the disk, which we did consider in the inner and outer warp models. Based on our experience with the warped disks and the typical dimensions of the disk, these are negligible effects that will only play a role very close to edge on. We also do not consider situations where the wave on the near side of the disk can block the far side of the disk. The exclusion of these two effects prevents us from considering the spiral wave at inclinations very close to $90^{\circ}$. Stellar Flux ============ Next we consider the flux coming from the star. We follow a similar procedure as above where the equation for the stellar flux is modulated by a binary function ($\varepsilon(\phi,\psi)$) which equals 1 when that part of the star is not blocked by the disk and it equals zero when the star is blocked by the disk.
The angles $\phi,\psi$, shown in figure \[star\], are the azimuthal and polar angles of a point on the surface of the star relative to the center of the star and the z axis (the same z axis as for the disk). The x-axis of this coordinate system is in the same direction as the line of sight, and will differ from the x-axis of the disk by the angle $\alpha$. The flux from the star is: $$F_*=B_{\nu}\int\int_{surface}\varepsilon(\phi,\psi)d{\bf A}$$ To determine the surface over which we integrate, we need to know the points of the star that are seen by the observer (ie. which side of the star is facing the observer). These points will be those that have ${\bf u}\cdot d{\bf A}\geq0$ where $$\begin{aligned} {\bf u}=\sin i\hat{x}+\cos i\hat{z}\\ d{\bf A}=R_*^2\sin\psi d\psi d\phi(\cos\phi\sin\psi\hat{x}+\sin\phi\sin\psi\hat{y}+\cos\psi\hat{z})\end{aligned}$$ The evaluation of $\varepsilon(\phi,\psi)$ will depend on the type of warp/wave and the orientation of the observer. First consider inclinations less than $90^{\circ}$. In this case the warp/wave stretching above the midplane may block some of the star. The entire star will be blocked if the following condition is met: $$h(r,\alpha)-r\tan(\pi/2-i)>R_*$$ where $r$ is the location of the peak of the vertical disturbance and $h$ is the maximum height of the warp or wave at the angle $\alpha$. The exact value of $r$ and $h$ will depend on whether we are considering the outer warp, inner warp, or spiral wave (ie. $r=r_{min}$, $h(r,\alpha)=gr_{min}\cos(\alpha)$ for the inner warp). This condition is illustrated in figure \[star\_block\] for the inner warp, which is the disk that is the most likely to occult the star. None of the star will be blocked if the inclination is less than $i_{lim}$ (discussed earlier).
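The hemisphere selection above reduces to a sign test on ${\bf u}\cdot d{\bf A}$. As an illustration (the function name and sample angles are ours, not from the paper), a minimal Python sketch of this test, assuming the standard spherical unit normal:

```python
import math

def facing_observer(phi, psi, i):
    """True when the stellar surface element at (phi, psi) faces the observer.

    The outward unit normal of the element is
        n = (cos(phi) sin(psi), sin(phi) sin(psi), cos(psi))
    and the viewing direction is u = (sin(i), 0, cos(i)), so the element
    lies on the visible hemisphere when u . n >= 0.
    """
    u_dot_n = math.sin(i) * math.cos(phi) * math.sin(psi) + math.cos(i) * math.cos(psi)
    return u_dot_n >= 0.0
```

For a pole-on star ($i=0$), for example, only the upper hemisphere ($\psi<\pi/2$) passes the test, as expected.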
A generic version of the equation for $i_{lim}$ that can be applied to all of the disks is: $$\tan i_{lim}=\frac{r-R_*\cos i_{lim}}{h(r,\alpha)+R_*\sin i_{lim}}$$ When the inclination falls between these two limits only a fraction of the star is blocked. We can use the discussion of TB96 section A2.2 to determine if a point of the star’s surface is hidden by the disk. The point Q is a point on the surface of the star that intersects the line of sight and the upper edge of the disk. If the point $N(\phi,\psi)$ lies above Q then the observer can see this part of the star, otherwise it is hidden and $\varepsilon(\phi,\psi)=0$. The vertical coordinate of Q, $z_Q$, is the greatest root of the following equation: $$(1+\tan^2 i)z_Q^2-2\tan i(h\tan i-r\cos\alpha)z_Q+(h\tan i-r\cos\alpha)^2-R_*^2+R_*^2\sin^2\psi\cos^2\phi=0$$ If $z_N\geq z_Q$ then $\varepsilon(\phi,\psi)=1$ otherwise $\varepsilon(\phi,\psi)=0$ with $z_N$ being given by: $$z_N=-R_*\sin\psi\sin\phi\sin i+R_*\cos\psi\cos i$$ For the inner and outer warp we consider inclinations greater than $90^{\circ}$ where the disk can still block part of the star, although it is less likely because the disk curves away from the observer. This is illustrated in figure \[star\_block2\] for the inner warp, but can also apply to the outer warp in the limit that $h$ goes to zero. The point at which the line of sight is perpendicular to the normal of the disk sets a limit to the distance z above the midplane that an observer can see. If this distance is less than the radius of the star then some of the star is blocked by the disk. The point D at which the line of sight is perpendicular to the disk occurs when ${\bf u}\cdot{\bf n}=0$ (figure \[star\_block2\]).
If this condition is met at a radius $r_D$ then a point $N(\phi,\psi)$ on the stellar surface will be blocked if $$R_*\cos\psi>h(r_D,\alpha)+r_D\tan(i-\pi/2)$$ If there is no point at which ${\bf u}\cdot{\bf n}=0$ then $r_D=r_{min}$ and the same condition for being able to see the star is used. When this condition is met $\varepsilon(\phi,\psi)=0$, otherwise $\varepsilon(\phi,\psi)=1$. All of these different occultations are combined to determine $\varepsilon(\phi,\psi)$. The flux is determined by integrating over the entire surface and added to the flux from the disk to create the observed SED.
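The quadratic for $z_Q$ given above can be checked numerically. A short Python sketch (the function name and sample values are ours; it returns `None` when the discriminant is negative, i.e. when the line of sight misses the star at that $(\psi,\phi)$):

```python
import math

def z_Q(i, h, r, alpha, R_star, psi, phi):
    """Greatest root of the quadratic for z_Q quoted in the text:

    (1 + tan^2 i) z^2 - 2 tan(i) (h tan i - r cos alpha) z
        + (h tan i - r cos alpha)^2 - R*^2 + R*^2 sin^2(psi) cos^2(phi) = 0
    """
    t = math.tan(i)
    b0 = h * t - r * math.cos(alpha)
    a = 1.0 + t * t
    b = -2.0 * t * b0
    c = b0 * b0 - R_star ** 2 + R_star ** 2 * math.sin(psi) ** 2 * math.cos(phi) ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # line of sight misses the star at this (psi, phi)
    return (-b + math.sqrt(disc)) / (2.0 * a)
```

Comparing $z_N$ against the returned root then gives $\varepsilon(\phi,\psi)$ directly.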
--- abstract: | In this article we propose some Maple procedures, for teaching purposes, to study the basics of General Relativity (GR) and Cosmology. After presenting some features of GRTensorII, a package specially built to deal with GR, we give two examples of how one can use these procedures. In the first example we build the Schwarzschild solution of Einstein equations, while in the second one we study some simple cosmological models.   \ [**Keywords:**]{} general relativity $\bullet$ GRTensorII $\bullet$ cosmology [**PACS (2010):**]{} 98.80.-k $\bullet$ 04.30.-w $\bullet$ 01.40.Ha\ author: - | [**Ciprian A. Sporea[^1], Dumitru N. Vulcanov**]{}[^2]\   \ [West University of Timişoara, Faculty of Physics,]{}\ [V. Parvan Ave. no. 4, 300223, Timişoara, Romania]{}\ title: '**Using Maple + GRTensorII in teaching basics of General Relativity and Cosmology**' --- Introduction ============ Cosmology (i.e. the modern theory of Universe dynamics) has become in the last two decades an attractive field of human knowledge, accompanied also by an intense media campaign. This was possible, among other reasons, because in this time period several space missions performed cosmological measurements - like COBE, WMAP, Planck, BICEP2 - thus transforming cosmology from a purely theoretical field into an experimental one as well. Thus teaching cosmology in physics faculties, even at undergraduate level, has become a compulsory topic. Cosmology is based on two major pillars [@8],[@13]: astrophysics as a phenomenological tool and general relativity (GR) as the main theory, thus making it, unfortunately, difficult to teach to undergrad students.
The mathematical structure of GR is based on differential geometry [@9], [@10] and learning it means that the student must first get familiar with the main instruments of differential geometry (such as tensor calculus on curved manifolds, Riemannian curvature, metric connection, covariant derivative, etc), which often means cumbersome and lengthy hand calculations. To illustrate these facts let us recall that GR is based on field equations known as Einstein equations, namely $$\label{EE} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+ \Lambda g_{\mu\nu}=-\kappa T_{\mu\nu}$$ where $R_{\mu \nu}$ is the Ricci tensor, $g_{\mu\nu}$ the metric tensor and $T_{\mu\nu}$ represents the stress-energy tensor. We denoted by $\Lambda$ the cosmological constant and $\kappa=8\pi G/c^4$. The components of the Ricci tensor are given by [@8],[@9] $$\label{Ricci} R_{\mu\nu}=\partial_\lambda\Gamma^\lambda_{\ \mu\nu}-\partial_\nu\Gamma^\lambda_{\ \mu\lambda}+\Gamma^\lambda_{\ \mu\nu}\Gamma^\sigma_{\ \lambda\sigma}-\Gamma^\sigma_{\ \mu\lambda}\Gamma^\lambda_{\ \nu\sigma}$$ where $\Gamma^\lambda_{\ \mu\nu}$ are the so called Christoffel symbols which in Riemannian geometry [@10] describe the structure of the curved space-time underlying GR; the associated metric tensor is compatible with the connection described by these Christoffel symbols, namely $$\label{Chris} \Gamma^\lambda_{\ \mu\nu} = \frac{1}{2}g^{\lambda\sigma}\left( \partial_\mu g_{\nu\sigma} + \partial_\nu g_{\mu\sigma} - \partial_\sigma g_{\mu\nu}\right)$$ All these make even the most determined students lose interest in studying cosmology (and GR too). Dozens of pages with hundreds of terms containing partial differentials to be hand processed could scare any student (and not only!).
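To see the Christoffel formula in action without any symbolic package, one can also evaluate it numerically with finite differences. The sketch below (all names are ours; purely illustrative) applies the standard formula, with its conventional factor of $1/2$, to the 2-sphere metric $ds^2=d\theta^2+\sin^2\theta\, d\phi^2$, for which $\Gamma^\theta_{\ \phi\phi}=-\sin\theta\cos\theta$ and $\Gamma^\phi_{\ \theta\phi}=\cot\theta$ are known in closed form:

```python
import math

def metric_sphere(x):
    """Metric g_{mu nu} of the unit 2-sphere in coordinates x = (theta, phi)."""
    theta, _ = x
    return [[1.0, 0.0], [0.0, math.sin(theta) ** 2]]

def christoffel(metric, x, eps=1e-6):
    """Gamma^lam_{mu nu} from
    Gamma = (1/2) g^{lam sig} (d_mu g_{nu sig} + d_nu g_{mu sig} - d_sig g_{mu nu}),
    using central finite differences for the derivatives (2x2 metrics only)."""
    n = len(x)

    def dg(k):  # d g_{mu nu} / d x^k by central differences
        xp, xm = list(x), list(x)
        xp[k] += eps
        xm[k] -= eps
        gp, gm = metric(xp), metric(xm)
        return [[(gp[a][b] - gm[a][b]) / (2.0 * eps) for b in range(n)] for a in range(n)]

    d = [dg(k) for k in range(n)]  # d[k][mu][nu] = d_k g_{mu nu}
    g = metric(x)
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]
    gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for lam in range(n):
        for mu in range(n):
            for nu in range(n):
                s = 0.0
                for sig in range(n):
                    s += ginv[lam][sig] * (d[mu][nu][sig] + d[nu][mu][sig] - d[sig][mu][nu])
                gamma[lam][mu][nu] = 0.5 * s
    return gamma
```

Even this toy check makes the bookkeeping burden of hand computation evident, which is precisely the motivation for a package like GRTensorII.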
As today's students are day by day more skilled in computer manipulation and have a more and more advanced practice in programming, the modern teaching of GR and cosmology should make intensive use of computer facilities for algebraic programming, tensor manipulation and of course numerical and graphical processing [@11], [@14], [@15]. Computer algebra has been in the view of physicists since the beginning of computer science, both for teaching and research purposes. Computer algebra (or algebraic programming) codes evolved from the early days of the REDUCE package (see for example [@19],[@20], [@11]) to recent developments using integrated platforms such as Maple and Mathematica in different fields of physics, not only in general relativity (see for example [@11], [@17], [@18] and more recently [@22], [@21]). Some years ago [@4] we published our experience in this direction using the REDUCE platform. Unfortunately, in recent years REDUCE lost the market in favour of more integrated and visual platforms such as Maple [@1] and Mathematica; thus we adapted our experience and program packages to Maple and made use of the free package GRTensorII [@2], adapted for doing GR. The aim of the present article is to report our new experience in this direction. The article is organised as follows. The next section introduces the main features of GRTensorII for doing tensorial symbolic computation in GR and Riemannian geometry. Section no. 3 describes the way we can obtain an exact solution of Einstein equations. We used again, as in the main classical texts on GR, the Schwarzschild solution. This is the most famous solution, used intensively today in describing the motion in the solar system and in studying black-hole physics [@12]. The last section is dedicated to describing how one can use Maple and GRTensorII for cosmology (and teaching it).
In both of the above sections we give the main Maple commands which can be put together to form short programs to be used during the computer lab hours and even during the lectures. The article ends with a short section where we present the main conclusions and some ideas for future developments. Short presentation of the GRTensorII package ============================================= GRTensorII is a computer algebra package built within the Maple platform as a special set of libraries [@1]. It is a freely distributed package (see [@2]) and it is adapted for computer algebra manipulations in general relativity. Thus it is designed for dealing with tensors and other geometric objects specific to Riemannian geometry (a metric that is compatible with the connection, symmetric connection - torsion free manifolds). In what follows, we will present some of the main features offered by GRTensorII. The library is based on a series of special commands all starting with $"gr"$ (for example $grcalc, grdisplay, gralter, grdefine$, etc) for dealing with a series of (pre)defined geometric objects such as the metric tensor, Ricci tensor and scalar, Einstein tensor, Christoffel symbols, etc. To start the GRTensorII package one must type in a (new) Maple session the following commands > restart; > grtw(); The restart command causes the Maple kernel to clear its internal memory so that Maple acts (almost) as if just started. The second command initializes the GRTensorII package and gives some information about the current version. The GRTensorII library allows us to build our own space-time manifold. The easiest way to do this is by creating a metric tensor $ g_{\mu\nu} $ with the help of the $>makeg( )$ command. The result of running this command will be a special ASCII file containing the main information about the metric, stored in a folder called “metrics” within the GRTensorII library.
The folder “metrics” also contains a collection of predefined metric files distributed with the package (e.g. schw.mpl, schmidt.mpl, vdust.mpl, etc). Another way in which one can specify a space-time manifold is by loading a predefined metric from the “metrics” folder. This can be done with the help of two commands > qload(metricName); > grload(metricName, metricFile); The geometry built using the $>makeg()$, $>qload() $ and $>grload()$ commands will fix the background on which all the later operations and calculations will be performed. One of the main advantages of the GRTensorII library is that it allows us to do complicated operations on tensorial objects, regardless of how many indices those objects possess. The main command that permits us to do those calculations is $>grcalc( objectSeq )$, which calculates the components of tensors. For example $>grcalc ( R(dn,dn), R(up,dn,dn,dn) )$ asks the program to calculate the covariant components of the Ricci tensor $ R_{\mu\nu} $ and the components of the standard curvature Riemann tensor $ R^{\sigma}_{\mu\nu\lambda} $. The command $ >grdisplay()$ can be used to display the components of GRTensorII objects which have been previously calculated for a particular space-time. Before displaying the calculated components of an object it is advisable to use the command $>gralter()$ in order to simplify them. The $>grcalc()$ command calculates all the components of a given object, so if one wants to calculate only a specific component then one can use the command $>grcalc1 ( object, indexList )$. For example: $>grcalc1 ( R(dn,dn,dn,dn), [t, r, \theta, \phi])$. Besides the predefined objects that exist in GRTensorII we can also define new objects (scalars, vectors, tensors) with the help of the command $>grdef()$.
For example $>grdef ( `G2\{a\ \ b\} := R\{a\ \ b\} - (1/2)*Ricciscalar*g\{a\ \ b\} + Lambda*g\{a\ \ b\}`)$ defines a covariant two index tensor, $G2_{a b}$, which is explicitly assigned to an expression involving a number of previously defined (or predefined) tensors. The syntax of the $> grdef()$ command follows naturally the usual tensorial operations which define the new objects. Another important command of GRTensorII is $> grcomponent()$ which allows us to extract a certain component of a tensorial object. The extracted component can be used as a standard Maple object for later processing (symbolically, graphically and numerically). Although GRTensorII was designed initially for Riemannian differential geometry it can be easily extended to other types of geometries, such as ones with torsion or higher order alternative theories of gravity [@5], [@6]. We end this section by making the important observation that using GRTensorII does not impose any restriction on using all the numerical, graphical and symbolic computation facilities of Maple (as it happens with other packages, even ones for Maple). Thus we can combine all these facilities for an efficient use of the Maple platform. Example 1: Schwarzschild type solutions ======================================= General relativity and its applications (such as cosmology) are based on Einstein equations (\[EE\]) as main field equations. They have many exact solutions, although these second order nonlinear differential equations have no unique analytical general solution. The most famous exact solution of Einstein equations is the Schwarzschild solution [@8], [@9] describing the gravitational field around a pointlike mass M (or outside a sphere of mass M). This solution is used today for describing black-hole dynamics and was used in the first attempts at applying GR to the motion of planets and planetoids in our solar system.
It is obvious that from a pedagogical point of view finding an exact solution of Einstein equations could be a good introductory lesson in applications of GR. Next we will derive this solution following the natural steps: - identifying the symmetries of the system for which we build the solution; - building a metric tensor compatible with the above symmetry; - building the shape of the stress-energy tensor components (if any exist); - calculating the Ricci tensor and the components of Einstein equations; - solving the above equations after a close inspection of them. These steps could be done manually and it usually takes several hours of hard calculation (even though straightforward, and even for an experienced person). Our advice for anyone who wants to teach GR and/or cosmology is to do this traditional step with the students. It will be a good lesson and a motivation to proceed in using algebraic computing facilities (here Maple+GRTensorII). Thus the above steps are clearly transposable into computer commands in Maple+ GRTensorII. For the Schwarzschild solution the symmetry is clearly spherical and static (no time dependence including the time inversion, namely $t \rightarrow -t$ ). Thus we will use a spherically symmetric metric tensor as [@8]: $$\label{sferic} ds^2=e^{2\lambda(r)}dt^2 + e^{2\mu(r)}dr^2 + r^2\left( d\theta^2 + \sin^2(\theta)d\phi^2\right)$$ in spherical coordinates $(t,r,\theta,\phi)$, where $\lambda(r)$ and $\mu(r)$ are the two unknown functions of the radial coordinate $r$ to be found at the end. Thus the student, already in front of a computer or workstation with a Maple session started, will be guided to compose the next sequence of commands > restart; grtw(); > makeg(sferic); >...... > grdisplay(metric); grdisplay(ds); where after the two commands for starting the GRTensorII the command $> makeg$ will create the ASCII file $sferic.mpl$ containing the information on the metric we will build.
The series of dots above represent those steps where the user has to answer with the type of the metric, symmetry and of course its components one by one. The last two lines given above calculate the metric and display its shape in the form of a matrix. After this we can continue to do some calculations or to close the session. The metric we produce can be loaded anytime later in other sessions. The next step of our demonstrative program will be to point out and calculate the Einstein equations, but not before introducing the stress-energy components. It is obvious that in this case the stress-energy components vanish as we calculate the gravity field outside the source (the pointlike mass or a sphere). Thus in this case we will solve the so called vacuum Einstein equations i.e. $R_{\mu \nu}=0$ [@9]. In this view we will build a sequence of Maple commands (for a new session): > restart; grtw(); > qload(sferic); grcalc(R(dn,dn)); > gralter(R(dn,dn),simplify); grdisplay(R(dn,dn)); where after loading the metric tensor with the command $>qload$ we calculate the Ricci tensor components, simplify and display them as a 4x4 matrix (using the last $>grdisplay$ command). It is a good and interesting experience, before proceeding with the solving of Einstein eqs. to insert in the above lines the next ones > grcalc(Chr(up,dn,dn)); grdisplay(Chr(up,dn,dn)); immediately after loading the metric tensor with $> qload$, which very quickly calculates and displays the 40 components of the Christoffel symbols using relation (\[Chris\]). Of course if we want maximum effect of these, before calculating with Maple and GRTensorII we advise the teacher to calculate, by hand, together with the students at least 3 or 4 of these components. This will take some time of hard calculation, with many mistakes, when done for the first time.
To continue it is now much simpler to extract the components of the Ricci tensor one by one as Maple objects in order to process them to solve the obtained equations. It will be a sequence of $>grcomponent$ commands, namely: > ecu0:=grcomponent(R(dn,dn),[t,t]); > ecu1:=grcomponent(R(dn,dn),[r,r]); > ecu2:=grcomponent(R(dn,dn),[theta,theta]); > ecu3:=grcomponent(R(dn,dn),[phi,phi]); obtaining the four Einstein equations of the problem. The rest of the Ricci tensor components are zero. A simple inspection of the above obtained four equations reveals that only two of them are independent. Also we can eliminate the second order derivative of the $\lambda(r)$ function between $ecu0$ and $ecu1$. These can be checked by using the next command lines > expand(simplify(ecu2-ecu3/sin(theta)^2)); > ecu0;ecu1;ecu2; > l2r:=solve(subs(diff(lambda(r),r,r)=l2r,ecu0),l2r); > expand(simplify(subs(diff(lambda(r),r,r)=l2r,ecu0))); > ecu11:=expand(simplify(subs(diff(lambda(r),r,r)=l2r,ecu1))); The first of the above commands checks the equality between $ecu2$ and $ecu3$, which simply gives a “0” (zero) and the next ones simply display the remaining three equations. The command that follows extracts the second order derivative of $\lambda(r)$ from $ecu0$ (substituting it with an intermediate constant $l2r$) and the next one substitutes the result in $ecu1$ and we obtain a new equation $ecu11$. Thus we have now only two equations, $ecu11$ and $ecu2$, namely $$\label{ecu11} \frac{2}{r}\partial_r\lambda(r)+\frac{2}{r}\partial_r\mu(r)=0$$ $$\label{ecu2} 1-e^{-2\mu(r)}+ \left[ r\,\partial_r\mu(r)-r\,\partial_r\lambda(r) \right]e^{-2\mu(r)}=0$$ These two differential equations need to be solved next, in order to obtain the solution. Of course we can now follow a classical strategy solving them manually as is done in any textbook (see [@8] for example).
But it is also possible to continue with Maple, using the $>dsolve$ command for solving differential equations (including systems of differential equations). Thus we write the next command > dsolve({ecu11,ecu2},{mu(r),lambda(r)}); which gives us the function $\mu(r)$ as $$\begin{aligned} \mu(r)=\frac{1}{2} \ln \left( \frac{r}{r e^{C1} -1} \right) +\frac{C1}{2} \nonumber\end{aligned}$$ In particular a close inspection of both equations $(ecu11, ecu2)$ reveals that $ecu11$ is simply a relation between the derivatives of the two functions, namely $$\begin{aligned} \frac{d \lambda (r)}{d r} +\frac{d \mu (r)}{d r} =0 \nonumber\end{aligned}$$ This shows that we will need only one integration constant and we can solve the equations only for one function as done above. One can write the constant $C1$ as $$\begin{aligned} C1= \ln \left( \frac{1}{r_s} \right) \nonumber\end{aligned}$$ where the newly introduced constant $r_s$ will be determined later. With these we can rewrite $ecu2$ and, applying again the $>dsolve$ command on it, we obtain the Schwarzschild (type) solution as $$\label{FSS} ds^2 = \left(1-\frac{r_s}{r} \right)c^2dt^2 + \left(1-\frac{r_s}{r}\right)^{-1}dr^2 + r^2\left[d\theta^2 + \sin^2(\theta)d\phi^2 \right]$$ The constant $r_s$ is known as the Schwarzschild radius and can be determined using the Newtonian limit of the field equations as it is done in any textbook (see [@8] for example). Its precise value is $r_s= 2MG/c^2 $, but this has nothing to do with algebraic computing.
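Independently of Maple, one can verify numerically that $\lambda(r)=\frac{1}{2}\ln(1-r_s/r)$ and $\mu(r)=-\lambda(r)$ satisfy the two reduced equations (\[ecu11\]) and (\[ecu2\]). A plain-Python sketch (the function name and sample values are ours):

```python
import math

def check_schwarzschild(r, r_s, eps=1e-6):
    """Residuals of the two reduced vacuum equations (ecu11, ecu2) for
    lambda(r) = (1/2) ln(1 - r_s/r) and mu(r) = -lambda(r);
    the radial derivatives are taken by central finite differences."""
    lam = lambda rr: 0.5 * math.log(1.0 - r_s / rr)
    mu = lambda rr: -lam(rr)
    dlam = (lam(r + eps) - lam(r - eps)) / (2.0 * eps)
    dmu = (mu(r + eps) - mu(r - eps)) / (2.0 * eps)
    ecu11 = (2.0 / r) * dlam + (2.0 / r) * dmu
    ecu2 = 1.0 - math.exp(-2.0 * mu(r)) + (r * dmu - r * dlam) * math.exp(-2.0 * mu(r))
    return ecu11, ecu2
```

Both residuals vanish to within the finite-difference accuracy for any $r>r_s$, which makes for a quick classroom sanity check of the solution.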
The most used metric for describing the dynamics of the universe in a cosmological model is the Friedman-Robertson-Walker metric (FRW), which in spherical coordinates has the following line element [@9] $$\label{cm1} ds^2=c^2dt^2-a^2(t)\left[\frac{dr^2}{1-kr^2}+r^2(d\theta^2+\sin^2\theta d\phi^2)\right]$$ where $k$ is the curvature constant and we are using the $(+,-,-,-)$ signature for the metric. Usually, this $k$ constant is taken to be $1$ (for closed universes), $-1$ (for open universes) and $0$ for a flat one. In (\[cm1\]) we denoted by $a(t)$ the scale factor, which in the end will be the only unknown function of a cosmological model. This scale factor is directly related to the evolution of the universe. For the FRW metric (obtained from the cosmological principle, i.e. the universe is spatially homogenous and isotropic) the scale factor is a function of time only. By introducing the FRW metric (\[cm1\]) into the Einstein equations (\[EE\]) and assuming that the energy-momentum tensor is of a perfect fluid form [@8] $$\label{rm3} T^{\mu\nu}=\left( \rho+\frac{p}{c^2} \right) u^\mu u^\nu-p g^{\mu\nu}$$ one arrives at the Friedman-Lemaitre equations $$\label{rm4} \begin{split} &\ddot a=-\frac{4\pi G}{3}\left(\rho+\frac{p}{c^2} \right)a + \frac{1}{3}\Lambda c^2a\\ &\dot a^2=\frac{8\pi G}{3}\rho a^2+\frac{1}{3}\Lambda c^2a^2-c^2k \end{split}$$ If the cosmological constant $\Lambda$ is set to zero in eqs.(\[rm4\]) then the equations are called simply the Friedman equations. In eq. (\[rm3\]) $u^\mu$ represents the 4-velocity of the cosmological fluid, while $p$ and $\rho$ stand for the pressure and the mass density of the fluid, respectively. In the same manner as done in [@3] we can compose a sequence of GRTensorII commands for obtaining the Friedman equations (\[rm4\]). A student can write the program on a computer in less than an hour, to do a job that, if done by hand, would take several good hours even for a very good student.
The basic lines of the GRTensorII program are as follows: > restart; > grtw(); qload(metrica_FRW); > grdef(`u{^a}:=[1,0,0,0]`); > grdef(`T{^a ^b}:=(rho(t)+p(t)/c^2)*u{^a}*u{^b}-p(t)*g{^a ^b}`); > ... > ec0:=R1_0-T1_0; > ec1:=R1_1-T1_1; > ec1:=subs(diff(a(t),t,t)=-(1/6)*K*c^4*rho(t)*a(t)- (1/2)*K*c^2*p(t)*a(t)+(1/3)*Lambda*c^2*a(t),ec1): ec0; ec1; The first line of commands starts the GRTensorII and loads the FRW metric (\[cm1\]). In the next lines we define the 4-velocity and the stress-energy tensor (\[rm3\]) of the cosmological fluid. Now follows a series of more technical commands which can be found in the supplementary web material [@16], commands that allow us to calculate and write the final form of the Friedman equations (\[rm4\]). Friedman equations (\[rm4\]) are in fact a system of two differential equations with three unknowns: $a(t)$, $p$ and $\rho$. Thus one needs to find a third equation in order to completely solve the problem. In standard cosmology we use as a third relation the equation of state $$\label{rm5} p(t)=w\rho(t)c^2$$ where $w$ is a constant ($w=0$ for pressureless ’dust’, $w=1/3$ for radiation and $w=-1$ for vacuum). Let us further introduce the dimensionless quantities (see for example [@9]), usually called density parameters, which are defined by $$\label{rm5a} \Omega_i(t)\equiv \frac{8\pi G}{3H^2(t)}\rho_i(t)$$ where $ H(t)=\dot a(t)/a(t)$ is known as the Hubble parameter and $i$ stands for matter, radiation and the cosmological constant $\Lambda$.
Besides these three quantities one can also define a curvature density parameter $$\label{rm5b} \Omega_k(t)= -\frac{c^2k}{H^2(t)a^2(t)}$$ Rewriting the second equation of (\[rm4\]) in terms of the newly defined density parameters we arrive at a very simple expression $$\label{rm6} \Omega_m + \Omega_r + \Omega_{\Lambda} + \Omega_k = 1$$ Taking all the above into account and introducing a normalised scale factor $ A(t)=a(t)/a_0(t_{now})$ (with $a_0(t_{now})$ representing the value of the scale factor at the present epoch) we can finally write an equation for the evolution of the scale factor, namely $$\label{rm7} H^2(t)=H_0^2(\Omega_{r,0}A^{-4}+ \Omega_{m,0}A^{-3}+ \Omega_{k,0}A^{-2} +\Omega_{\Lambda,0})$$ where $\Omega_{i,0}$ are the values of the densities measured at the present epoch. Below we give the main GRTensorII command lines with the help of which we arrive at equation (\[rm7\]). The complete sequence of commands can be found in the supplementary web material [@16]. > ec1:=subs(diff(a(t),t)=H(t)*a(t),ec1); > rho(t):=rho_m0*(a_0/a(t))^3+rho_r0*(a_0/a(t))^4; > ec1:=subs(Omega_k0=1-Omega_m0-Omega_r0-Omega_Lambda0,ec1); We can now use the other numerical and computational facilities of Maple in order to numerically solve equation (\[rm7\]) and express the results as plots (see Fig. 1) of the scale factor as a function of cosmic time.
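As a cross-check independent of Maple, equation (\[rm7\]) can be integrated with a few lines of plain Python. The sketch below is our own illustrative routine (with $H_0=1$ and unit-free time), using a fixed-step 4th-order Runge-Kutta scheme; for a flat, matter-only universe it reproduces the analytic solution $A(t)=(1+\tfrac{3}{2}t)^{2/3}$:

```python
import math

def scale_factor(om_m, om_r, om_l, t_end, dt=1e-4):
    """Integrate dA/dt = sqrt(om_m/A + om_r/A^2 + om_l*A^2 + om_k),
    i.e. the normalised Friedman equation with H_0 = 1, A(0) = 1 and
    om_k = 1 - om_m - om_r - om_l, by fixed-step 4th-order Runge-Kutta.
    Returns A(t_end)."""
    om_k = 1.0 - om_m - om_r - om_l

    def f(a):
        return math.sqrt(om_m / a + om_r / a ** 2 + om_l * a ** 2 + om_k)

    a, t = 1.0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = f(a)
        k2 = f(a + 0.5 * h * k1)
        k3 = f(a + 0.5 * h * k2)
        k4 = f(a + h * k3)
        a += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return a
```

Such an independent check is useful in the classroom before trusting the Maple output below.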
For that we use the following code (see also the supplementary web material [@16]): > ecu:=diff(A(t), t)-sqrt(Omega_m0/A(t)+Omega_r0/A(t)^2+ Omega_Lambda0*A(t)^2+1-Omega_m0-Omega_r0-Omega_Lambda0); > ecu_a:=subs(Omega_m0 =0.3,Omega_Lambda0=0.7,Omega_r0=0,ecu); > sys1:={ecu_a,A(0)=1}: > f1:=dsolve(sys1,numeric): > odeplot(f1,t=-2..2,axes=boxed,numpoints=1000,color=black); [0.47]{} ![Time evolution of the scale factor[]{data-label="fig1ab"}](plot3 "fig:"){width="\textwidth"} [0.47]{} ![Time evolution of the scale factor[]{data-label="fig1ab"}](plot2 "fig:"){width="\textwidth"} Conclusions and further developments ==================================== The article describes a way in which some simple computer programs in Maple and GRTensorII can be used in teaching GR and cosmology. It is obvious that the speed of learning the main concepts in GR (and subsequently differential geometry) can be successfully enhanced, avoiding large hand computation steps and a lot of natural mistakes. On the other hand it is clear that the lectures will need to take place in a computer lab, which can be another way to increase the attractiveness of GR and cosmology. We illustrated our experience with short and simple commands, without the sophisticated tricks a professional in the field normally uses (for example building procedures and libraries via the symbolic computation facilities of Maple). The small and short programs we described here can also be used as a strong basis for further developments in view of more sophisticated and advanced examples. For instance one can develop the above procedures for cosmology in generalised theories of gravity, like those with higher order Lagrangians (as we did in [@6]). Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by a grant of the Romanian National Authority for Scientific Research, Programme for research-Space Technology and Advanced Research-STAR, project nr.
72/29.11.2013 between Romanian Space Agency and West University of Timisoara. C.A.Sporea was supported by the strategic grant POSDRU/159/1.5/S/137750, Project “Doctoral and Postdoctoral programs support for increased competitiveness in Exact Sciences research” cofinanced by the European Social Fund within the Sectorial Operational Program Human Resources Development 2007 – 2013. [99]{} S. Weinberg, “Cosmology”, Oxford University Press, 2008. B. Schutz, “A first course in General Relativity”, Cambridge University Press, 2009. M. P. Hobson, G. P. Efstathiou, A. N. Lasenby, “General Relativity, An Introduction for Physicists”, Cambridge University Press, 2006. L.I. Nicolaescu, “Lectures on the Geometry of Manifolds”, World Scientific, 1996. J. Grabmeier, E. Kaltofen, V. Weispfenning, “Computer Algebra Handbook: Foundations · Applications · Systems”, Springer, 2003. F.W. Hehl; R.A. Puntigam; H. Ruder, “Relativity and Scientific Computing”, Springer, 1996. D. Stauffer; F.W. Hehl; N. Ito; V. Winkelmann; J.G. Zabolitzky, “Computer Simulations and Computer Algebra”, Springer-Verlag, 1993. R.A. d’Inverno, “Algebraic computing in general relativity”, GRG 6(6), 576-593 (1975). C. Heinicke, F.W. Hehl, “Computer algebra in gravity”, http://arxiv.org/abs/gr-qc/0105094 F.Y. Wang, “Physics with MAPLE: The Computer Algebra Resource for Mathematical Methods in Physics”, WILEY-VCH Verlag GmbH Co. KGaA, Weinheim, 2006. F.Y.-H. Wang, “Relativistic orbits with computer algebra”, Am. J. Phys. 72, 1040 (2004). Maple for Physics Students: Complete Set of Lectures. http://www.maplesoft.com/applications/view.aspx?SID=4743 Trandafir et al., “Elementary tight-binding method for simple electronic structure calculations - An educational approach to modeling conjugated dyes for dye-sensitized solar cells”, Rom. Rep. Phys. 66, 574 (2014). F.A. Ghergu, D.N. Vulcanov, “The Use of the algebraic programming in teaching general relativity”, Comput.Sci.Eng. 3 (2001) 65-70.
Maple User Manual, Waterloo Maple Inc., 2005.

http://grtensor.phy.queensu.ca/

I.I. Cotaescu, C. Crucean, C.A. Sporea, “Elastic scattering of Dirac fermions on Schwarzschild black holes”, arXiv:1409.7201.

D.N. Vulcanov, “Calculation of the Dirac equation in curved spacetimes with possible torsion using MAPLE and REDUCE”, Comp. Phys. Comm. 154 (3), pp. 205-218.

D.N. Vulcanov, G.S. Djordjevic, C.A. Sporea, “REM - the Shape of Potentials for f(R) Theories in Cosmology and Tachyons”, in Cosmology and Particle Physics beyond Standard Models. Ten Years of the SEENET-MTP Network, edited by Luis Alvarez-Gaume, Goran S. Djordjevic, Dejan Stojkovic, CERN-Proceedings-2014-001, pp. 165-169.

D.N. Vulcanov, V.D. Vulcanov, “Maple+GrTensorII libraries for cosmology” - http://arxiv.org/abs/cs/0409006v1

D.N. Vulcanov, G.S. Djordjevic, “On cosmologies with non-minimally coupled scalar fields, the ’reverse engineering method’ and the Einstein frame”, Rom. J. Phys. 57 (2012) 1011-1016.

The Supplementary Materials consist of a Maple worksheet illustrating the calculation of the Einstein and Friedmann equations for some simple cosmological models.

[^1]: E-mail: ciprian.sporea89@e-uvt.ro

[^2]: E-mail: vulcan@physics.uvt.ro
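As a cross-check of the worksheet above, the same Friedmann integration can be reproduced outside Maple. Below is a minimal pure-Python sketch (classical RK4; the function and variable names are our own, not from the worksheet), integrating $\dot A=\sqrt{\Omega_{m0}/A+\Omega_{r0}/A^2+\Omega_{\Lambda 0}A^2+1-\Omega_{m0}-\Omega_{r0}-\Omega_{\Lambda 0}}$ forward from $A(0)=1$ in units where $H_0=1$:

```python
import math

# Flat LCDM parameters from the worksheet: Omega_m0=0.3, Omega_Lambda0=0.7, Omega_r0=0.
OM, OR, OL = 0.3, 0.0, 0.7

def a_dot(a):
    """Right-hand side of the Friedmann equation in H0 = 1 units."""
    return math.sqrt(OM / a + OR / a**2 + OL * a**2 + 1.0 - OM - OR - OL)

def evolve(a0=1.0, t_end=2.0, n=2000):
    """Integrate da/dt = a_dot(a) forward from t = 0 with classical RK4."""
    a, h = a0, t_end / n
    for _ in range(n):
        k1 = a_dot(a)
        k2 = a_dot(a + 0.5 * h * k1)
        k3 = a_dot(a + 0.5 * h * k2)
        k4 = a_dot(a + h * k3)
        a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

a2 = evolve()
print(a2)  # accelerating expansion: a(2) well above 1
```

This reproduces the accelerating branch of the Maple plot; integrating backwards toward the big-bang singularity at $a\to 0$ would need more careful step control.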
---
abstract: 'Debris disks are tenuous, dust-dominated disks commonly observed around stars over a wide range of ages. Those around main sequence stars are analogous to the Solar System’s Kuiper Belt and Zodiacal light. The dust in debris disks is believed to be continuously regenerated, originating primarily with collisions of planetesimals. Observations of debris disks provide insight into the evolution of planetary systems; the composition of dust, comets, and planetesimals outside the Solar System; as well as placing constraints on the orbital architecture and potentially the masses of exoplanets that are not otherwise detectable. This review highlights recent advances in multiwavelength, high-resolution scattered light and thermal imaging that have revealed a complex and intricate diversity of structures in debris disks, and discusses how modeling methods are evolving with the breadth and depth of the available observations. Two rapidly advancing subfields highlighted in this review include observations of atomic and molecular gas around main sequence stars, and variations in emission from debris disks on very short (days to years) timescales, providing evidence of non-steady state collisional evolution particularly in young debris disks.'
author:
- 'A. Meredith Hughes,$^1$ Gaspard Duchêne,$^{2,3}$ Brenda C. Matthews$^{4,5}$'
title: 'Debris Disks: Structure, Composition, and Variability'
---

circumstellar disks, planet formation, extrasolar planetary systems, main sequence stars, planetesimals, circumstellar matter

1. Debris disks are a common phenomenon around main sequence stars, with current detection rates at $\sim 25$% despite the limited sensitivity of instrumentation (for Kuiper Belt analogues, a few $\times$ Solar System levels for even the nearest stars, and orders of magnitude from Solar System levels for exozodis) indicating that this figure is unquestionably a lower limit.
Debris disks are more commonly detected around early-type and younger stars, and they frequently show evidence of two dust belts, like the Solar System.

2. High-resolution imaging of structures in outer debris disks reveals a rich diversity of structures including narrow and broad rings, gaps, haloes, wings, warps, clumps, arcs, spiral arms, and eccentricity. There is no obvious trend in disk radial extent with stellar spectral type or age. Most of the observed structures can be explained by the presence of planets, but for most structures there is also an alternative proposed theoretical mechanism that does not require the presence of planets.

3. The combined analysis of a debris disk’s SED with high-resolution (polarized) scattered light images and thermal emission maps is a powerful method to constrain the properties of its dust. Despite some shortcomings in current models, a coherent picture arises in which the grain size distribution is characterized by a power law similar to that predicted from collisional models, albeit with a minimum grain size that is typically a few times larger than the blowout size. Beyond the ubiquitous silicates, constraints on dust composition remain weak. Remarkably, dust in most debris disks appears to be characterized by a nearly universal scattering phase function that also matches that observed for dust populations in the Solar System, possibly because most dust grains share a similar aggregate shape.

4. Many debris disk systems harbor detectable amounts of atomic and/or molecular gas. Absorption spectroscopy of a few key edge-on systems reveals volatile-rich gas in a wide variety of atomic tracers, and there is evidence of an enhanced C/O ratio in at least some systems. Emission spectroscopy reveals that CO gas is common, especially around young A stars. The quantities of molecular gas found in most systems are likely insufficient to strongly affect the dust dynamics or planet formation potential.
While the origin of the gas is still a matter of discussion, there is increasing evidence that the sample is not homogeneous. The molecular gas in some systems is clearly second-generation like the dust, while in other systems the disk is likely to be a “hybrid” with second-generation dust coexisting with at least some primordial gas.

5. Detection of time-variable features in debris disk SEDs, images and light curves on timescales of days to years provides a window into ongoing dynamical processes in the disks and potentially the planetesimal belts as well.

6. Planet-disk interaction is now directly observable in a handful of systems that contain both a directly imaged planet(ary system) and a spatially resolved debris disk. These rare systems provide valuable opportunities to place dynamical constraints on the masses of planets, and thereby to calibrate models of planetary atmospheres.

<!-- -->

1. [*M star exploration:*]{} Most studies of debris disk structure and composition have so far focused on F, G, K, and A stars. While the incidence of M star debris disks appears to be low, the known disks include some of the most iconic systems, including AU Mic, and their gas and dust properties are poorly understood. Given the high frequency of terrestrial planets around M dwarfs, understanding their debris disks is a high priority.

2. [*Physics of Debris Disk Morphology:*]{} Multiwavelength imaging is beginning to untangle the underlying physical mechanisms sculpting debris disk structure. Dynamically-induced disk structures from planets or stellar flybys should affect even the largest grains, whereas those caused by the ISM or radiation effects tend to produce the largest effect on the smallest grains. As limits on gas emission improve, gas becomes a less plausible mechanism for sculpting narrow and eccentric rings in many systems.
Imaging of structures across the electromagnetic spectrum, particularly comparison of ALMA thermal imaging with high-contrast scattered light imaging, will be key for moving from categorization of structure to conceptualization of the underlying physics.

3. [*The connection between hot, warm and cold dust belts:*]{} While multiple components are often detected in debris disk systems, identifying correlations between them has not yet delivered a fully coherent picture. Understanding the relative importance of in situ dust production and dust migration, as well as the physical mechanisms explaining the latter, remains an open question. Future observations sensitive enough to detect dust at intermediate radii between separate belts will help in clarifying this issue.

4. [*Variability:*]{} While studies of gas absorption variability have a long and fascinating history – revealing the dynamics and composition of falling evaporating bodies – studies of the variability of dust, both directly in emission and indirectly through stellar variability, are in their infancy. Currently there are a handful of known instances, each of which has multiple proposed explanations ranging from planetary dynamics to collisional avalanches. While large variations in integrated infrared light are rare, they are important for understanding stochastic events. Furthermore, the recent detection of fast-moving scattered light features in the dust around AU Mic suggests that at least in some cases systems previously assumed to be static might be variable on more subtle spatial and flux contrast scales. Debris disk variability is an area that is wide open for discovery and modeling, particularly in the approaching epoch of [*JWST*]{}.

5. [*Gas statistics and chemistry:*]{} As the number of gas detections in debris disks increases, the opportunity for characterization of atomic and molecular abundances of the gas also grows.
Clear opportunities include surveys designed to measure the incidence of gas emission in a statistically meaningful way, in addition to detection of molecules other than CO and the corresponding opportunity to characterize the composition of exocometary gas. It would also be beneficial to amass rich data sets comparable to that of $\beta$ Pic for a sample of several different objects so that modeling of both the atomic and molecular components can begin to explore the diversity of exosolar gas composition and better distinguish between primordial and secondary gas origins.

DISCLOSURE STATEMENT {#disclosure-statement .unnumbered}
====================

The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

The authors wish to thank the following people for providing feedback and commentary on the article: Christine Chen, Kevin Flaherty, Paul Kalas, Grant Kennedy, Sasha Krivov, Luca Matra, Aki Roberge, Kate Su and Ewine van Dishoeck. The authors thank Eugene Chiang and Mark Wyatt for points of clarification. The authors are also grateful to the following people who agreed to share data used in the preparation of the figures in this review: Dan Apai, John Carpenter, Cail Daley, Bill Dent, Carsten Dominik, Jane Greaves, Paul Kalas, Markus Kasper, Mihoko Konishi, Meredith MacGregor, Sebastian Marino, Julien Milli, Johan Olofsson, Glenn Schneider. A.M.H. gratefully acknowledges support from NSF grant AST-1412647, and G.D. from NSF grants AST-1413718 and AST-1616479.
---
abstract: |
    The Kähler rank was introduced by Harvey and Lawson in their 1983 paper as a measure of the [*kählerianity*]{} of a compact complex surface. In this work we generalize this notion to the case of compact complex manifolds and we prove several results related to this notion. We show that on class $VII$ surfaces, there is a correspondence between the closed positive forms on a surface and those on a blow-up in a point. We also show that a manifold of maximal Kähler rank which satisfies an additional condition is in fact Kähler.\
    [*Mathematics Subject Classification (2010)*]{} 32J27 (primary), 32J15 (secondary)
author:
- 'Ionuţ Chiose$^{\ast}$'
title: The Kähler rank of compact complex manifolds
---

[^1]

Introduction {#introduction .unnumbered}
============

In [@blaine], Harvey and Lawson introduced the Kähler rank of a compact complex surface, a quantity intended to measure how far a surface is from being Kähler. A surface has Kähler rank $2$ iff it is Kähler. It has Kähler rank $1$ iff it is not Kähler but still admits a closed (semi-)positive $(1,1)$-form whose zero-locus is contained in a curve. In the remaining cases, it has Kähler rank $0$. In this paper we generalize the notion of Kähler rank to compact complex manifolds of arbitrary dimension and study its properties. First, we discuss the problem of the bimeromorphic invariance of the Kähler rank. There are examples that show that it is not a bimeromorphic invariant. However, two bimeromorphic surfaces have the same Kähler rank [@chiosetoma]. This was shown by classifying the surfaces of rank $1$. In this paper we take a different approach, local in nature, which was alluded to in [@chiosetoma]. Namely, we study the problem of when a plurisubharmonic function on the blow-up is the pull-back of a smooth function. However, this method leads to an involved system of differential equations, and we were able to solve this system only up to order $3$.
Thus we obtain: \[system2\] Let $X$ be a compact, complex, non-Kähler surface with $b_1(X)=1$, and let $p:X'\to X$ be the blow-up of $X$ at a point. Suppose that $\omega'$ is a closed, positive $(1,1)$ form on $X'$. Then there exists $\omega$ a closed positive $(1,1)$ form on $X$ of class ${\mathcal C}^1$ such that $p^*\omega=\omega'$. Second, we study the manifolds of maximal Kähler rank, i.e., those manifolds that admit a positive $d$-closed $(1,1)$-form of strictly positive volume. It is conjectured that such manifolds are in the Fujiki class ${\mathcal C}$. Under an additional condition, we prove that they are in fact Kähler: Let $X$ be a compact complex manifold of dimension $n$ such that there exists $\{\alpha\}\in H^{1,1}_{BC}(X,{\mathbb R})$ a nef class such that $$\int_X\alpha^n>0$$ Suppose moreover that there exists $h$ a Hermitian metric on $X$ such that $$i\partial\bar\partial h=0, \partial h\wedge\bar\partial h=0$$ Then $X$ is Kähler. The same method yields a simpler proof of a key theorem of Demailly and Păun in [@demaillypaun].

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank Radu Alexandru Todor for his help with the proof of Proposition \[radu\].

Definition and examples
=======================

The Kähler rank of a manifold is the maximal rank a closed positive $(1,1)$-form can reach on the manifold: Let $X$ be a compact complex manifold of dimension $n$. The Kähler rank of $X$, denoted $Kr(X)$, is $$Kr(X)=\max\left\{k\vert \exists\omega\in {\mathcal C}^{\infty}_{1,1}(X,{\mathbb R}),\omega\geq 0, d\omega=0, \omega^k\neq 0\right\}$$ The original definition in [@blaine] for surfaces required that the form $\omega$ appearing in the definition have zeroes in an analytic subset of $X$. Corollary 4.3 in [@chiosetoma] shows that the definition above coincides with the one in [@blaine] for surfaces.
Note that if $Kr(X)=\dim X$ then for every $p\in\overline{0,n}$ the operator $\partial :H^{p,0}(X)\to H^{p+1,0}(X)$ is zero, while, if $Kr(X)=0$, then $\partial :H^{1,0}(X)\to H^{2,0}(X)$ is injective. Indeed, if $\sigma\in H^{1,0}(X)\setminus\{0\}$ satisfies $\partial\sigma=0$, then $i\sigma\wedge\bar\sigma$ is a closed, non-zero positive $(1,1)$-form. As in the surface case considered in [@blaine], on a compact complex manifold $X$ of Kähler rank $Kr(X)=k$, there exists a complex analytic canonical foliation ${\mathcal F}$ of codimension $k$. It is defined on the open set $${\mathcal B}=\{x\in X\vert\exists\omega\in {\mathcal C}^{\infty}_{1,1}(X,{\mathbb R}), d\omega=0, \omega\geq 0, \omega^k(x)\neq 0\}$$ and is characterized by $\omega^k\vert{\mathcal F}=0,\forall\omega\geq 0,d\omega=0$. A compact complex surface $X$ has Kähler rank $2$ if and only if it is Kähler (see remark \[kahsurf\] below) and this is equivalent to $b_1(X)$ being even (see [@lamari]). When $b_1(X)$ is odd but at least $3$, then $H^{1,0}(X)\neq 0$ and if $\sigma$ is a non-zero holomorphic $1$-form on $X$ then it is $d$-closed, hence $i\sigma\wedge\bar\sigma$ is a $d$-closed positive $(1,1)$-form on $X$. If $b_1(X)=1$, then the main results of [@chiosetoma] and [@brunella] show that the only surfaces of Kähler rank equal to $1$ are the Inoue surfaces and some Hopf surfaces. The other known surfaces (the other Hopf surfaces and the Kato surfaces) have Kähler rank $0$. In [@hironaka] the author constructed an example of a $3$-fold $X$ which is a proper modification of a Kähler manifold but which is not Kähler. In fact, it is a proper modification $p:X\to {\mathbb P}^3$ of the projective space. One can take $p^*\omega_{FS}$, where $\omega_{FS}$ is the Fubini-Study metric, to obtain a closed positive $(1,1)$-form, not everywhere degenerate, on a manifold that is not Kähler. Therefore, unlike the surface case, in higher dimensions there are manifolds of maximal Kähler rank which are not Kähler.
The well-known Iwasawa $3$-fold is the quotient $H/\Gamma$ where $H$ is the group of matrices of the form $$\left( \begin{array}{ccc} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{array} \right)$$ with complex entries, and $\Gamma$ is the subgroup of the matrices whose entries have integer real and imaginary parts. Then the holomorphic $1$-forms $dx$, $dy$ and $dz-xdy$ on $H$ induce three holomorphic $1$-forms on $H/\Gamma$, denoted by $\sigma_1$, $\sigma_2$ and $\sigma_3$ respectively. Then $d\sigma_3=-\sigma_1\wedge\sigma_2$, hence $\sigma_3$ is not $d$-closed, therefore $Kr(H/\Gamma)\leq 2$. But $\sigma_1$ and $\sigma_2$ are $d$-closed, therefore the form $\omega=i\sigma_1\wedge\bar\sigma_1+i\sigma_2\wedge\bar\sigma_2$ is closed and positive, and $\omega^2\neq 0$, therefore the Kähler rank is $2$. In [@oguiso] the author constructed a Moishezon $3$-fold $Y$ that contains an algebraic $1$-cycle ${\ell }$ homologous to zero and which moves and covers the whole of $Y$. Such a manifold cannot have maximal Kähler rank. Indeed, if $\omega$ is a closed positive $(1,1)$-form on $Y$, and if $y\in Y$ is arbitrary, let ${\ell}'$ be a $1$-cycle passing through $y$ and which is homologous to zero. Then $$\int_{{\ell}'}\omega=0$$ and therefore at $y$, $\omega$ cannot have rank $3$. Therefore $\omega^3=0$. This example shows that for dimension at least $3$ the Kähler rank is not a bimeromorphic invariant. However, it is expected that, if $Y\to X$ is the blow-up of a compact complex manifold $X$ in a point, then $Kr(X)=Kr(Y)$. In [@fuliyau] the authors constructed a complex structure on the connected sum ${\#}_kS^3\times S^3$ of $k\geq 2$ copies of $S^3\times S^3$ and a balanced metric $g$ whose square $g^2$ is $i\partial\bar\partial$-exact. Such a manifold has Kähler rank equal to $0$. Indeed, if $\omega$ is a closed positive $(1,1)$-form, then its trace with respect to $g^2$ is zero, hence the form $\omega$ has to be $0$.
Starting with the above examples, and taking products, one can obtain compact complex manifolds of any dimension $n\geq 2$ and any Kähler rank $0\leq Kr\leq n$.

The bimeromorphic invariance of the Kähler rank for class $VII$ surfaces
========================================================================

In this section we discuss the bimeromorphic invariance of the Kähler rank on class $VII$ surfaces, the only non-trivial case. We show that the problem can be reduced to a system of differential equations, and then we solve the system up to order $3$, thus proving theorem \[system2\].

Preliminaries
-------------

Suppose $X$ is a surface with $b_1=1$ and let $\pi: X'\to X$ be the blow-up of $X$ in a point $p$. Let $\gamma^{0,1}$ be a $\bar\partial$-closed $(0,1)$ form on $X$ which generates $H^{0,1}(X)$. Then $\gamma'^{0,1}=\pi^*\gamma^{0,1}$ generates $H^{0,1}(X')$. Let $\omega'$ be a closed, positive $(1,1)$ form on $X'$; then it is $d$-exact ([@blaine], Proposition 37). We want to show that there exists $\omega$ on $X$ such that $\pi^*\omega=\omega'$. Then on $X'$, $\omega'$ can be written as $$\omega'=\mu\overline{\partial\gamma'^{0,1}}+\overline{\mu}\partial\gamma'^{0,1}+i\partial\bar\partial\phi'$$ where $\mu\in {\mathbb C}$ and $\phi'\in {\mathcal C}^{\infty}(X',{\mathbb R})$. We need to show that $\phi'$ is the pull-back of a ${\mathcal C}^{\infty}$ function $\phi$ on $X$. Locally on a disk $\Delta^2=\{\vert z\vert <1\}$ around $p$ on $X$, $\gamma^{0,1}$ is $\bar\partial$-exact, so it can be written as $\gamma^{0,1}\vert\Delta^2=\bar\partial f$, where $f\in {\mathcal C}^{\infty}(\Delta^2)$. Then on $\pi^{-1}(\Delta^2)$, $$\omega'=i\partial\bar\partial (2{\rm Im}(\bar\mu f')+\phi')$$ where $f'=\pi^*f$. Set $\varphi'=2\,{\rm Im}(\bar\mu f')+\phi'$. We need to show that $\varphi'$ is the pull-back of a smooth function on $\Delta^2$.
So let $\pi:\hat{\Delta}^2\to\Delta^2$ be the blow-up of the unit disk in ${\mathbb C}^2$, let $E$ be the exceptional divisor, and suppose that locally $\pi$ is given by $(z,w)\to (z,zw)=(z_1,z_2)$. The exceptional divisor is given by $\{z=0\}$. Let $\varphi'$ be a ${\mathcal C}^{\infty}$ function on $\hat{\Delta}^2$. Then we have: There exists $\varphi$ a ${\mathcal C}^\infty$ function on $\Delta^2$ such that $\varphi'=\pi^*\varphi$ if and only if there exist $A_{\alpha,\beta}^{p,q}\in {\mathbb C}$ such that $$\label{system1} {\frac{\partial^{\alpha+\beta}\varphi'}{\partial z^{\alpha}\partial\bar z^{\beta}}\vline}_{z=0} =\sum _{p=0}^{\alpha}\sum_{q=0}^{\beta} \binom{\alpha}{p}\binom{\beta}{q}A_{\alpha,\beta}^{p,q}w^p\bar w^q$$ If $\varphi'=\pi^*\varphi$, with $\varphi\in{\mathcal C}^{\infty}(\Delta^2)$, then, from $\varphi'(z,w)=\varphi (z, zw)$ and the chain rule, we obtain the above equation with $$A_{\alpha,\beta}^{p,q}=\frac{\partial^{\alpha+\beta}\varphi}{\partial z_1^p\partial z_2^{\alpha -p}\partial\bar z_1^q\partial\bar z_2^{\beta-q}}(0)$$ Conversely, if $\varphi'$ satisfies the above conditions on its partial derivatives, then $\varphi'\vert_{E}$ is constant, and it induces a continuous function $\varphi$ on $\Delta^2$. It is actually ${\mathcal C}^{\infty}$, with the partial derivatives at $0$ equal to $A_{\alpha,\beta}^{p,q}$ as above. If the above equation \[system1\] holds only for $\alpha+\beta \leq k$, it follows that $\varphi'$ is the pull-back of a ${\mathcal C}^k$ function $\varphi$. So in order to prove that $\varphi'$ is the pull-back of a ${\mathcal C}^{\infty}$ function $\varphi$ on $\Delta^2$, it is enough to prove that $${\frac{\partial^{\alpha+\beta}\varphi'}{\partial z^{\alpha}\partial\bar z^{\beta}}\vline}_{z=0}$$ are polynomials in $w$ and $\bar w$ of degrees $\alpha$ and $\beta$ respectively.
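The degree constraint in equation \[system1\] can be checked symbolically on examples. Here is a small Python/sympy sketch (the test polynomial $\varphi$ below is an arbitrary choice of ours), treating $z,\bar z,w,\bar w$ as independent Wirtinger variables and pulling back through the chart $(z,w)\mapsto (z,zw)$:

```python
import sympy as sp

z, zb, w, wb = sp.symbols('z zbar w wbar')
z1, z2, z1b, z2b = sp.symbols('z1 z2 z1bar z2bar')

# An arbitrary (non-holomorphic) polynomial phi(z_1, z_2), with formal
# conjugate variables treated as independent (Wirtinger calculus).
phi = 3*z1**2*z2b + z1*z1b*z2*z2b + 2*z2**2*z1b + z1b**2*z2

# Pull back through the blow-up chart (z, w) -> (z, z*w) = (z_1, z_2).
phi_pull = phi.subs({z1: z, z2: z*w, z1b: zb, z2b: zb*wb})

alpha, beta = 2, 1
D = sp.expand(sp.diff(phi_pull, z, alpha, zb, beta).subs({z: 0, zb: 0}))

# Restricted to the exceptional divisor, the mixed derivative is a
# polynomial of degree at most (alpha, beta) in (w, wbar), as claimed.
assert sp.degree(D, w) <= alpha and sp.degree(D, wb) <= beta
print(D)  # 4*w**2 + 6*wbar
```

Repeating this for higher $\alpha+\beta$ gives the same degree pattern, which is exactly the necessary condition exploited in the rest of the section.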
The system of differential equations
------------------------------------

Now we set up the system of differential equations which needs to be solved in order to prove that $\omega'$ is the pull-back of a smooth $\omega$. We will use the fact that $\omega'$ is of rank $1$ ([@blaine], Proposition 37), i. e., that $$\omega'\wedge\omega'=0$$ and we will show that $\varphi$ is of class ${\mathcal C}^3$, i. e., that $\omega'$ is the pull-back of a ${\mathcal C}^1$ form. First, $\omega'=i\partial\bar\partial\varphi'$ and it is positive, hence $\varphi'$ is plurisubharmonic. Restricting to the exceptional divisor $E$, it follows that $\varphi'\vert_E$ is constant. Hence $\varphi'$ is the pull-back of a continuous function $\varphi$ on $\Delta^2$. Next, denote by $$P_{\alpha,\beta}={\frac{\partial^{\alpha+\beta}\varphi'}{\partial z^{\alpha}\partial\bar z^{\beta}}\vline}_{z=0}$$ which are ${\mathcal C}^{\infty}$ functions on ${\mathbb C}$. Since $\varphi'$ is defined on the whole $\hat{\Delta}^2$, the functions $P_{\alpha,\beta}$ satisfy the following [*growth conditions*]{}: $$\label{growth} w^{\alpha}\bar w^{\beta}P_{\alpha,\beta}\left(\frac 1w\right )$$ can be extended to ${\mathcal C}^{\infty}$ functions at $0$.
Consider the equation $\omega'\wedge\omega'=0$ written in local coordinates $(z,w)$: $$\frac{\partial^2\varphi'}{\partial z\partial\bar z}\cdot \frac{\partial^2\varphi'}{\partial w\partial\bar w}= \frac{\partial^2\varphi'}{\partial z\partial\bar w}\cdot\frac{\partial^2\varphi'}{\partial w\partial\bar z}$$ Take $$\frac{\partial^{\alpha+\beta}}{\partial z^{\alpha}\partial\bar z^{\beta}}$$ and restrict it to $z=0$; we obtain $$\sum_{p=0}^{\alpha}\sum_{q=0}^{\beta}\binom{\alpha}{p}\binom{\beta}{q}P_{p+1,q+1}\frac{\partial^2P_{\alpha-p,\beta-q}}{\partial w\partial\bar w}=$$ $$\label{system} =\sum_{p=0}^{\alpha}\sum_{q=0}^{\beta}\binom{\alpha}{p}\binom{\beta}{q}\frac{\partial P_{p+1,q}}{\partial\bar w}\frac{\partial P_{\alpha-p,\beta-q+1}}{\partial w}$$ which gives a system of partial differential equations in the unknowns $P_{\alpha,\beta}$ which satisfy the conditions \[growth\] and moreover $\overline{P}_{\alpha,\beta}=P_{\beta,\alpha}$. We know that $P_{0,0}$ is constant, and from $$P_{1,1}\cdot\frac{\partial^2 P_{0,0}}{\partial w\partial\bar w}=\frac{\partial P_{1,0}}{\partial\bar w}\cdot\frac{\partial P_{0,1}}{\partial w}$$ we obtain that $P_{1,0}$ is holomorphic, and from the growth condition \[growth\] it follows that $P_{1,0}$ has the desired form, i. e., it is a polynomial in $w$ of degree $1$. This shows that $\varphi$ is a function of class ${\mathcal C}^1$.

The proof of Theorem \[system2\]
--------------------------------

We complete the proof of theorem \[system2\]. We show that $\varphi$ is in fact of class ${\mathcal C}^3$, hence $\omega$ is of class ${\mathcal C}^1$.
For $\alpha=2$ and $\beta=0$ in \[system\] we obtain $$P_{1,1}\cdot\frac{\partial^2P_{2,0}}{\partial w\partial\bar w}=2\frac{\partial P_{2,0}}{\partial\bar w}\cdot\frac{\partial P_{1,1}}{\partial w}$$ and for $\alpha=1,\beta=1$ we obtain $$P_{1,1}\cdot\frac{\partial^2P_{1,1}}{\partial w\partial\bar w}=\frac{\partial P_{2,0}}{\partial\bar w}\cdot\frac{\partial P_{0,2}}{\partial w}+\frac{\partial P_{1,1}}{\partial\bar w}\cdot\frac{\partial P_{1,1}}{\partial w}$$ Set $$f=\frac{\partial P_{2,0}}{\partial\bar w}$$ and $g=P_{1,1}$. Then $f$ and $g$ satisfy the following properties: they are ${\mathcal C}^{\infty}$ functions on ${\mathbb C}$; $g$ has real values; the functions $$\label{growth1} w\bar w g\left(\frac 1w\right)$$ and $$\label{growth2} \frac{w^2}{\bar w^2}\cdot f\left(\frac 1w\right)$$ are ${\mathcal C}^{\infty}$ at $0$, and moreover $f$ and $g$ satisfy the following equations: $$\label{1eq} \frac{\partial f}{\partial w}\cdot g=2 f\cdot\frac{\partial g}{\partial w}$$ $$\label{2eq} g\cdot\frac{\partial^2g}{\partial w\partial\bar w}=\vert f\vert^2+\left|\frac{\partial g}{\partial w}\right|^2$$ We will show the following \[radu\] $f=0$ and $g$ is a quadratic form of rank $1$, i. e., $g(w)=|a+bw|^2$. Let $D_g$ be the non-zero set of $g$, i. e., $D_g=\{w\in {\mathbb C}|g(w)\neq 0\}$. If $D_g=\emptyset$, then $g=0$ and from \[2eq\] it follows that $f=0$. If $D_g={\mathbb C}$, then $g$ is never $0$, and from \[1eq\] it follows that there exists $h$ holomorphic on ${\mathbb C}$ such that $f=\bar h g^2$. We can assume that $g>0$ on ${\mathbb C}$. Then from \[2eq\] it follows that $\ln g$ is subharmonic, hence $\ln |f|$ is subharmonic on $D_f=\{w\in {\mathbb C}|f(w)\neq 0\}$. It follows that $|f|^2$ is subharmonic on ${\mathbb C}$ and since $f$ is bounded (from \[growth2\]), it follows that $|f|$ is constant. If $|f|\neq 0$, then from $f=\bar h g^2$ we obtain that $i\partial\bar\partial\ln g=0$ and from \[2eq\] we get that $|f|=0$, contradiction. 
Hence $f=0$ and equation \[2eq\] implies that $\ln g$ is harmonic, i. e., $g=\exp({\rm Re} j)$, where $j$ is a holomorphic function on ${\mathbb C}$. From condition \[growth1\] on $g$ it follows that $j$ is constant, hence also $g$ is constant. Now assume that $D_g\neq \emptyset, {\mathbb C}$ and denote by $D_g'$ a connected component of $D_g$. Assume that $g>0$ on $D_g'$. From \[1eq\] it follows that $f=\bar h g^2$ where $h$ is a holomorphic function on $D_g'$. Again \[2eq\] implies that $\ln g$ is subharmonic on $D_g'$ and so $\ln |f|$ is subharmonic on $D_g'\cap D_f$. Let $w_0\in\partial D_g'$ (the boundary of $D_g'$) and set $$f'(w)=\frac{f(w)}{\sqrt{|w-w_0|}}$$ as a function on $D_g'$. Since $\ln|f|$ is subharmonic, it follows that $\ln|f'|$ is also subharmonic on $D_g'$, so $|f'|^2$ is subharmonic on $D_g'$. Moreover, $f=0$ on the boundary $\partial D_g'$ (this follows again from \[2eq\]) except possibly at $w_0$, and $\lim_{w\to \infty} |f'(w)|=0$ because $f$ is bounded at infinity (from \[growth2\]). Since $f(w_0)=0$, it follows that $f'$ can be extended to a continuous function at $w_0$, with $f'(w_0)=0$. Hence $|f'|$ is a subharmonic function on $D_g'$, $f'=0$ on $\partial D_g'\cup\{\infty\}$, hence from the maximum principle, it follows that $f'=0$ on $D_g'$, hence also $f=0$ on $D_g'$. Since $f=0$ on $\{w\in {\mathbb C}|g(w)=0\}$, we get that $f=0$ on the whole ${\mathbb C}$. So $g$ satisfies the equation $$g\cdot\frac{\partial^2 g}{\partial w\partial\bar w}=\frac{\partial g}{\partial w}\cdot\frac{\partial g}{\partial\bar w}$$ and $$w\bar w\cdot g\left ( \frac 1w\right )$$ is ${\mathcal C}^{\infty}$ at $0$. If $g$ has two zeroes, $w_0$ and $w_1$, $w_0\neq w_1$, we consider as above $D_g'$ a connected component of $D_g$. Assume that $g>0$ on $D_g'$. Then $\ln g$ is harmonic on $D_g'$. Let $$g'(w)=\frac{g(w)}{\sqrt{|w-w_0|^3}\sqrt{|w-w_1|^3}}$$ Then $\ln g'$ is harmonic on $D_g'$, so $g'$ is subharmonic.
Moreover, it is $0$ on the boundary $\partial D_g'$ of $D_g'$, except possibly at $w_0$ and $w_1$. But at $w_0$, $g(w_0)=0$ and $$\frac{\partial g}{\partial w}(w_0)=\frac{\partial g}{\partial \bar w}(w_0)=0$$ and the same at $w_1$, which implies that $g'$ is continuous on the whole boundary $\partial D_g'$. At infinity, $g$ approaches $0$, and again by the maximum principle we obtain that $g=0$ on $D_g'$, contradiction. This shows that $g$ has exactly one zero. Assume that $g(w_0)=0$. Then consider the function $$g''(w)=\frac{g(w)}{|w-w_0|^2}$$ on ${\mathbb C}\setminus \{w_0\}$. Then $\ln g''$ is harmonic on ${\mathbb C}\setminus \{w_0\}$, and it is bounded at infinity. Moreover, since $g(w_0)=0$ and $dg(w_0)=0$, it follows that $g''$ is bounded near $w_0$. Hence $g''$ is a bounded, subharmonic function on ${\mathbb C}\setminus \{w_0\}$, so it is constant. Therefore $g(w)=C|w-w_0|^2$. Returning to our previous notations, we showed that $P_{2,0}$ is holomorphic, hence it is a polynomial of degree $2$ in $w$, and that $P_{1,1}$ is a polynomial of degree $\leq 1$ in $w$ and $\bar w$. Hence $\varphi$ is a function of class ${\mathcal C}^2$ and $\omega$ is continuous. Next, we show that if $P_{1,1}\neq 0$, then $\varphi$ is actually ${\mathcal C}^{3}$. First, we can assume, without loss of generality, that $P_{1,1}$ is constant. Indeed, if $P_{1,1}(w)=C|w-w_0|^2$, then we replace the functions $P_{\alpha,\beta}$ by $$\frac{1}{(w-w_0)^{\alpha}(\bar w-\bar w_0)^{\beta}}P_{\alpha,\beta}(w)$$ and we end up with the same system of differential equations and the same [*growth conditions*]{}. 
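As a numerical sanity check on Proposition \[radu\], one can verify directly that $g(w)=|a+bw|^2$ satisfies $g\cdot\partial^2 g/\partial w\partial\bar w=\partial g/\partial w\cdot\partial g/\partial\bar w$. A short Python sketch (the coefficients are arbitrary sample values of ours) computes Wirtinger derivatives by central differences, which are exact here since $g$ is a real quadratic polynomial in $({\rm Re}\,w,{\rm Im}\,w)$:

```python
# Check that g(w) = |a + b*w|^2 solves g * g_{w wbar} = g_w * g_wbar.
a, b = 0.7 - 1.2j, 1.5 + 0.4j   # arbitrary sample coefficients

def g(w: complex) -> float:
    return abs(a + b * w) ** 2

def wirtinger(fun, w, h=1e-2):
    """Wirtinger derivatives d/dw, d/dwbar, d^2/(dw dwbar) by central
    differences (exact for quadratic polynomials in x, y)."""
    fx = (fun(w + h) - fun(w - h)) / (2 * h)
    fy = (fun(w + 1j * h) - fun(w - 1j * h)) / (2 * h)
    lap = (fun(w + h) + fun(w - h) + fun(w + 1j * h)
           + fun(w - 1j * h) - 4 * fun(w)) / h ** 2
    return (fx - 1j * fy) / 2, (fx + 1j * fy) / 2, lap / 4

for w0 in (0.3 + 0.1j, -1.0 + 2.0j, 0.5 - 0.75j):
    gw, gwb, gww = wirtinger(g, w0)
    assert abs(g(w0) * gww - gw * gwb) < 1e-8
print("g(w) = |a + b*w|^2 solves the degenerate equation")
```

Here $\partial^2 g/\partial w\partial\bar w$ comes out as the constant $|b|^2$, in agreement with the hand computation in the proof.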
When $\alpha=3$ and $\beta=0$ in \[system\] we obtain $$P_{1,1}\cdot\frac{\partial ^2 P_{3,0}}{\partial w\partial \bar w}=3\cdot \frac{\partial P_{3,0}}{\partial\bar w}\cdot\frac{\partial P_{1,1}}{\partial w}$$ and when $\alpha=2$ and $\beta=1$ we obtain $$P_{1,1}\cdot\frac{\partial^2P_{2,1}}{\partial w\partial\bar w}+2\cdot P_{2,1}\cdot\frac{\partial^2 P_{1,1}}{\partial w\partial\bar w}=\frac{\partial P_{1,1}}{\partial \bar w}\cdot\frac{\partial P_{2,1}}{\partial w}+2\cdot\frac{\partial P_{2,1}}{\partial\bar w}\cdot\frac{\partial P_{1,1}}{\partial w}$$ $P_{1,1}$ is a non-zero constant, so the equations imply that both $P_{3,0}$ and $P_{2,1}$ are harmonic. By using the [*growth conditions*]{} we obtain that $P_{3,0}$ is holomorphic and that $P_{2,1}$ has the desired form. If $P_{1,1}=0$, things get more complicated, but we can still show that $\varphi$ is of class ${\mathcal C}^3$. If $\omega (0)=0$, then for $\alpha+\beta=4$ the system \[system\] implies the following equations: $$3f\cdot\frac{\partial g}{\partial w}=2\frac{\partial f}{\partial w}\cdot g$$ $$\bar g\cdot\frac{\partial f}{\partial w}+3g\cdot\frac {\partial^2 g}{\partial w\partial \bar w}=3\frac{\partial g}{\partial w}\cdot\frac{\partial g}{\partial\bar w}+3f\frac{\partial\bar g}{\partial w}$$ $$2g\cdot\frac{\partial^2\bar g}{\partial w\partial\bar w}+2\bar g\frac{\partial^2 g}{\partial w\partial\bar w}= \frac{\partial g}{\partial w}\cdot\frac{\partial\bar g}{\partial\bar w}+4\frac{\partial g}{\partial \bar w}\cdot\frac{\partial\bar g}{\partial w}+f\cdot\bar f$$ where $$f=\frac{\partial P_{3,0}}{\partial\bar w}$$ and $g=P_{2,1}$ and we have the corresponding [*growth conditions*]{} for $f$ and $g$. This system can be solved by methods similar to those used in Proposition \[radu\], so we omit the details.
Manifolds of maximal Kähler rank ================================ In this section we show that a compact complex manifold $X$ of dimension $n$ such that $Kr(X)=n$ and which moreover admits a special Hermitian metric is in fact Kähler: Let $X$ be a compact complex manifold such that there exists a [*nef*]{} class $\{\alpha\}\in H^{1,1}_{BC}(X,{\mathbb R})$ such that $$\int_X\alpha^n>0$$ Suppose moreover that $X$ supports a Hermitian metric $h$ such that $$\label{compatibility} i\partial\bar\partial h=\partial h\wedge\bar\partial h =0$$ Then $\{\alpha\}$ is [*big*]{} and $h$ is $\partial+\bar\partial$ cohomologous to a Kähler metric. In particular $X$ is Kähler. Here [*big*]{} means that the class $\{\alpha\}$ contains a Kähler current, i.e., a closed positive current that dominates some Hermitian metric. Condition \[compatibility\] is needed in order to bound some integrals (see \[integralinequality\] below) and it is equivalent to $$i\partial\bar\partial h^k=0,\forall k=\overline{1,n-1}$$ The condition \[compatibility\] appeared in the work [@guanli], where the authors attempted to solve the Monge-Ampère equation on Hermitian manifolds. \[kahsurf\] When $n=2$, the existence of a Hermitian form satisfying \[compatibility\] is well-known, and we obtain another proof of the fact that a surface of Kähler rank equal to $2$ is Kähler. When $n=3$ just the equation $i\partial\bar\partial h=0$ is needed. The above theorem is a particular case of a conjecture of Demailly and Păun (see [@demaillypaun], Conjecture 0.8) which states that if a manifold admits a nef class of strictly positive self-intersection, then the manifold is in Fujiki class ${\mathcal C}$, i.e., it is bimeromorphic to a Kähler manifold. First, we show that $\{\alpha\}$ is big. We need to show that there exists $\varepsilon_0>0$ and a distribution $\chi$ such that $\alpha+i\partial\bar\partial\chi\geq \varepsilon_0 h$. 
According to Lamari’s result [@lamari], Lemme 3.3, this is equivalent to showing that $$\int_X\alpha\wedge g^{n-1}\geq \varepsilon_0\int_X h\wedge g^{n-1}$$ for any Gauduchon metric $g^{n-1}$ on $X$. So suppose that $\forall m\in {\mathbb N},\exists g_m^{n-1}$ a Gauduchon metric such that $$\int_X\alpha \wedge g_m^{n-1} \leq\frac 1m\int_X h\wedge g_m^{n-1}$$ We can assume that $$\int_X h\wedge g_m^{n-1}=1$$ and therefore $$\int_X \alpha\wedge g_m^{n-1}\leq \frac 1m$$ Since $\{\alpha\}$ is nef, for every $m$ we can find $\psi_m\in {\mathcal C}^{\infty}(X,{\mathbb R})$ such that $\alpha+i\partial\bar\partial\psi_m\geq -\frac {1}{2m}h$. The main result of [@tossatiw] implies that we can solve the equation $$\label{monge} \left (\alpha+\frac 1m h+i\partial\bar\partial\varphi_m\right )^n=C_m g_m^{n-1}\wedge h$$ for a function $\varphi_m\in{\mathcal C}^{\infty}(X,{\mathbb R})$ such that if we set $\alpha_m=\alpha+\frac 1m h+i\partial\bar\partial\varphi_m$, then $\alpha_m>0$. The constant $C_m$ is given by $$C_m=\int_X\left (\alpha+\frac 1m h\right )^n\geq\int_X\alpha^n=C>0$$ Now $$\label{integralinequality} \int_X\alpha_m^{n-1}\wedge h=\int_X h\wedge \left(\alpha+\frac 1m h\right )^{n-1}\leq \int_Xh\wedge \left(\alpha+h\right )^{n-1}=M$$ and if we set $$E=\left \{\frac{\alpha_m^{n-1}\wedge h}{g_m^{n-1}\wedge h}> 2M\right \}$$ then $$\label{edelta} \int_{E}g_m\wedge h\leq \frac 12$$ Therefore on $X\setminus E$ we have $\alpha_m^{n-1}\wedge h\leq 2M g_m^{n-1}\wedge h$. 
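The pointwise estimate invoked in the next step can be made explicit. As a heuristic sketch (our own elaboration of the eigenvalue argument): at a point of $X\setminus E$, let $\lambda_1,\dots,\lambda_n>0$ denote the eigenvalues of $\alpha_m$ with respect to $h$, so that $$\alpha_m^n=\Big(\prod_{i=1}^n\lambda_i\Big)h^n \quad\text{and}\quad \alpha_m^{n-1}\wedge h=\frac 1n\Big(\sum_{i=1}^n\prod_{j\neq i}\lambda_j\Big)h^n$$ Since $\prod_{j\neq i}\lambda_j\leq\sum_{k}\prod_{j\neq k}\lambda_j$ and all the $\lambda_i$ are positive, each eigenvalue satisfies $$\lambda_i=\frac{\prod_k\lambda_k}{\prod_{j\neq i}\lambda_j}\geq \frac{\alpha_m^n}{n\,\alpha_m^{n-1}\wedge h}= \frac{C_m\, g_m^{n-1}\wedge h}{n\,\alpha_m^{n-1}\wedge h}\geq \frac{C_m}{2nM}$$ where the last inequality uses $\alpha_m^{n-1}\wedge h\leq 2M g_m^{n-1}\wedge h$ on $X\setminus E$.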
By looking at the eigenvalues of $\alpha_m$ with respect to $h$, from \[edelta\] and \[monge\], it follows that on $X\setminus E$ we have $$\alpha_m\geq \frac{C_m}{2nM}h$$ Therefore $$\label{ineq} \int_X \alpha_m\wedge g_m^{n-1}\geq\int_{X\setminus E}\alpha_m\wedge g_m^{n-1}\geq \frac {C_m}{2nM}\int_{X\setminus E}h\wedge g_m^{n-1}=$$ $$=\frac{C_m}{2nM}\left (\int_Xh\wedge g_m^{n-1}-\int_{E}h\wedge g_m^{n-1}\right )\geq \frac {C}{4nM}$$ On the other hand $$\int_X\alpha_m\wedge g_m^{n-1}=\int_X\alpha\wedge g_m^{n-1}+\frac 1m\int_X h\wedge g_m^{n-1}\leq \frac 2m$$ a contradiction with \[ineq\]. Therefore $\{\alpha\}$ is big, and from [@demaillypaun] it follows that $X$ is in the Fujiki class ${\mathcal C}$. Theorem 2.2 in [@chiose] implies that a manifold in the Fujiki class ${\mathcal C}$ which is $SKT$ (strong Kähler with torsion, i.e., it supports an $i\partial\bar\partial$-closed Hermitian metric) is in fact Kähler. A very similar method gives a much simpler proof of a key result in [@demaillypaun], Theorem 0.5, that a nef class of strictly positive self-intersection on a compact Kähler manifold contains a Kähler current. Indeed, suppose $\{\alpha\}$ is not big; then by Lamari [@lamari] there exists a sequence of Gauduchon metrics such that $$\int_X\alpha\wedge g_m^{n-1}\leq \frac 1m$$ and $$\int_Xh\wedge g_m^{n-1}=1$$ If $h$ is assumed to be Kähler, the proof proceeds as above to obtain a contradiction. This proof is not independent of the proof of Demailly and Păun. In a few words, we replaced the explicit and involved construction of the metrics $\omega_{\varepsilon}$ in [@demaillypaun] by the abstract sequence of Gauduchon metrics given by the Hahn-Banach theorem, via Lamari [@lamari]. An adaptation of the proof of Theorem 0.5 in [@demaillypaun] cannot work in our case. 
One of the obstructions is that, if a complex manifold $X$ admits a Hermitian metric with property \[compatibility\], then it is not clear that $X\times X$ admits a Hermitian metric with the same property. We should also point out that a simplified proof of another part of the proof of the Demailly and Păun theorem on the Kähler cone was given recently by Collins and Tosatti [@collinstosatti]. Together with the above proof, one obtains a more compact proof of the main result in [@demaillypaun].

- [*Compact complex surfaces*]{}, Ergebnisse der Mathematik und ihrer Grenzgebiete, Berlin, Springer-Verlag, (2004).
- [*A characterization of Inoue surfaces*]{}, to appear in Comm. Math. Helv.
- [*On compact complex surfaces of Kähler rank one*]{}, Amer. J. Math., [**135**]{} (2013), no. 3, 851–860.
- [*Obstructions to the existence of Kähler structures on compact complex manifolds*]{}, to appear in Proc. Amer. Math. Soc.
- [*Kähler currents and null loci*]{}, arXiv:1304.5216.
- [*Numerical characterization of the Kähler cone of a compact Kähler manifold*]{}, Annals of Math. [**159**]{} (2004), 1247–1274.
- [*Balanced metrics on non-Kähler Calabi-Yau threefolds*]{}, J. Diff. Geom. [**90**]{} (2012), 81–130.
- [*Complex Monge-Ampère equations and totally real submanifolds*]{}, Adv. Math. [**225**]{} (2010), no. 3, 1185–1223.
- [*An intrinsic characterization of Kähler manifolds*]{}, Invent. Math. [**74**]{} (1983), no. 2, 169–198.
- [*An example of a non-Kählerian complex-analytic deformation of Kählerian complex structures*]{}, Ann. of Math. (2) [**75**]{} (1962), 190–208.
- [*Courants kählériens et surfaces compactes*]{}, Ann. Inst. Fourier (Grenoble) [**49**]{} (1999), no. 1, 263–285.
- [*Two remarks on Calabi-Yau Moishezon threefolds*]{}, J. Reine Angew. Math. [**452**]{} (1994), 153–161.
- [*The complex Monge-Ampère equation on compact Hermitian manifolds*]{}, J. Amer. Math. Soc. [**23**]{} (2010), no. 4, 1187–1195.

Address: [*Ionuţ Chiose*]{}:\ Institute of Mathematics of the Romanian Academy\ P.O. Box 1-764, Bucharest 014700\ Romania\ [Ionut.Chiose@imar.ro]{}\ [^1]: $\ast$ Supported by a Marie Curie International Reintegration Grant within the $7^{\rm th}$ European Community Framework Programme and CNCS grant PN-II-ID-PCE-2011-3-0269
--- abstract: 'We present a constructive solution to the $N$-representability problem—a full characterization of the conditions for constraining the two-electron reduced density matrix (2-RDM) to represent an $N$-electron density matrix. Previously known conditions, while rigorous, were incomplete. Here we derive a hierarchy of constraints built upon (i) the bipolar theorem and (ii) tensor decompositions of model Hamiltonians. Existing conditions $D$, $Q$, $G$, $T1$, and $T2$, known classical conditions, and new conditions appear naturally. Subsets of the conditions are amenable to polynomial-time computations of strongly correlated systems.' author: - 'David A. Mazziotti' date: 'Submitted November 8, 2011; Published [*Phys. Rev. Lett*]{} [**108**]{}, 263002 (2012)' title: 'Structure of Fermionic Density Matrices: Complete $N$-representability Conditions' --- The wavefunction of a many-electron quantum system contains significantly more information than necessary for the calculation of energies and properties. In 1955 Mayer proposed in [*Physical Review*]{} computing the ground-state energy variationally as a functional of the two-electron reduced density matrix (2-RDM) which, unlike the wavefunction, scales polynomially with the number $N$ of electrons [@RDM; @CY00; @M55]. However, the 2-electron density matrix must be constrained to represent a many-electron (or $N$-electron) density matrix (or wavefunction); otherwise, the minimized energy is unphysically below the ground-state energy for $N>2$. Coleman called these constraints [*$N$-representability conditions*]{} [@C63], and the search for them became known as the $N$-representability problem [@GP64; @H78; @E78; @E79; @P78; @R07]. In 1995 the National Research Council ranked the $N$-representability problem as one of the top unsolved theoretical problems in chemical physics [@NRC]. 
While progress was limited for many years, recent advances in theory and optimization [@EJ00; @N01; @M04; @P04; @C06; @E07; @A09; @S10; @M11] have enabled the application of the variational 2-RDM method to studying strong correlation in quantum phase transitions [@GM06], quantum dots [@RM09], polyaromatic hydrocarbons [@GM08], firefly bioluminescence [@GM10], and metal-to-insulator transitions [@SGM10]. Despite the recent computational results with 2-RDM methods, a complete set of $N$-representability conditions on the 2-RDM—not dependent upon higher-order RDMs—has remained unknown. While formal solutions of the $N$-representability problem were developed in the 1960s [@GP64; @K67], practically they required the $N$-electron density matrix [@RDM; @CY00]. In this Letter we present a constructive solution of the $N$-representability problem that generates a complete set of $N$-representability conditions on the 2-RDM. The approach is applicable to generating the $N$-representability conditions on the $p$-RDM for any $p \le N$. The conditions arise naturally as a hierarchy of constraints on the 2-RDM, which we label the $(2,q)$-positivity conditions, where the $(2,2)$- and $(2,3)$-positivity conditions include the already known $D$, $Q$, $G$, $T1$, and $T2$ conditions [@C63; @GP64; @E78; @P04]. The second number in $(2,q)$ corresponds to the higher $q$-RDM which serves as the starting point for the derivation of the condition. A key advance in extending the $(2,q)$-positivity conditions for $q>3$ is the use of tensor decompositions in the model Hamiltonians that expose the boundary of the $N$-representable 2-RDM set. The decompositions allow the terms in the model Hamiltonians to have no more than two-body interactions through the cancelation of all higher 3-to-$q$-body terms. A second important element is the recognition that when $q=r$ where $r$ is the rank of the one-electron basis set the positivity conditions are complete. 
The hierarchy of conditions can be thought of as a collection of model Hamiltonians [@P78]. For example, the ‘basic’ (2,2)-positivity conditions are both necessary and sufficient constraints for computing the ground-state energies of pairing model Hamiltonians [@CY00; @M04], often employed in describing long-range order and superconductivity. Consider a quantum system composed of $N$ fermions. A matrix is a fermionic [*density matrix*]{} if and only if it is: ([*i*]{}) Hermitian, ([*ii*]{}) normalized (fixed trace), ([*iii*]{}) antisymmetric in the exchange of particles, and ([*iv*]{}) positive semidefinite. A matrix is [*positive semidefinite*]{} if and only if its eigenvalues are nonnegative. The $p$-particle reduced density matrix ($p$-RDM) can be obtained from the $N$-particle density matrix by integrating over all but the first $p$ particles $$\label{eq:Dp} {}^{p} D = {N \choose p} \int{ {}^{N} D \, d(p+1) \dots dN } .$$ The set of ${}^{N} D$ is a convex set which we denote as $P^{N}$ while the set ${}^{p} D$ is a convex set which we denote as $P^{p}_{N}$, the set of $N$-representable $p$-particle density matrices. A set is [*convex*]{} if and only if the convex combination of any two members of the set is also contained in the set $$w \, {}^{N} D_{1} + (1-w) \, {}^{N} D_{2} \in P^{N},$$ where $0 \le w \le 1$. The integration in Eq. (\[eq:Dp\]) defines a linear mapping from $P^{N}$ to $P^{p}_{N}$, which preserves its convexity. The energy of a quantum system in a stationary state can be computed from the Hamiltonian traced against the state’s density matrix. 
For a system of $N$ fermions we have $$\label{eq:EN} E = {\rm Tr}({\hat H} \, {}^{N} D) .$$ If the Hamiltonian is a $p$-body operator, meaning that it has at most $p$-particle interactions, then the energy can be written as a functional of only the $p$-RDM $$\label{eq:Ep} E = {\rm Tr}({\hat H} \, {}^{p} D) .$$ For a system of $N$ electrons the Hamiltonian generally has at most pairwise interactions, and hence, the energy can be expressed as a linear functional of the 2-RDM. Except when $N=2$, however, minimizing the energy as a functional of a two-electron density matrix ${}^{2} D \in P^{2}$ yields an energy that is much too low. To obtain the correct ground-state energy, we must constrain the two-electron density matrix to be $N$-representable, that is $^{2} D \in P^{2}_{N}$. Based on the equivalence of the energy expectation values in Eqs. (\[eq:EN\]) and (\[eq:Ep\]), we can use the set $P^{p}_{N}$ of $N$-representable $p$-particle density matrices to define a set ${P^{p}_{N}}^{*}$ of $p$-particle (Hamiltonian) operators $^{p} {\hat O}$ that are positive semidefinite in their trace with any $N$-particle density matrix $$\label{eq:Oset} {P^{p}_{N}}^{*} = \{ ^{p} {\hat O} | {\rm Tr}(^{p} {\hat O} \, ^{p} D) \ge 0~{\rm for~all}~^{p} D \in {P^{p}_{N}} \}.$$ The set ${P^{p}_{N}}^{*}$ is said to be the [*polar*]{} (or dual) of the set $P^{p}_{N}$. Importantly, by the [*bipolar theorem*]{} [@K67; @R71], the set ${P^{p}_{N}}^{*}$ also fully defines its polar set $P^{p}_{N}$ as follows $$\label{eq:Dset} P^{p}_{N} = \{ ^{p} D | {\rm Tr}(^{p} {\hat O} \, ^{p} D) \ge 0~{\rm for~all}~^{p} {\hat O} \in {{P^{p}_{N}}^{*}} \}.$$ By Eq. (\[eq:Dset\]) we have a complete characterization of the $N$-representable $p$-RDMs from a knowledge of all operators $^{p} {\hat O} \in {P^{p}_{N}}^{*}$ [@K67]. 
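The variational collapse noted above is easy to reproduce numerically in a toy setting. The sketch below is our own minimal illustration, not from the Letter: the sizes ($r = 4$ spin orbitals, $N = 3$ fermions), the random two-body Hamiltonian, and the bitstring Fock-space construction are all arbitrary choices. Minimizing $E = {\rm Tr}({}^{2}K \, {}^{2}D)$ over *all* positive semidefinite $^{2}D$ with the correct trace, while ignoring $N$-representability, lands at or below the exact ground-state energy:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
r, N = 4, 3                        # toy sizes: r spin orbitals, N fermions
pairs = list(itertools.combinations(range(r), 2))  # antisymmetric pair basis

# Random real symmetric two-body reduced Hamiltonian ^2K on the pair basis.
A = rng.normal(size=(len(pairs), len(pairs)))
K2 = (A + A.T) / 2

# Second quantization on the 2^r-dimensional Fock space (bitstring basis):
# a†_p sets bit p with the usual fermionic sign convention.
dim = 2 ** r
def creation(p):
    c = np.zeros((dim, dim))
    for n in range(dim):
        if not (n >> p) & 1:
            c[n | (1 << p), n] = (-1) ** bin(n & ((1 << p) - 1)).count("1")
    return c

a_dag = [creation(p) for p in range(r)]
a = [c.T for c in a_dag]

# Purely two-body Hamiltonian H = sum K2[(ij),(kl)] a†_i a†_j a_l a_k.
H = sum(K2[I, J] * a_dag[i] @ a_dag[j] @ a[l] @ a[k]
        for I, (i, j) in enumerate(pairs)
        for J, (k, l) in enumerate(pairs))

# Exact ground-state energy in the N-particle sector (H conserves N).
sector = [n for n in range(dim) if bin(n).count("1") == N]
E0 = np.linalg.eigvalsh(H[np.ix_(sector, sector)]).min()

# Minimum of Tr(^2K ^2D) over ALL psd ^2D with Tr(^2D) = N(N-1)/2,
# i.e. over P^2 with no N-representability constraint:
# the trace times the lowest eigenvalue of ^2K.
E_collapse = (N * (N - 1) / 2) * np.linalg.eigvalsh(K2).min()
assert E_collapse <= E0 + 1e-12   # the unconstrained minimum undershoots E0
```

For $N > 2$ the gap between `E_collapse` and `E0` is typically substantial, which is precisely why the constraints encoded in ${P^{2}_{N}}^{*}$ are needed.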
This analysis shows formally that there exists a solution to the $N$-representability problem [@GP64; @K67], but it does not provide a mechanism for characterizing the set ${P^{p}_{N}}^{*}$. ![The convex set ${P^{2}_{N}}^{*}$ of 2-body operators that are positive semidefinite in their trace with any $N$-particle density matrix is contained within the convex set ${P^{3}_{N}}^{*}$ of analogous 3-body operators, which in turn is contained within the set ${P^{r}_{N}}^{*}$. Hence, the extreme points of ${P^{2}_{N}}^{*}$ can be characterized completely by the convex combination of the extreme points of ${P^{r}_{N}}^{*}$, which are given by Eq. (\[eq:rpos\]).[]{data-label="f:n2"}](polar_set_v3.eps) To characterize ${P^{p}_{N}}^{*}$, we assume that the $N$-fermion quantum system has $r$ orbitals and hence, $r-N$ holes. A convex set can be defined by the enumeration of its [*extreme elements*]{}, that is the elements (or members) that cannot be expressed by a convex combination of other elements [@CY00; @R71]. The definition of ${P^{p}_{N}}^{*}$ in Eq. (\[eq:Oset\]) for $p \le N$ can be extended in second quantization to include $p > N$ $$\label{eq:Oset2} {P^{p}_{N}}^{*} = \{ ^{p} {\hat O} | {\rm Tr}(^{p} {\hat O} \, ^{N} D) \ge 0~{\rm for~all}~^{N} D \}$$ with the $^{p} {\hat O}$ being polynomials in creation and annihilation operators of degree $2p$. Because in second quantization the value of $N$ is defined in the density matrices $^{N} D$ rather than in the operators $^{p} {\hat O}$ [@S89], the set ${P^{p}_{N}}^{*}$ provides complete $N$-representability conditions on the $p$-RDM for any $N$ between 2 and $r$. The extreme operators in the set ${P^{r}_{N}}^{*}$ can be written as Hermitian squares of operators [@H02] $$\label{eq:rpos} {^{r} {\hat O}_{i}} = {^{r} {\hat C}_{i}} \, {^{r} {\hat C}_{i}^{\dagger}},$$ where the ${}^{r} {\hat C}_{i}$ are polynomials in the creation and annihilation operators of degree less than or equal to $r$ (i.e., Eqs. 
(\[eq:T21\]) and (\[eq:T22\])). Because any operator ${}^{p} {\hat C}$ with $p>r$ reduces to a polynomial of degree $r$ in its operation on any ${}^{N} D$, the sets ${P^{p}_{N}}^{*}$ with $p>r$ do not contain additional information about the positivity of ${}^{N} D$. To establish this reduction, we rearrange terms in ${}^{p} {\hat C}$ of degree greater than $r$ into a normal order with either more than $N$ annihilation operators to the right of the creation operators or more than $r-N$ creation operators to the right of the annihilation operators; in either situation, the terms of degree greater than $r$ vanish in their operation upon any $^{N} D$. The operators ${}^{p} {\hat O}$ that constrain the $p$-RDM to be $N$-representable in Eq. (\[eq:Dset\]) are also necessary to constrain the $q$-RDM to be $N$-representable where $q>p$; formally, each ${}^{p} {\hat O} \in {P^{p}_{N}}^{*}$ can be lifted by inserting the number operator to the $(q-p)$ power to form a ${}^{q} {\hat O} \in {P^{q}_{N}}^{*}$ [@M04]. Therefore, as illustrated in Fig. 1, we have the following set relations $${P^{2}_{N}}^{*} \subseteq {P^{3}_{N}}^{*} \subseteq {P^{p}_{N}}^{*} ... \subseteq {P^{r}_{N}}^{*} .$$ Consequently, extreme operators $^{r} {\hat O}_{i}$ of ${P^{r}_{N}}^{*}$ can be combined convexly to produce all $p$-body operators $^{p} {\hat O} \in {P^{p}_{N}}^{*}$, and hence, the extreme points of ${P^{p}_{N}}^{*}$ can be characterized completely by the convex combination of the extreme points of ${P^{r}_{N}}^{*}$. More generally, convex combinations of extreme $^{q} {\hat O}_{i} \in {P^{q}_{N}}^{*}$ generate all $p$-body operators $^{p} {\hat O} \in {P^{p}_{N}}^{*}$ for $p < q$. Depending upon the order of the creation and annihilation operators in $^{r} {\hat O}_{i}$, the normal-ordered terms will have either positive or negative coefficients. Convex combinations of the $^{r} {\hat O}_{i}$ can be chosen to cancel the coefficients of all terms of degree greater than $p$. 
Extreme elements are generated from the minimum number of convex combinations to effect the cancelation. This characterization of the set ${P^{p}_{N}}^{*}$ provides a [*constructive solution*]{} of the $N$-representability problem for the $p$-RDM. The constructive solution—convex combinations of the operators in Eq. (\[eq:rpos\])—generates the existing $N$-representability conditions as well as new conditions. The [*(1,1)-positivity conditions*]{} [@C63] are derivable from the subset of $^{r} {\hat C}_{i}$ operators in Eq. (\[eq:rpos\]) of degree 1 $$\begin{aligned} {\hat C}_{D} & = & \sum_{j}{ b_{j} {\hat a}^{\dagger}_{j} } \\ {\hat C}_{Q} & = & \sum_{j}{ b_{j} {\hat a}_{j} } .\end{aligned}$$ Keeping the trace of the corresponding one-body operators $^{1} {\hat O}_{D}$ and $^{1} {\hat O}_{Q}$ against the 1-RDM nonnegative for all values of $b_{j}$ yields the conditions, $^{1} D \succeq 0$ and $^{1} Q \succeq 0$, where ${}^{1} D$ and $^{1} Q$ are matrix representations of the 1-particle and the 1-hole RDMs and the symbol $\succeq$ indicates that the matrix is constrained to be positive semidefinite. Similarly, the [*(2,2)-positivity conditions*]{} [@GP64] follow from considering the $^{r} {\hat C}_{i}$ operators of degree 2 in Eq. (\[eq:rpos\]) $$\begin{aligned} {\hat C}_{D} & = & \sum_{jk}{ b_{jk} {\hat a}^{\dagger}_{j} {\hat a}^{\dagger}_{k} } \\ {\hat C}_{Q} & = & \sum_{jk}{ b_{jk} {\hat a}_{j} {\hat a}_{k} } \\ {\hat C}_{G} & = & \sum_{jk}{ b_{jk} {\hat a}^{\dagger}_{j} {\hat a}_{k} } .\end{aligned}$$ Restricting the trace of the corresponding two-body operators $^{2} {\hat O}_{D}$, $^{2} {\hat O}_{Q}$, and $^{2} {\hat O}_{G}$ against the 2-RDM to be nonnegative for all values of $b_{jk}$ defines the conditions, $^{2} D \succeq 0$, $^{2} Q \succeq 0$, and $^{2} G \succeq 0$, which constrain the probabilities for finding two particles, two holes, and a particle-hole pair to be nonnegative, respectively. 
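The (2,2)-positivity conditions are straightforward to verify numerically for any wavefunction, because $^{2}D$, $^{2}Q$, and $^{2}G$ are Gram matrices of the states ${\hat C}^{\dagger}|\Psi\rangle$. Below is a minimal sketch under our own illustrative assumptions (a toy system with $r = 4$ spin orbitals, $N = 2$ electrons, and a bitstring Fock-space construction not taken from the Letter):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
r, N = 4, 2                        # toy sizes: r spin orbitals, N electrons
dim = 2 ** r

def creation(p):
    # a†_p on the bitstring Fock basis, with the fermionic sign.
    c = np.zeros((dim, dim))
    for n in range(dim):
        if not (n >> p) & 1:
            c[n | (1 << p), n] = (-1) ** bin(n & ((1 << p) - 1)).count("1")
    return c

a_dag = [creation(p) for p in range(r)]
a = [c.T for c in a_dag]

# Random (generally correlated) N-electron state: a superposition of
# all Slater determinants in the N-particle sector.
psi = np.zeros(dim)
sector = [n for n in range(dim) if bin(n).count("1") == N]
psi[sector] = rng.normal(size=len(sector))
psi /= np.linalg.norm(psi)

pairs = list(itertools.combinations(range(r), 2))
# (2,2)-positivity metric matrices: particle-particle (D), hole-hole (Q),
# and particle-hole (G) pair probabilities.
D2 = np.array([[psi @ a_dag[i] @ a_dag[j] @ a[l] @ a[k] @ psi
                for (k, l) in pairs] for (i, j) in pairs])
Q2 = np.array([[psi @ a[i] @ a[j] @ a_dag[l] @ a_dag[k] @ psi
                for (k, l) in pairs] for (i, j) in pairs])
G2 = np.array([[psi @ a_dag[i] @ a[j] @ a_dag[l] @ a[k] @ psi
                for (k, l) in itertools.product(range(r), repeat=2)]
               for (i, j) in itertools.product(range(r), repeat=2)])

for M in (D2, Q2, G2):
    assert np.linalg.eigvalsh(M).min() > -1e-10   # ^2D, ^2Q, ^2G ⪰ 0
assert abs(np.trace(D2) - N * (N - 1) / 2) < 1e-10
```

For any state these three matrices come out positive semidefinite automatically; in the variational 2-RDM method the logic runs in reverse, with the semidefinite constraints imposed on a trial $^{2}D$ that is not derived from a wavefunction.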
-------------------------------------------------------------------
(2,4)-Positivity Conditions
-------------------------------------------------------------------

${\rm Tr}( (3 {\hat C}_{\rm xxxx} {\hat C}_{\rm xxxx}^{\dagger} + {\hat C}_{\rm xxxo} {\hat C}_{\rm xxxo}^{\dagger} + {\hat C}_{\rm xxox} {\hat C}_{\rm xxox}^{\dagger} + {\hat C}_{\rm xoxx} {\hat C}_{\rm xoxx}^{\dagger} + {\hat C}_{\rm oxxx} {\hat C}_{\rm oxxx}^{\dagger} + {\hat C}_{\rm oooo} {\hat C}_{\rm oooo}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm xxxo} {\hat C}_{\rm xxxo}^{\dagger} + {\hat C}_{\rm xxxx} {\hat C}_{\rm xxxx}^{\dagger} + {\hat C}_{\rm xxoo} {\hat C}_{\rm xxoo}^{\dagger} + {\hat C}_{\rm xoxo} {\hat C}_{\rm xoxo}^{\dagger} + {\hat C}_{\rm oxxo} {\hat C}_{\rm oxxo}^{\dagger} + {\hat C}_{\rm ooox} {\hat C}_{\rm ooox}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm xxox} {\hat C}_{\rm xxox}^{\dagger} + {\hat C}_{\rm xxoo} {\hat C}_{\rm xxoo}^{\dagger} + {\hat C}_{\rm xxxx} {\hat C}_{\rm xxxx}^{\dagger} + {\hat C}_{\rm xoox} {\hat C}_{\rm xoox}^{\dagger} + {\hat C}_{\rm oxox} {\hat C}_{\rm oxox}^{\dagger} + {\hat C}_{\rm ooxo} {\hat C}_{\rm ooxo}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm xoxx} {\hat C}_{\rm xoxx}^{\dagger} + {\hat C}_{\rm xoxo} {\hat C}_{\rm xoxo}^{\dagger} + {\hat C}_{\rm xoox} {\hat C}_{\rm xoox}^{\dagger} + {\hat C}_{\rm xxxx} {\hat C}_{\rm xxxx}^{\dagger} + {\hat C}_{\rm ooxx} {\hat C}_{\rm ooxx}^{\dagger} + {\hat C}_{\rm oxoo} {\hat C}_{\rm oxoo}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm oxxx} {\hat C}_{\rm oxxx}^{\dagger} + {\hat C}_{\rm oxxo} {\hat C}_{\rm oxxo}^{\dagger} + {\hat C}_{\rm oxox} {\hat C}_{\rm oxox}^{\dagger} + {\hat C}_{\rm ooxx} {\hat C}_{\rm ooxx}^{\dagger} + {\hat C}_{\rm xxxx} {\hat C}_{\rm xxxx}^{\dagger} + {\hat C}_{\rm xooo} {\hat C}_{\rm xooo}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm xxoo} {\hat C}_{\rm xxoo}^{\dagger} + {\hat C}_{\rm xxox} {\hat C}_{\rm xxox}^{\dagger} + {\hat C}_{\rm xxxo} {\hat C}_{\rm xxxo}^{\dagger} + {\hat C}_{\rm xooo} {\hat C}_{\rm xooo}^{\dagger} + {\hat C}_{\rm oxoo} {\hat C}_{\rm oxoo}^{\dagger} + {\hat C}_{\rm ooxx} {\hat C}_{\rm ooxx}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm xoox} {\hat C}_{\rm xoox}^{\dagger} + {\hat C}_{\rm xooo} {\hat C}_{\rm xooo}^{\dagger} + {\hat C}_{\rm xoxx} {\hat C}_{\rm xoxx}^{\dagger} + {\hat C}_{\rm xxox} {\hat C}_{\rm xxox}^{\dagger} + {\hat C}_{\rm ooox} {\hat C}_{\rm ooox}^{\dagger} + {\hat C}_{\rm oxxo} {\hat C}_{\rm oxxo}^{\dagger}) \, {}^{2} D) \ge 0$

${\rm Tr}( (3 {\hat C}_{\rm xoxo} {\hat C}_{\rm xoxo}^{\dagger} + {\hat C}_{\rm xoxx} {\hat C}_{\rm xoxx}^{\dagger} + {\hat C}_{\rm xooo} {\hat C}_{\rm xooo}^{\dagger} + {\hat C}_{\rm xxxo} {\hat C}_{\rm xxxo}^{\dagger} + {\hat C}_{\rm ooxo} {\hat C}_{\rm ooxo}^{\dagger} + {\hat C}_{\rm oxox} {\hat C}_{\rm oxox}^{\dagger}) \, {}^{2} D) \ge 0$

-------------------------------------------------------------------

In general, the $(q,q)$-positivity conditions [@EJ00; @M04] follow from restricting all $q$-body operators ${}^{q} {\hat O}$ in Eq. (\[eq:rpos\]) to be nonnegative in their trace against the $q$-RDM [@M04]. While the $(q,q)$-positive operators are not two-body operators for $q>2$, convex combinations of them generate two-body operators $^{2} {\hat O} \in {P^{2}_{N}}^{*}$ that enforce the $N$-representability of the 2-RDM. We refer to necessary $N$-representability conditions arising from convex combinations of $(q,q)$-positivity conditions as $(2,q)$-positivity conditions. The simplest such constraints, the [*(2,3)-positivity conditions*]{}, arise from keeping convex combinations of 3-body operators in Eq. 
(\[eq:rpos\]) nonnegative; for example, $$\begin{aligned} ^{2} {\hat O}_{T1} & = & \frac{1}{2} ( {\hat C}_{T1,1} \, {\hat C}_{T1,1}^{\dagger} + {\hat C}_{T1,2} {\hat C}_{T1,2}^{\dagger} ) \\ ^{2} {\hat O}_{T2} & = & \frac{1}{2} ( {\hat C}_{T2,1} \, {\hat C}_{T2,1}^{\dagger} + {\hat C}_{T2,2} {\hat C}_{T2,2}^{\dagger} )\end{aligned}$$ where $$\begin{aligned} {\hat C}_{T1,1} & = & \sum_{jkl}{ b_{jkl} {\hat a}^{\dagger}_{j} {\hat a}^{\dagger}_{k} {\hat a}^{\dagger}_{l} } \\ {\hat C}_{T1,2} & = & \sum_{jkl}{ b^{*}_{jkl} {\hat a}_{j} {\hat a}_{k} {\hat a}_{l}} \\ {\hat C}_{T2,1} & = & \sum_{jkl}{ b_{jkl} {\hat a}^{\dagger}_{j} {\hat a}^{\dagger}_{k} {\hat a}_{l} } + \sum_{j}{ b_{j} {\hat a}^{\dagger}_{j} } \label{eq:T21} \\ {\hat C}_{T2,2} & = & \sum_{jkl}{ b^{*}_{jkl} {\hat a}_{j} {\hat a}_{k} {\hat a}^{\dagger}_{l} } + \sum_{j}{d_{j} {\hat a}_{j}} \label{eq:T22} .\end{aligned}$$ These conditions, known as the $T1$ and generalized $T2$ conditions were developed by Erdahl [@E78] and implemented by Zhao [*at al.*]{} [@P04] and Mazziotti [@M04]. In general, they significantly improve the accuracy of the 2-positivity conditions. Although the constructive proof given above indicates that a complete set of $N$-representability conditions can be generated from convex combinations of extreme elements of ${P^{r}_{N}}^{*}$, additional conditions have not been discovered beyond the (2,2)- and (2,3)-positivity conditions. For example, what about (2,4)-positivity conditions—that is, $N$-representability constraints on the 2-RDM arising from convex combinations of 4-body operators in Eq. (\[eq:rpos\])? First, we derive a class of (3,4)-positivity conditions on the 3-RDM. Consider the nonnegativity of the following operator ${\hat O}$ formed by the convex combination of a pair of 4-body operators from Eq. 
(\[eq:rpos\]) $$\label{eq:O} {\hat O} = \frac{1}{2} ( {\hat C}_{\rm xxxx} \, {\hat C}_{\rm xxxx}^{\dagger} + {\hat C}_{\rm xooo} {\hat C}_{\rm xooo}^{\dagger} )$$ where the symbols ${\rm x}$ and ${\rm o}$ represent creation and annihilation operators, respectively, in the ${\hat C}$ operators defined as follows $$\begin{aligned} {\hat C}_{\rm xxxx} & = & \sum_{jklm}{ b_{jklm} {\hat a}^{\dagger}_{j} {\hat a}^{\dagger}_{k} {\hat a}^{\dagger}_{l} {\hat a}^{\dagger}_{m} } \\ {\hat C}_{\rm xooo} & = & \sum_{jklm}{ d_{jklm} {\hat a}^{\dagger}_{j} {\hat a}_{k} {\hat a}_{l} {\hat a}_{m} } .\end{aligned}$$ Importantly, the expectation value of ${\hat O}$ with $d_{jklm} = b_{jklm}$ requires the 4-RDM because the cumulant part ${}^{4} \Delta$ of the 4-RDM [@M98; @RDM] does not vanish $$\sum_{jklmpqst}{ b_{jklm} b^{*}_{pqst} \, (^{4} \Delta^{jklm}_{pqst} - ^{4} \Delta^{jqst}_{pklm} ) } \neq 0 .$$ To obtain additional $N$-representability conditions requires that the dependence of the ${\hat C}$ operators on the expansion coefficients be [*generalized from linear to nonlinear*]{}. Specifically, to obtain 3-RDM conditions beyond the (3,3)-positivity constraints, we must factor the 4-particle expansion coefficients $b_{jklm}$ and $d_{jklm}$ into products of 3- and 1-particle coefficients $b_{j} b_{klm}$ and $b_{j} b^{*}_{klm}$ which cause the cumulant part of the 4-RDM in $\langle \Psi | {\hat O} | \Psi \rangle $ to vanish $$\sum_{jklmpqst}{ b_{j} b_{klm} b^{*}_{p} b^{*}_{qst} \, (^{4} \Delta^{jklm}_{pqst} - ^{4} \Delta^{jqst}_{pklm} ) } = 0 .$$ The (3,4)-positivity condition, represented by Eq. (\[eq:O\]) and the tensor decomposition of the expansion coefficients, is part of a class of (3,4)-conditions that arises from all distinct combinations of two 4-particle metric matrices that differ from each other in the replacement of [*three*]{} second-quantized operators by their adjoints. 
A class of [*(2,4)-positivity conditions*]{}, shown in Table \[t:24\], can be derived from convex combinations of the above (3,4)-positivity conditions that cancel the 3-particle operators, that is the products of six second-quantized operators. To effect the cancelation, the nonlinearity of the expansion coefficients of ${\hat C}$ must be increased from $b_{j} b_{klm}$ to $b_{j} c_{k} d_{l} e_{m}$. Specifically, the ${\hat C}$ operators in Table \[t:24\] are defined as $${\hat C}_{\rm uvwz} = \sum_{jklm}{ b^{\rm u}_{j} c^{\rm v}_{k} d^{\rm w}_{l} e^{\rm z}_{m} {\hat a}^{\rm u}_{j} {\hat a}^{\rm v}_{k} {\hat a}^{\rm w}_{l} {\hat a}^{\rm z}_{m} } ,$$ where ${\hat a}^{u}_{j}$ and $b^{\rm u}_{j}$ are ${\hat a}^{\dagger}_{j}$ and $b^{*}_{j}$ if ${\rm u}={\rm x}$ and ${\hat a}_{j}$ and $b_{j}$ if ${\rm u}={\rm o}$. Each of the eight (2,4)-positivity conditions in Table \[t:24\] generates an additional condition by switching all ${\rm x}$’s and ${\rm o}$’s in accordance with [*particle-hole duality*]{}, the symmetry between particles and holes. The (2,4)-conditions become the diagonal $N$-representability conditions [@E78; @Cuts; @D02; @KM08] when $b$, $c$, $d$, and $e$ are restricted to be unit vectors; they are more general than the unitarily invariant diagonal conditions because these four vectors are not required to be orthogonal. These (2,4)-positivity conditions are only representative of the process by which complete conditions can be constructed from the solution of the $N$-representability problem presented in this Letter. Additional (2,4)-conditions in this class can be generated from reordering creation and annihilation operators in the conditions of Table \[t:24\], and other extreme (2,4)-conditions can be constructed from lifting the (2,3)-conditions. A comprehensive list of (2,4)-positivity conditions as well as (2,3)-, (2,5)-, and (2,6)-positivity conditions, which are consistent with the constructive solution, will be presented elsewhere [@M12]. 
The (2,5)- and (2,6)-conditions include extensions of three and eighteen classes of known diagonal conditions, respectively. The set ${P^{2}_{N}}^{*}$ of $N$-representability conditions on the 2-RDM contains the set ${C^{2}_{N}}^{*}$ of [*classical $N$-representability conditions*]{} [@E78; @Cuts; @D02; @KM08], which ensure that the two-electron reduced density function (2-RDF), the diagonal (classical) part of the 2-RDM, can be represented by the integration of an $N$-particle density function. In different fields the set $C^{2}_{N}$ of $N$-representable 2-RDFs has been given different names: the cut polytope [@Cuts] in combinatorial optimization and the correlation (or Boole) polytope [@Cuts; @P89] in the study of 0-1 programming or Bell’s inequalities. The set $C^{2}_{N}$, previously characterized, has important applications in global optimization including the search for the global energy minima of molecular clusters [@KM08], the study of classical fluids [@Fluid], the max-cut problem in circuit design and spin glasses [@Cuts], lattice holes in the geometry of numbers, pair density (2-RDF) functional theory [@D02], and the investigation of generalized Bell’s inequalities [@P89]. The characterization of the set $P^{2}_{N}$ of $N$-representable 2-RDMs represents a significant generalization of the solution of the classical $N$-representability problem (the Boole 0-1 programming problem). In addition to its potentially significant applications to the study of correlation in many-fermion quantum systems, knowledge of the set $P^{2}_{N}$ may have important applications to “quantum” analogues of problems in circuit design and the geometry of numbers. The complete set of $N$-representability conditions firmly solidifies 2-RDM theory as a fundamental theory of many-body quantum mechanics with two-particle interactions. 
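The classical (diagonal) conditions above can be made concrete with three binary variables. In the sketch below, the listed inequalities are a representative subset of correlation-polytope facets chosen by us for illustration (not the complete facet list): the moments of any genuine joint distribution over $\{0,1\}^3$ satisfy them, while the final moment vector cannot arise from any distribution.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
TOL = 1e-12

# A representative subset of facets of the three-site correlation (Boole)
# polytope, for single moments p_i = E[x_i] and pair moments p_ij = E[x_i x_j].
def satisfies_boole_facets(p1, p2, p3, p12, p13, p23):
    return all([
        p12 <= min(p1, p2) + TOL,
        p13 <= min(p1, p3) + TOL,
        p23 <= min(p2, p3) + TOL,
        p1 + p2 - p12 <= 1 + TOL,        # P(x1 or x2) <= 1
        p1 + p3 - p13 <= 1 + TOL,
        p2 + p3 - p23 <= 1 + TOL,
        p12 + p13 - p23 <= p1 + TOL,     # triangle-type facets
        p12 + p23 - p13 <= p2 + TOL,
        p13 + p23 - p12 <= p3 + TOL,
        p1 + p2 + p3 - p12 - p13 - p23 <= 1 + TOL,
    ])

# Moments of a random genuine joint distribution over {0,1}^3 satisfy the facets.
w = rng.dirichlet(np.ones(8))                      # random joint distribution
configs = list(itertools.product((0, 1), repeat=3))
m = lambda f: sum(wi * f(x) for wi, x in zip(w, configs))
p1, p2, p3 = (m(lambda x, i=i: x[i]) for i in range(3))
p12, p13, p23 = (m(lambda x, ij=ij: x[ij[0]] * x[ij[1]])
                 for ij in ((0, 1), (0, 2), (1, 2)))
assert satisfies_boole_facets(p1, p2, p3, p12, p13, p23)

# A moment vector OUTSIDE the polytope: p_i = 1/2 with all pair moments 0
# would force E[x1 + x2 + x3] = 3/2 although at most one x_i could ever be 1.
assert not satisfies_boole_facets(0.5, 0.5, 0.5, 0.0, 0.0, 0.0)
```

The quantum problem replaces these diagonal moment constraints with the full set of semidefinite conditions on the 2-RDM, which is why $C^{2}_{N} \subset P^{2}_{N}$.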
Rigorous lower bounds to the ground-state energy of strongly correlated quantum systems can be computed and improved in polynomial time from subsets of the complete $N$-representability conditions [@M11] (minimizing the energy with a fully $N$-representable 2-RDM is a non-deterministic polynomial-time (NP) complete problem because $C^{2}_{N} \subset P^{2}_{N}$, with optimization over $C^{2}_{N}$ known to be NP-complete [@Cuts]). The present result raises challenges and opportunities for future research that include (i) implementing the higher $N$-representability conditions, which are not in the form of traditional semidefinite programming [@M04; @P04; @M11], and (ii) determining which of the new conditions are most appropriate for different problems in many-particle chemistry and physics. Beyond their potential computational applications, the complete $N$-representability conditions for fermionic density matrices provide new fundamental insight into many-electron quantum mechanics, including the identification and measurement of correlation and entanglement. The author thanks D. Herschbach, H. Rabitz, and A. Mazziotti for encouragement, and the NSF, ARO, Microsoft Corporation, Dreyfus Foundation, and David-Lucile Packard Foundation for support.

Edited by D. A. Mazziotti, Advances in Chemical Physics Vol. 134 (Wiley, New York, 2007).
A. J. Coleman and V. I. Yukalov, [*Reduced Density Matrices: Coulson’s Challenge*]{} (Springer, New York, 2000).
J. E. Mayer, Phys. Rev. [**100**]{}, 1579 (1955).
A. J. Coleman, Rev. Mod. Phys. [**35**]{}, 668 (1963).
C. Garrod and J. Percus, J. Math. Phys. [**5**]{}, 1756 (1964).
J. E. Harriman, Phys. Rev. A [**17**]{}, 1257 (1978).
R. M. Erdahl, Int. J. Quantum Chem. [**13**]{}, 697 (1978).
R. M. Erdahl, Rep. Math. Phys. [**15**]{}, 147 (1979).
J. K. Percus, Int. J. Quantum Chem. [**13**]{}, 89 (1978).
M. Rosina, Adv. Chem. Phys. [**134**]{}, 11 (2007).
F. H. Stillinger et al., [*Mathematical Challenges from Theoretical/Computational Chemistry*]{} (National Academic Press, Washington, D.C., 1995).
R. M. Erdahl and B. Jin, in [*Many-electron Densities and Density Matrices*]{}, edited by J. Cioslowski (Kluwer, Boston, 2000).
M. Nakata, H. Nakatsuji, M. Ehara, M. Fukuda, K. Nakata, and K. Fujisawa, J. Chem. Phys. [**114**]{}, 8282 (2001).
D. A. Mazziotti, Phys. Rev. A [**65**]{}, 062511 (2002); Phys. Rev. Lett. [**93**]{}, 213001 (2004); Phys. Rev. A [**74**]{}, 032501 (2006).
Z. Zhao, B. J. Braams, H. Fukuda, M. L. Overton, and J. K. Percus, J. Chem. Phys. [**120**]{}, 2095 (2004); M. Fukuda, B. J. Braams, M. Nakata, M. L. Overton, J. K. Percus, M. Yamashita, and Z. Zhao, Math. Program. Ser. B [**109**]{}, 553 (2007).
E. Cancés, G. Stoltz, and M. Lewin, J. Chem. Phys. [**125**]{}, 064101 (2006).
R. M. Erdahl, Adv. Chem. Phys. [**134**]{}, 61 (2007).
B. Verstichel, H. van Aggelen, D. Van Neck, P. W. Ayers, and P. Bultinck, Phys. Rev. A [**80**]{}, 032508 (2009).
N. Shenvi and A. F. Izmaylov, Phys. Rev. Lett. [**105**]{}, 213003 (2010).
D. A. Mazziotti, Phys. Rev. Lett. [**106**]{}, 083001 (2011).
G. Gidofalvi and D. A. Mazziotti, Phys. Rev. A [**74**]{}, 012501 (2006).
A. E. Rothman and D. A. Mazziotti, Phys. Rev. A [**78**]{}, 032510 (2008).
G. Gidofalvi and D. A. Mazziotti, J. Chem. Phys. [**129**]{}, 134108 (2008).
L. Greenman and D. A. Mazziotti, J. Chem. Phys. [**133**]{}, 164110 (2010).
A. Sinitskiy, L. Greenman, and D. A. Mazziotti, J. Chem. Phys. [**133**]{}, 014104 (2010).
H. Kummer, J. Math. Phys. [**8**]{}, 2063 (1967).
R. T. Rockafellar, [*Convex Analysis*]{} (Princeton University Press, Princeton, 1970).
P. R. Surj[á]{}n, [*Second Quantized Approach to Quantum Chemistry: An Elementary Introduction*]{} (Springer-Verlag, New York, 1989).
J. W. Helton, Ann. of Math. [**156**]{}, 675 (2002); J. W. Helton and S. McCullough, Trans. Amer. Math. Soc. [**356**]{}, 3721 (2004).
D. A. Mazziotti, Chem. Phys. Lett. [**289**]{}, 419 (1998).
D. 
A. Mazziotti, Phys. Rev. A [**85**]{}, 062507 (2012). M. M. Deza and M. Laurent, [*Geometry of Cuts and Metrics*]{} (Springer, New York, 1997). P. W. Ayers and E. R. Davidson, Adv. Chem. Phys. [**134**]{}, 443 (2007). E. Kamarchik and D. A. Mazziotti, Phys. Rev. Lett. [**99**]{}, 243002 (2007). J. Crawford, S. Torquato, and F. H. Stillinger, J. Chem. Phys. [**119**]{}, 7065 (2003). I. Pitowsky, Math. Program. [**50**]{}, 395 (1991).
--- abstract: 'Klaassen in [@Klaassen2015] proposed a method for the detection of data manipulation given the means and standard deviations for the cells of a one-way ANOVA design. This comment critically reviews this method. In addition, inspired by this analysis, an alternative approach to test sample correlations over several experiments is derived. The results are in close agreement with the initial analysis reported by an anonymous whistleblower [@Anonymous2012]. Importantly, the statistic requires several similar experiments; a test for correlations between 3 sample means based on a single experiment must be considered unreliable.' author: - Hannes Matuschek bibliography: - 'references.bib' title: 'Fraud detection with statistics: A comment on *Evidential Value in ANOVA-Regression Results in Scientific Integrity Studies* (Klaassen, 2015).' --- Introduction ============ An analysis of means and standard deviations [@Peeters2015], culled from a series of scientific publications, led to a request for retraction of a subset of the papers [@UvA2015]. The analysis was based on a method reported in Klaassen [@Klaassen2015] aimed at detecting a type of data manipulation that causes correlations between condition means of samples that are assumed to be independent. 
Specifically, given a one-way balanced ANOVA design with 3 conditions, $X_{i},\,i=1,\ldots,3$, the means obtained by averaging over the scores of $n$ different subjects in each condition are samples of a 3-dimensional normal distribution $$\label{eq:lik} \left(\begin{array}{c} X_{1}\\ X_{2}\\ X_{3} \end{array}\right) \sim \mathcal{N}\left(\left(\begin{array}{c} \mu_{1}\\ \mu_{2}\\ \mu_{3} \end{array}\right) , n^{-1}\left(\begin{array}{ccc} \sigma_{1}^{2} & \sigma_{1}\sigma_{2}\rho_{1} & \sigma_{1}\sigma_{3}\rho_{2}\\ \sigma_{1}\sigma_{2}\rho_{1} & \sigma_{2}^{2} & \sigma_{2}\sigma_{3}\rho_{3}\\ \sigma_{1}\sigma_{3}\rho_{2} & \sigma_{2}\sigma_{3}\rho_{3} & \sigma_{3}^{2} \end{array}\right)\right) ,$$ where $\mu_{i}$ are the unknown *true* expected values and $\sigma_{i}$ the unknown sample standard deviations of the scores under the respective conditions and $\rho_{i}$ their correlations. The ANOVA assumes independence between the samples of the conditions, such that $\rho_{i}=0$. Indeed, given only samples of $X_{i}$ and estimates of $\sigma_{i}$, the sample correlations $\rho_{i}$ are not directly accessible. \[ht!\] ![Condition means ($x_1$, $x_2$ and $x_3$) and standard deviations for the 12 experiments reported in [@Vision2014]. The condition means $x_1$ and $x_3$ have been connected by a line to visualize the deviance from a perfect linear behavior of the condition means. \[fig:jfd12\]](fig1.pdf "fig:"){width="75.00000%"} An anonymous whistleblower pointed out [@Anonymous2012] that the results in the studies under suspicion (i.e. [@Vision2014], compare Figure \[fig:jfd12\]) show a *super linear* pattern which appears *too good to be true*. Importantly, the authors of the original publications did not necessarily expect such patterns of equidistant means; they expected an ordinal, not a linear, relation between the three condition means. Nevertheless, the reanalyses were carried out under the assumption of an expected strict linear relation between the means. 
The reason was that this strict assumption is conservative with respect to an inference of data manipulation[^1]. Under the assumption of a strictly linear relationship between the group means, $\mu_i=\alpha+\beta\cdot i$, the scores can be described as $X_{i}=\alpha+\beta\cdot i+\epsilon_i$, which implies that $0=E[Z]=E[X_{1}-2X_{2}+X_{3}]=\mu_1-2\mu_2+\mu_3$. This linear combination of sample means $X_i$ yields a new random variable $Z$ with the (univariate) normal distribution $Z\sim\mathcal{N}(0,n^{-1}\sigma_{Z}^{2}(\vec{\sigma},\vec{\rho}))$, where $\sigma_{Z}^{2}(\vec{\sigma},\vec{\rho}) = \sigma_{1}^{2} + 4\,\sigma_{2}^{2} + \sigma_{3}^{2} - 4\sigma_{1}\sigma_{2}\rho_{1} - 4\sigma_{2}\sigma_{3}\rho_{3} + 2\sigma_{1}\sigma_{3}\rho_{2}$. Note that the random variable $Z$ can be seen as the deviance from the strictly linear behavior $\alpha+\beta\, i$. Introducing correlations between the samples increases or decreases the variance of $Z$. Klaassen [@Klaassen2015] assumes that a plausible data manipulation (e.g., adjusting the mean of the middle sample towards the mean of the means of the lower and upper samples to achieve significant differences between the groups) leads to a decrease of the variance of $Z$, $\sigma_{Z}^{2}(\cdot,\cdot)$. Such a variance reduction may have gone unnoticed as *humans tend to underestimate variance* in data. As mentioned above, the results under suspicion show a *super linear* behavior and hence a small variance in $Z$, which may not be expected given the group variances $\sigma_{i}^{2}$ under the assumption of independence. 
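As a concrete illustration, the deviation $z = x_1 - 2x_2 + x_3$ and its standard deviation under $H_0$ can be computed directly from the reported summary statistics. The following numpy sketch is only illustrative; the function name and interface are not taken from [@Klaassen2015]:

```python
import numpy as np

def linearity_deviation(means, sds, n):
    """Deviation from perfect linearity and its standard deviation under H0.

    means, sds: the three reported condition means and standard deviations;
    n: per-condition sample size.  Returns (z, sd_z), where z = x1 - 2*x2 + x3
    and sd_z is the standard deviation of Z for independent groups (rho = 0),
    i.e. sqrt((s1^2 + 4*s2^2 + s3^2) / n).
    """
    m1, m2, m3 = means
    s1, s2, s3 = sds
    z = m1 - 2.0 * m2 + m3
    sd_z = np.sqrt((s1**2 + 4.0 * s2**2 + s3**2) / n)
    return z, sd_z

# Perfectly equidistant means -- the "super linear" extreme -- give z = 0:
z, sd_z = linearity_deviation([1.0, 2.0, 3.0], [0.5, 0.5, 0.5], 25)
```

A triple of means that is exactly equidistant yields $z=0$, i.e. zero deviation from linearity, which is precisely the pattern flagged above as *too good to be true*.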
Consequently, Klaassen [@Klaassen2015] used a simple likelihood-ratio test to decide whether there is evidence for data manipulation, in terms of an *evidential value* $$V=\frac{\underset{\vec{\rho}\in\mathcal{F}}{\text{max}\,}f(z|\sigma_{Z}(\vec{\sigma},\vec{\rho}))}{f(z|\sigma_{Z}(\vec{\sigma},\vec{0}))}\,,$$ comparing the maximum likelihood over all feasible vectors of correlations $\vec{\rho}$ with the likelihood of $z$ under the assumption of $\vec{\rho}=\vec{0}$, where $$\mathcal{F} = \left\{ \vec{\rho}:\rho_{i}\in(-1,1),\,\rho_{1}^{2}+\rho_{2}^{2}+\rho_{3}^{2}-2\rho_{1}\rho_{2}\rho_{3}<1,\right. \left. \,\sigma_{Z}(\vec{\sigma},\vec{\rho})\le\sigma_{Z}(\vec{\sigma},\vec{0})\right\} \,,$$ is the set of feasible correlation vectors; it maintains that the covariance matrix (in eq. \[eq:lik\]) remains positive definite and ensures that $\sigma_{Z}(\vec{\sigma},\vec{\rho})\le\sigma_{Z}(\vec{\sigma},\vec{0})\,\forall\,\vec{\rho}\in\mathcal{F}$. As the *true* sample standard deviations $\vec{\sigma}$ are unknown, they might be replaced by the reported ones $\vec{s}$, since $\vec{s}\rightarrow\vec{\sigma}$ as $n\rightarrow\infty$. An asymptotic test statistic\[sec:teststat\] ============================================ Without knowledge of the test statistic, i.e., the distribution of $V$ under the null hypothesis $H_{0}$ (independent group means), it is not possible to interpret the value $V$ and hence to decide whether a certain value of $V$ does provide evidence for the presence of sample correlations. The estimates of the sample variances ($\vec{s}{}^{2}$) are themselves random variables with some unknown distribution. It is therefore rather unlikely that a closed-form expression for the test statistic can be obtained, even under restrictive assumptions about the distribution of $\vec{s}$. Nevertheless, as proposed by Klaassen [@Klaassen2015], one may assume that asymptotically $\vec{s}\rightarrow\vec{\sigma}$ as $n\rightarrow\infty$. 
Then one can assume that the sample standard deviations $\vec{\sigma}$ are fixed and known, allowing for the construction of an upper-bound asymptotic test statistic. The likelihood of obtaining a specific value $z$, given the sample variances $\vec{\sigma}^{2}$ and correlations $\vec{\rho}$, is $$f(Z=z|\sigma_{Z}(\vec{\sigma},\vec{\rho}))=\frac{\sqrt{n}}{\sqrt{2\pi}\sigma_{Z}(\vec{\sigma},\vec{\rho})}\exp\left\{ -\frac{n\, z^{2}}{2\,\sigma_{Z}^{2}(\vec{\sigma},\vec{\rho})}\right\}$$ and therefore $$V = \underset{\vec{\rho}\in\mathcal{F}}{\text{max}}\, \frac{\sigma_{Z}(\vec{\sigma},\vec{0})}{\sigma_{Z}(\vec{\sigma},\vec{\rho})} \exp\left\{ -\frac{n\, z^{2}}{2\,\sigma_{Z}^{2}(\vec{\sigma},\vec{\rho})} + \frac{n\, z^{2}}{2\,\sigma_{Z}^{2}(\vec{\sigma},\vec{0})} \right\} \,.$$ Now, let $a=\frac{\sigma_{Z}(\vec{\sigma},\vec{\rho})}{\sigma_{Z}(\vec{\sigma},\vec{0})}$ be the relative standard deviation and $\sigma_{0}=\sigma_{Z}(\vec{\sigma},\vec{0})$; then $$V=\underset{a\in\mathcal{A}}{\text{max}}\, a^{-1}\,\exp\left\{ -\frac{n\, z^{2}}{2\, a^{2}\sigma_{0}^{2}}+\frac{n\, z^{2}}{2\sigma_{0}^{2}}\right\} \,.$$ The feasible set $\mathcal{A}$ of all $a$ values is implicitly defined by the feasible set of correlations as $$\mathcal{A}=\left\{ \frac{\sigma_{Z}(\vec{\sigma},\vec{\rho})}{\sigma_{Z}(\vec{\sigma},\vec{0})}:\:\vec{\rho}\in\mathcal{F}\right\} \,.$$ From this it follows immediately that $\mathcal{A}\subseteq(0,1]$, as $\sigma_{Z}(\vec{\sigma},\vec{\rho})\leq\sigma_{Z}(\vec{\sigma},\vec{0})\,\forall\,\vec{\rho}\in\mathcal{F}$. Under a *worst-case* scenario, one may assume $\mathcal{A}=(0,1]$. This implies that for every $a\in(0,1]$ it is possible to find a feasible correlation vector $\vec{\rho}\in\mathcal{F}$ such that $\sigma_{Z}(\vec{\sigma},\vec{\rho})=a\,\sigma_{0}$. Please note that this is not ensured in general. 
The *worst-case* assumption, however, allows one to obtain upper-bounds for the distribution of $V$ under $H_{0}$ analytically by relaxing the constraints on $a$ implied by the feasibility constraints on $\vec{\rho}$. Within this setting one gets $$V\le\hat{V}=\underset{a\in(0,1]}{\mbox{max}}a^{-1}\,\exp\left\{ -\frac{n\, z^{2}}{2a^{2}\sigma_{0}^{2}}+\frac{n\, z^{2}}{2\sigma_{0}^{2}}\right\} \,.$$ With $\tilde{z}=\frac{\sqrt{n}z}{\sigma_{0}}$, the normalized $z$ with respect to the expected standard deviation under $H_{0}$ $$\hat{V}=\underset{a\in(0,1]}{\mbox{max}}a^{-1}\,\exp\left\{ -\frac{\tilde{z}^{2}}{2a^{2}}+\frac{\tilde{z}^{2}}{2}\right\} \,.$$ Straightforward computation reveals $$0=\partial_{a}\left(\log\left[a^{-1}\,\exp\left\{ -\frac{\tilde{z}^{2}}{2a^{2}}+\frac{\tilde{z}^{2}}{2}\right\} \right]\right)\quad\Rightarrow\quad a^{2}=\tilde{z}^{2}\,,$$ and therefore $$\hat{V}=\begin{cases} 1 & :\,\left|\tilde{z}\right|>1\\ \left|\tilde{z}\right|^{-1}\exp\left\{ \frac{\tilde{z}^{2}-1}{2}\right\} & :\,\text{else}\,. \end{cases}$$ Under the *worst-case* scenario, an upper-bound evidential value $\hat{V}\ge V$ can be computed directly without maximizing the likelihood-ratio numerically. This result was also found by Klaassen (compare eq. 18 in [@Klaassen2015]). Knowing that the maximum $\hat{V}$ is achieved at $\tilde{z}^{2}=a^{2}$ and therefore $\frac{nz^{2}}{\sigma_{0}^{2}} = \frac{\sigma_{Z}^{2}(\vec{\sigma},\vec{\rho})}{\sigma_{0}^{2}}$, one may conclude that the likelihood-ratio test compares the expected variance $\sigma_{0}^2$ under $H_{0}$ with a variance estimated from a single sample. Such a variance estimate is known to be unreliable and therefore the evidential value for a single experiment must be unreliable, too. This issue is discussed in detail in the next section. 
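The closed form of $\hat V$ derived above is straightforward to implement. A minimal sketch (the function name is illustrative) evaluates the upper-bound evidential value from the normalized statistic $\tilde z$:

```python
import numpy as np

def v_hat(z_tilde):
    """Worst-case (upper-bound) evidential value for a single experiment.

    z_tilde: the normalized deviation sqrt(n) * z / sigma_0.  For |z_tilde| > 1
    the maximum over a in (0, 1] is attained at the boundary a = 1 and the
    likelihood ratio equals 1; otherwise the maximizer is a = |z_tilde|.
    """
    zt2 = z_tilde * z_tilde
    if zt2 >= 1.0:
        return 1.0
    return np.exp((zt2 - 1.0) / 2.0) / np.sqrt(zt2)
```

Note that $\hat V \to \infty$ as $\tilde z \to 0$: a single perfectly linear result already produces an arbitrarily large evidential value, which illustrates why the single-experiment variance estimate is unreliable.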
Testing multiple experiments\[sec:multi\] ========================================= Klaassen [@Klaassen2015] (see also [@Peeters2015]) suggested obtaining the evidential value $V$ for an article consisting of more than one experiment as the product of the evidential values $V_{j}$ of the single experiments in the article. The evidential value $V$ of a publication given $N$ experiments is then $$V=\prod_{j=1}^{N}V_{j}=\prod_{j=1}^{N}\underset{\vec{\rho}\in\mathcal{F}_{j}}{\max}\frac{f(z_{j}|\sigma_{Z}(\vec{\sigma}_{j},\vec{\rho}))}{f(z_{j}|\sigma_{Z}(\vec{\sigma}_{j},\vec{0}))}\,.$$ Given that $V_{j}\ge1$, this immediately implies that the product grows exponentially with the number of experiments even if $H_{0}$ is true. Instead of obtaining the evidential value for every single experiment in an article, which (in a worst-case scenario) is based on a variance estimator from a single sample ($\sigma_{Z,j}^{2}=n_{j}z_{j}^{2}$), one may try to base that variance estimation on the $N$ samples provided by the $N$ experiments in an article, i.e. $$V=\underset{\vec{\rho}\in\mathcal{F}}{\max}\prod_{j=1}^{N}\frac{f(z_{j}|\sigma_{Z}(\vec{\sigma}_{j},\vec{\rho}))}{f(z_{j}|\sigma_{Z}(\vec{\sigma}_{j},\vec{0}))}\,,$$ where the feasible set $\mathcal{F}=\bigcap_{j=1}^{N}\mathcal{F}_{j}$ is simply the intersection of the feasible sets $\mathcal{F}_{j}$ of the single experiments. The idea of this alternative approach is simple: We cannot make a reliable statement about the probability of observing a single *suspiciously* small $\tilde{z}_{j}$, particularly as $0=E[Z]$ under $H_{0}$. However, observing a *suspiciously* small $\tilde{z}$ repeatedly is unlikely and may indicate sample correlations between groups. 
Following the *worst-case* scenario above, the joint evidential value for $N$ experiments is asymptotically $$\begin{aligned} \hat{V} & = & \underset{a\in(0,1]}{\max}a^{-N}\,\exp\left\{ -\sum_{j=1}^{N}\frac{n_{j}\, z_{j}^{2}}{2\, a^{2}\sigma_{0,j}^{2}}+\sum_{j=1}^{N}\frac{n_{j}\, z_{j}^{2}}{2\sigma_{0,j}^{2}}\right\} \\ & = & \underset{a\in(0,1]}{\max}a^{-N}\,\exp\left\{ -\sum_{j=1}^{N}\frac{\tilde{z}_{j}^{2}}{2\, a^{2}}+\sum_{j=1}^{N}\frac{\tilde{z}_{j}^{2}}{2}\right\} \,,\end{aligned}$$ where again $\tilde{z}_{j}=\frac{\sqrt{n_{j}}z_{j}}{\sigma_{0,j}}$ and $\sigma_{0,j}=\sigma_{Z}(\vec{\sigma}_{j},\vec{0})$. A straightforward computation reveals the surprisingly familiar result $$a^{2}=\frac{1}{N}\sum_{j=1}^{N}\tilde{z}_{j}^{2}\,.$$ This implies that, in a *worst-case* scenario, the joint likelihood-ratio compares a variance estimate based on $N$ samples with the expected one. And finally $$\begin{aligned} \hat{V} & = & \begin{cases} 1 & :\,1\le\frac{1}{N}\sum_{j=1}^{N}\tilde{z}_{j}^{2}\\ \frac{\exp\left\{ -\frac{N}{2}+\frac{\sum_{j=1}^{N}\tilde{z}_{j}^{2}}{2}\right\}}{\left(\frac{1}{N}\sum_{j=1}^{N}\tilde{z}_{j}^{2}\right)^{\frac{N}{2}}}& :\,\text{else}. \end{cases}\end{aligned}$$ Note that the joint evidential value for $N$ experiments relies on the fact that the $\tilde{Z}_{j}\sim\mathcal{N}(0,1)$ are i.i.d. under $H_{0}$ and therefore $\sum_{j=1}^{N}\tilde{Z}_{j}^{2}\sim\chi_{N}^{2}$. Hence the test statistic for sample correlations between groups can be expressed as a simple chi-squared statistic, and one does not need to make the detour of obtaining an approximate distribution of $V$ under $H_{0}$. Relation to the $\Delta F$ test =============================== The $\chi^{2}$-test derived in the last section is closely related to the $\Delta F$-test suggested by the whistleblower [@Anonymous2012]. This test was also included in the report for the University of Amsterdam [@Peeters2015]. 
Under $H_{0}$ and the assumption of a linear trend, the p-values of the $\Delta F$-test for a single experiment within an article are distributed uniformly in $[0,1]$. Using Fisher’s method, it is then possible to obtain a p-value for an article comprising several experiments. The major difference between the two methods is that the $\Delta F$-test first determines a p-value for every study and tests whether the resulting p-values $p_{j}$ are *too good to be true*, while the chi-squared test introduced here assesses this directly by inspecting whether the relative deviations from perfect linearity, $\tilde{z}_{j}^{2}$, are *too good to be true*. Therefore, unsurprisingly, the two methods yield very similar results (see Table \[tab:pvalues\]).

  Article                          $\chi^{2}$-test   $\Delta F$-test   Classification
  -------------------------------- ----------------- ----------------- ----------------
  JF09.JEPG [@Forster2009a]        8.06e-07          2.30e-07          strong
  JF11.JEPG [@Forster2011]         8.73e-07          3.53e-07          strong
  JF.D12.SPPS [@Vision2014]        7.14e-09          1.82e-08          strong
  L.JF09.JPSP [@Liberman2009]      6.44e-4           8.46e-5           strong
  L.JF09.JPSP\*                    0.03              0.02              –
  JF.LS09.JEPG [@Forster2009]      0.25              0.11              strong
  JF.LK08.JPSP [@Forster2008]      0.81              0.66              inconclusive
  D.JF.L09.JESP [@Denzler2009]     0.93              0.52              inconclusive
  Reference [@Hagtvedt2011; @Hunt2008; @Kanten2011; @Lerouge2009; @Malkoc2010; @Polman2011; @Rook2011; @Smith2008; @Smith2006]   0.11   0.14   –

  : Comparison of p-values obtained with the direct $\chi^{2}$ and $\Delta F$ tests for studies classified as providing strong or inconclusive statistical evidence for low veracity by Peeters et al. [@Peeters2015]. The first three studies listed in the table were reported by the whistleblower [@Anonymous2012]. Note the divergence for JF.LS09.JEPG between the present analysis and [@Peeters2015]. 
Only those studies from [@Peeters2015] that comprise at least $8$ experiments were considered here. \[tab:pvalues\] ![The distribution of $\tilde{z}_j$ (short dashes at the bottom of each panel) for each experiment from the articles listed in Table \[tab:pvalues\]. The solid line shows the expected distribution of $\tilde{Z}_j$ under $H_0$ while the dashed line shows the normal distribution with $0$-mean and the variance estimated from the samples $\tilde{z}_j$. \[fig:ztilde\]](fig2.pdf){width="75.00000%"} Both methods, the $\chi^2$ and $\Delta F$ tests, are *conservative* compared to the V-value approach by Klaassen [@Klaassen2015]. For example, the article [JF.LS09.JEPG]{} in Table \[tab:pvalues\] was classified with *strong statistical evidence for low veracity* [@Peeters2015] (compare also Figure \[fig:jfls09\]). In contrast, the $\chi^2$ and $\Delta F$ methods yield p-values of $\approx 0.25$ and $\approx 0.11$, respectively, suggesting that there is no evidence of sample correlations between groups. The three methods agree for the studies [JF.LK08.JPSP]{} and D.JF.L09.JESP which were classified with *inconclusive statistical evidence for low veracity*. The three methods also agree on classifying the three articles reported by the whistleblower [@Anonymous2012] with *strong statistical evidence for low veracity*. Depending on the chosen level of significance, the article L.JF09.JPSP could be classified as *strong* or *inconclusive*. This article contains conditions for which the authors did not expect a specific rank ordering of the condition means. Peeters et al. [@Peeters2015] included these *control conditions* but reordered them according to increasing group means, yielding a p-value for the $\chi^2$-test of about $0.0006$ (L.JF09.JPSP in Table \[tab:pvalues\]). Although the assumption of equidistant group means, i.e. $0=\mu_1-2\mu_2+\mu_3$, contains the assumption of equal group means, i.e. 
$\mu_1 = \mu_2 = \mu_3$ as a special case, the actual test result depends on the ordering of the conditions. Keeping the order of conditions as reported in [@Liberman2009] yields a p-value of about $0.015$ and excluding them results in a p-value of about $0.03$, shown as L.JF09.JPSP\* in Table \[tab:pvalues\]. ![Condition means and standard deviations for 9 experiments from [@Forster2009]. \[fig:jfls09\]](fig3.pdf){width=".75\textwidth"} The discrepancy between the $\chi^2$ or $\Delta F$ methods and the V-value method for the JF.LS09.JEPG article [@Forster2009] is due to the tendency of the V-value method to indicate *strong evidence* if a single experiment out of a series of experiments has a very small $\tilde{z}$-value. In contrast to the V-value method, the $\chi^2$ and the augmented V-method (see Section \[sec:multi\]) take all experiments of an article into account by assuming the same correlation structure for all experiments. For the particular article [@Forster2009], the V-value approach reported *strong evidence for low veracity* because the last two experiments (compare Figure \[fig:jfls09\]) exhibit the *super linear* pattern associated with sample correlations. The $\chi^2$ and $\Delta F$ methods, however, do not indicate significant sample correlations, as the deviances of the remaining experiments fit well into the expected distribution under $H_0$, especially the results in panels 5 & 6 in Figure \[fig:jfls09\]. Klaassen [@Klaassen2015] intended the V-value to be sensitive to single experiments. The argument is that *bad science cannot be compensated by very good science* [@Klaassen2015]. Finding a small value for $\tilde{z}_j$ in a series of experiments, however, is quite probable[^2] even under $H_0$. Hence one could argue that a single *suspiciously small* $\tilde{z}_j$ cannot be interpreted as strong evidence for sample correlations. 
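The direct $\chi^2$ p-values in Table \[tab:pvalues\] follow from the statistic derived in Section \[sec:multi\]: under $H_0$ the $\tilde Z_j$ are i.i.d. standard normal, so $\sum_j \tilde z_j^2 \sim \chi^2_N$, and the left tail measures how improbably *small* the observed deviations from linearity are. A sketch, assuming scipy is available (the function name is illustrative):

```python
import numpy as np
from scipy.stats import chi2

def super_linearity_pvalue(z_tildes):
    """Left-tail chi-squared p-value for 'too good to be true' linearity.

    z_tildes: one normalized deviation sqrt(n_j) * z_j / sigma_{0,j} per
    experiment.  A small p-value means the condition means are collectively
    closer to a straight line than expected under independence, hinting at
    sample correlations between groups.
    """
    z = np.asarray(z_tildes, dtype=float)
    return chi2.cdf(np.sum(z * z), df=z.size)
```

Applied to a single experiment this again amounts to the unreliable one-sample variance comparison criticized above, so the test should only be used across several similar experiments.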
Discussion ========== There is no doubt that, in principle, statistics can be used to detect sample correlations that are due to data manipulation. The approach proposed in [@Klaassen2015], however, is not without problems. A first problem is the missing test statistic for the evidential value $V$. Although an upper-bound asymptotic test statistic for the V-value of a single experiment can be obtained (see Section \[sec:teststat\] above and [@Klaassen2015]), the reliability of the $V$ value for small $n$ remains unknown (as well as how large $n$ must be to be considered *large*). A second problem is the critical value of $V^{*}=6$ chosen by the authors, which implies (asymptotically) $p\approx0.08$. Arguably, this is a rather high probability of falsely accusing a colleague of data manipulation. A third problem is the assumption that the product of the evidence provided by every single experiment in an article can serve as a metric of evidence for data manipulation in this article. As mentioned above as well as in the comments to the article at pubpeer.com [@Pubpeer2015] and in a response by Denzler and Liberman [@Liberman2015], this assumption implies that the evidence for data manipulation grows exponentially with the number of experiments even under $H_{0}$. The probability of $V\ge2$ for a single experiment is about $p\approx0.25$. Thus, about every 4th *good* experiment will double the evidence for data manipulation. The fourth problem, finally, is a general concern. The analysis assumes a specific type of data manipulation. If this is true, the manipulation will induce correlations between condition means. Moreover, under the second assumption that $0=E[X_{1}-2X_{2}+X_{3}]$, this correlation can be detected. Importantly, however, the reverse is not true: The detection of such correlations in the data does not necessarily imply that data were manipulated. For that reason, Peeters et al. 
carefully avoided claiming in [@Peeters2015] that their findings prove that data were manipulated. Instead, the results are interpreted as *evidence for low data veracity*, which is justified. In [@Klaassen2015], however, Klaassen claims that his method provides evidence for manipulation. Although the origin of sample correlations cannot be determined with statistics, their presence certainly violates an ANOVA assumption. This may result in an increased type-I error rate. Therefore, the effects reported in the articles providing strong or possibly even inconclusive evidence for sample correlations (e.g. [@Forster2009a; @Forster2011; @Vision2014; @Liberman2009]) may be less significant than suggested by their ANOVAs. In this comment, specifically in Section \[sec:multi\], the concept of the single-experiment evidential value was extended to multiple experiments. Moreover, a much simpler chi-squared test for the presence of correlations in the data was provided; it is similar to the test proposed in [@Anonymous2012] and yielded very similar probabilities for the presence of sample correlations. Thus, the V-value approach can serve as a test for sample correlations if it is applied across several identical or at least similar experiments. In this case one is also able to decide whether the variability in the results is suspiciously small or not. However, estimating $\sigma_{Z}$ on the basis of a single experiment will certainly not yield a reliable result. [^1]: It is also not clear how a suitable test could be constructed for the assumption that the means are expected only in a monotonic, not necessarily equidistant order. [^2]: E.g., for $10$ experiments ($N=10$), $p\approx 0.4$ for $\alpha=0.05$ and $p\approx 0.1$ for $\alpha=0.01$
--- abstract: 'We give a framework to produce [C$^\ast$]{}-algebra inclusions with extreme properties. This gives the first constructive nuclear minimal ambient [C$^\ast$]{}-algebras. We further obtain a purely infinite analogue of Dadarlat’s modeling theorem on AF-algebras: Every Kirchberg algebra is rigidly and KK-equivalently sandwiched by non-nuclear [C$^\ast$]{}-algebras without intermediate [C$^\ast$]{}-algebras. Finally we reveal a novel property of Kirchberg algebras: They embed into arbitrarily wild [C$^\ast$]{}-algebras as rigid maximal [C$^\ast$]{}-subalgebras.' address: 'Graduate school of mathematics, Nagoya University, Chikusaku, Nagoya, 464-8602, Japan' author: - Yuhei Suzuki title: 'Non-amenable tight squeezes by Kirchberg algebras' --- Introduction ============ Thanks to the recent progress in the classification theory of amenable [C$^\ast$]{}-algebras, Elliott’s classification program has been almost completed; see [@Win] for a recent survey. As crucial ideas and techniques have been developed in this theory (see e.g., [@EGLN], [@Kir], [@MS2], [@Phi], [@TWW], and references in [@Win]), a natural next step is to apply the theory and its byproducts to understand the structure of simple [C$^\ast$]{}-algebras beyond the classifiable class (e.g., the reduced non-amenable group [C$^\ast$]{}-algebras). To take advantage of the rich structures of classifiable [C$^\ast$]{}-algebras in understanding non-amenable [C$^\ast$]{}-algebras, one possible natural strategy is to bridge two [C$^\ast$]{}-algebras from each class via a tight inclusion. Indeed, tight inclusions of operator algebras receive much attention and have been deeply studied by many hands because of their importance in the structure theory of operator algebras: see e.g., [@Ham79], [@Ham85], [@KK], [@Lon], [@Oza07], [@Pop81], [@Pop], [@PopICM]. Recent highlights are the breakthrough results on [C$^\ast$]{}-simplicity [@KK], [@BKKO] (cf.  
[@Ham85]), in which tight inclusions are used to reduce problems on the reduced group [C$^\ast$]{}-algebras to those of less complicated [C$^\ast$]{}-algebras. This suggests the existence of *boundary theory* for general [C$^\ast$]{}-algebras (cf. [@Oza07]). Other strategies based on expansions of [C$^\ast$]{}-algebras also work successfully in the Baum–Connes conjecture [@BCH], see e.g., [@Hig], [@HK]. These significant results are motivations behind the present work. The purpose of the present paper is to establish a new powerful framework to produce extreme examples of tight [C$^\ast$]{}-algebra inclusions. We particularly consider the following three conditions coming from the three different viewpoints: Algebraic side : absence of intermediate [C$^\ast$]{}-algebras (maximality/minimality), Topological side : KK-equivalence, Order structural side : Hamana’s operator system rigidity [@Ham79b]. We note that the third condition is a crucial ingredient of [@KK], [@BKKO]. Several questions on the first condition were posed by Ge [@Ge] for instance. The second condition is partly motivated by the Baum–Connes conjecture (cf.  [@Hig], [@HK]). We now present the Main Theorem of this paper. Throughout the paper, denote by $\mathbb{F}_\infty$ a countable free group of infinite rank. Let $A$ be a simple unital separable purely infinite [C$^\ast$]{}-algebra. Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be an approximately inner [C$^\ast$]{}-dynamical system. Then there is an inner perturbation $\gamma$ of $\alpha$ with the following property: Any [C$^\ast$]{}-dynamical system $\beta \colon \mathbb{F}_\infty \curvearrowright B$ on a simple [C$^\ast$]{}-algebra $B$ with $B^\beta \neq 0$ gives a rigid inclusion $B {\mathop{\rtimes _{{\mathrm r}, \beta}}}\mathbb{F}_\infty \subset (A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \gamma\otimes \beta}}} \mathbb{F}_\infty$ without intermediate [C$^\ast$]{}-algebras. 
Note that the statement does not exclude the case that $B= \mathbb{C}$ and the case that $\alpha$ is the trivial action. Even in these specific cases, the theorem provides many [C$^\ast$]{}-algebra inclusions of remarkable new features. The Main Theorem also sheds some light on [C$^\ast$]{}-dynamical systems. Inner automorphisms are usually regarded as trivial objects in the study of automorphisms of (single) [C$^\ast$]{}-algebras. In fact when two [C$^\ast$]{}-dynamical systems are *cocycle conjugate*, their crossed product [C$^\ast$]{}-algebras are isomorphic. However, the Main Theorem reveals that the associated [C$^\ast$]{}-algebra inclusions can be changed drastically by inner automorphisms. It is also interesting to compare these phenomena with the remarkable rigidity phenomena on the crossed product algebra inclusions studied and conjectured by Neshveyev–St[ø]{}rmer [@NS] (see also the recent works [@CD], [@Suz19]). Applications to Kirchberg algebras {#applications-to-kirchberg-algebras .unnumbered} ---------------------------------- Since it is fairly easy to construct free group [C$^\ast$]{}-dynamical systems (because of the freeness), the Main Theorem has a wide range of applications. Furthermore, although it is not immediately apparent from the statement, the Main Theorem is successfully applied to *arbitrary* Kirchberg algebras. As a consequence, we obtain novel properties of Kirchberg algebras. Recall that a [C$^\ast$]{}-algebra is said to be a *Kirchberg algebra* if it is simple, separable, nuclear, and purely infinite. We refer the reader to the book [@Ror] for basic facts and backgrounds on Kirchberg algebras. Some beautiful and rich features of Kirchberg algebras can be seen from the complete classification theorem of Kirchberg [@Kir] and Phillips [@Phi]. We also refer the reader to [@IM], [@IM2] (cf. [@DP]) for a new interaction between algebraic topology and the symmetry structure of Kirchberg algebras. 
As the first main consequence, we obtain the first constructive examples of nuclear minimal ambient [C$^\ast$]{}-algebras. (Here *constructive* at least means that all constructions are elementary and concretely understandable and avoid the Baire category theorem.) \[Thmint:Main\] Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be a [C$^\ast$]{}-dynamical system on a simple separable nuclear [C$^\ast$]{}-algebra with $A^\alpha \neq 0$. Then the reduced crossed product $A {\mathop{\rtimes _{{\mathrm r}, \alpha}}} \mathbb{F}_\infty$ admits a KK-equivalent rigid embedding into a Kirchberg algebra without intermediate [C$^\ast$]{}-algebras. Note that, in our previous work [@Suzmin], we obtained partial results in specific cases (without KK-condition), which in particular gave the first examples of nuclear minimal ambient [C$^\ast$]{}-algebras. However all constructions in [@Suzmin] depend on the Baire category theorem (applied to the space of Cantor systems of $\mathbb{F}_\infty$). It is a novelty of the present paper that our new constructions are quite elementary and avoid the Baire category theorem. The second main consequence is the following structure result on Kirchberg algebras. This theorem can be considered as a Kirchberg algebra analogue of Dadarlat’s modeling theorem for AF-algebras [@Da]. Since Kirchberg algebras have no elementary inductive limit structure, our approach naturally differs from [@Da]. \[Thmint:Kir\] For every Kirchberg algebra $A$, there are [C$^\ast$]{}-algebra inclusions $B \subset A \subset C$ satisfying the following conditions. - $B$ is non-nuclear, $C$ is non-exact, and both algebras are simple and purely infinite. - The [C$^\ast$]{}-subalgebras $B\subset A$ and $A \subset C$ are rigid and maximal. - These inclusions give KK-equivalences. As the third application, we obtain the following ubiquitous property for some [C$^\ast$]{}-algebras. 
Although the statement holds true for more general [C$^\ast$]{}-algebras, we concentrate on the particularly interesting case. \[Thmint:3\] Let $A$ be a unital Kirchberg algebra. Then for any unital separable [C$^\ast$]{}-algebra $B$, there exists an ambient [C$^\ast$]{}-algebra $C$ of $B$ with a faithful conditional expectation which also contains $A$ as a rigid maximal [C$^\ast$]{}-subalgebra. It is easy to see from the proof that ${\mathop{{\mathrm C}_{\mathrm r}^\ast}}(\mathbb{F}_\infty)$ also has the same property. Moreover, one can show that the *free group factor* $L(\mathbb{F}_\infty)$ has the analogous property by a similar method (Remark \[Rem:LF\]). It would be interesting to ask if the other free group factors $L(\mathbb{F}_n)$, $n=2, 3, \ldots$, have the same property. A key ingredient of the proofs is amenable actions of $\mathbb{F}_\infty$ on Kirchberg algebras [@Suzeq], [@Suz19]. The existence of amenable actions (of non-amenable groups) on *simple* [C$^\ast$]{}-algebras had long been considered unlikely (cf. [@AD02], [@BO]) until [@Suzeq]. (It is notable that such actions do *not* exist in the von Neumann algebra context; see [@AD79], Corollary 4.3.) To exclude intermediate [C$^\ast$]{}-algebras of the reduced crossed product inclusions, as stated in the Main Theorem, we perturb actions by inner automorphisms. In contrast to commutative [C$^\ast$]{}-algebras, purely infinite simple [C$^\ast$]{}-algebras have sufficiently many inner automorphisms (by Cuntz’s result [@Cun]). This provides amenable actions on Kirchberg algebras which sufficiently mix projections. Another important advantage of using Kirchberg algebras is the freedom of their K-theory: in contrast to the fact that the K-groups of compact spaces are restricted (for instance, their K$^0$-groups must have a non-trivial order structure), there is no structural restriction on the K-groups of Kirchberg algebras. 
This leads to useful reduced crossed product decompositions of Kirchberg algebras by $\mathbb{F}_\infty$ (up to stable isomorphism) [@Suz19]. Organization of the paper {#organization-of-the-paper .unnumbered} ------------------------- In Section \[Sec:inn\], we develop techniques on inner perturbations of [C$^\ast$]{}-dynamical systems. This provides [C$^\ast$]{}-dynamical systems with extremely transitive properties. In Section \[Sec:proof\], we study restrictions on intermediate objects of certain structures associated with the [C$^\ast$]{}-dynamical systems obtained in Section \[Sec:inn\]. In Section \[Sec:rigidity\], we discuss rigidity properties of inclusions. In particular, after improving the constructions of inner perturbations in Section \[Sec:inn\], we complete the proof of the Main Theorem. Finally, in Section \[Sec:Kir\], we prove the consequences of the Main Theorem (Theorems \[Thmint:Main\] to \[Thmint:3\]). To handle the non-unital case, we need technical results, which we discuss in the Appendix. As a byproduct, we extend the tensor splitting theorem [@Zac], [@Zsi] (cf. [@GK]) to the non-unital case. Finally, we remark that, except for some cases, the crossed product splitting theorem obtained in [@Suz19] is *not available* because of the failure of central freeness. Our method to exclude intermediate [C$^\ast$]{}-algebras is a sophisticated version of the argument developed in our previous work [@Suzmin]. Because of non-commutativity and K-theoretic obstructions, we need technical improvements. For basic facts on [C$^\ast$]{}-algebras and discrete groups, we refer the reader to the book [@BO]. For basic facts on K-theory and KK-theory, see the book [@Bla]. Notations {#notations .unnumbered} --------- Here we fix some notations. Notations not explained in the article are standard in operator algebra theory. - For $\epsilon>0$ and for two elements $x$, $y$ of a [C$^\ast$]{}-algebra, we write $x\approx_{\epsilon} y$ if $\|x -y\| <\epsilon$. 
- The symbols ‘$\otimes$’, ‘${\mathop{\rtimes _{\mathrm r}}}$’, ‘$\rtimes_{\rm alg}$’ stand for the minimal tensor products (of [C$^\ast$]{}-algebras and completely bounded maps) and the reduced [C$^\ast$]{}- and algebraic crossed products respectively. - For a [C$^\ast$]{}-algebra $A$, denote by $A^{\rm p}$, $A_+$ the set of projections and the cone of positive elements in $A$ respectively. - For a unital [C$^\ast$]{}-algebra $A$, denote by $A^{\rm u}$ the group of unitary elements in $A$. - For a [C$^\ast$]{}-algebra $A$, denote by $\mathcal{M}(A)$, $Z(A)$, $A{^{\ast\ast}}$ the multiplier algebra of $A$, the center of $A$, and the second dual of $A$ respectively. - When there is an obvious [C$^\ast$]{}-algebra embedding $A \rightarrow \mathcal{M}(B)$, we regard $A$ as a [C$^\ast$]{}-subalgebra of $\mathcal{M}(B)$ via the obvious embedding. Such a situation often occurs in the tensor product, the free product, and the crossed product constructions. - For the reduced crossed product $A {\mathop{\rtimes _{\mathrm r}}}\Gamma$ and $s\in \Gamma$, denote by $u_s$ its canonical implementing unitary element in $\mathcal{M}(A {\mathop{\rtimes _{\mathrm r}}}\Gamma)$. - For the reduced crossed product $A {\mathop{\rtimes _{\mathrm r}}}\Gamma$, denote by $E\colon A{\mathop{\rtimes _{\mathrm r}}}\Gamma \rightarrow A$ the conditional expectation satisfying $E(a u_s)=0$ for all $a\in A$ and $s\in \Gamma \setminus \{e\}$ (called the canonical conditional expectation). 
- For a [C$^\ast$]{}-dynamical system $\alpha \colon \Gamma \curvearrowright A$, denote by $A^\alpha$ the fixed point algebra of $\alpha$: $$A^\alpha:=\{ a\in A: \alpha_s(a)=a {\rm~for~all~}s\in \Gamma\}.$$ - For two [C$^\ast$]{}-dynamical systems $\alpha \colon \Gamma \curvearrowright A$ and $\beta \colon \Gamma \curvearrowright B$, denote by $\alpha \otimes \beta$ the diagonal action of $\alpha$ and $\beta$, that is, the action $\Gamma \curvearrowright A \otimes B$ defined to be $(\alpha \otimes \beta)_s := \alpha_s \otimes \beta_s$ for $s\in \Gamma$. - For a [C$^\ast$]{}-algebra $A$, denote by $1$ the unit of $\mathcal{M}(A)$. - For a unital [C$^\ast$]{}-algebra $A$, denote by $\mathbb{C}$ the subspace of $A$ spanned by $1$. Inner perturbations of [C$^\ast$]{}-dynamical systems {#Sec:inn} ===================================================== In this section, we develop techniques on *inner perturbations* of free group [C$^\ast$]{}-dynamical systems. This provides [C$^\ast$]{}-dynamical systems with an extreme transitivity; see Proposition \[Prop:upert\]. The results in this section play crucial roles in the proof of the Main Theorem. We first introduce the following metric spaces of projections in a [C$^\ast$]{}-algebra. Let $A$ be a [C$^\ast$]{}-algebra. For $x_1, x_2 \in {\mathrm K}_0(A)$, we define $${\rm P}(A; x_1, x_2):= \left\{(p_1, p_2) \in (A^{\rm p} \setminus \{0 \})^2: [p_1]_0=x_1,\ [p_2]_0=x_2,\ p_1 \perp p_2,\ p_1 + p_2 \neq 1\right\}$$ (possibly empty). We equip ${\rm P}(A; x_1, x_2)$ with the metric given by the [C$^\ast$]{}-norm on $A\oplus A$. Note that each ${\rm P}(A; x_1, x_2)$ is closed in $A\oplus A$. Observe that every automorphism $\alpha$ on $A$ which acts trivially on the K$_0$-group induces an isometric homeomorphism $$(p_1, p_2) \mapsto (\alpha(p_1), \alpha(p_2))$$ on each ${\rm P}(A; x_1, x_2)$. 
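Indeed, the observation at the end of the previous paragraph follows from a routine check, which we spell out for completeness: let $\alpha$ be an automorphism of $A$ acting trivially on ${\mathrm K}_0(A)$ and let $(p_1, p_2) \in {\rm P}(A; x_1, x_2)$. Since $\alpha$ extends to a unital automorphism of $\mathcal{M}(A)$, we have $$[\alpha(p_i)]_0=\alpha_\ast([p_i]_0)=x_i, \qquad \alpha(p_1)\alpha(p_2)=\alpha(p_1 p_2)=0, \qquad \alpha(p_1)+\alpha(p_2)=\alpha(p_1+p_2)\neq 1,$$ so $(\alpha(p_1), \alpha(p_2))$ again lies in ${\rm P}(A; x_1, x_2)$, and the induced map is isometric because every $\ast$-automorphism of $A$ is isometric on $A \oplus A$.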
In this paper we employ the following definition of amenability for [C$^\ast$]{}-dynamical systems, which was introduced by Anantharaman-Delaroche [@AD]. \[Def:ame\] A [C$^\ast$]{}-dynamical system $\alpha \colon \Gamma \curvearrowright A$ is said to be *amenable* if the induced action $\Gamma \curvearrowright Z(A{^{\ast\ast}})$ is amenable in the von Neumann algebra sense. Although there is another property of [C$^\ast$]{}-dynamical systems called amenable (see e.g., [@BO]), in this paper amenability always means the property in Definition \[Def:ame\] unless otherwise specified. In this paper, we do not use the definition directly, but use the following facts. 1. It is clear from the definition that when one of two [C$^\ast$]{}-dynamical systems $\alpha$, $\beta$ of $\Gamma$ is amenable, so is $\alpha \otimes \beta$. 2. When the underlying [C$^\ast$]{}-algebra is nuclear, amenability of [C$^\ast$]{}-dynamical systems is equivalent to the nuclearity of the reduced crossed product ([@AD], Theorem 4.5). We give one more basic property of amenability. This immediately follows from the definition, but plays an important role in this paper. Before giving the statement, we recall and introduce a few definitions. Recall that an automorphism $\alpha$ of a [C$^\ast$]{}-algebra $A$ is said to be *inner* if there exists $u\in \mathcal{M}(A)^{\rm u}$ satisfying $\alpha(x)={\mathop{\rm ad}}(u)(x):=uxu^\ast$ for all $x\in A$. Denote by ${\rm Inn}(A)$ the group of inner automorphisms of $A$. For two [C$^\ast$]{}-dynamical systems $\alpha, \beta \colon \Gamma \curvearrowright A$, we say that *$\beta$ is an inner perturbation of $\alpha$* if $\beta_s \circ \alpha_{s}^{-1} \in {\rm Inn}(A)$ for all $s\in \Gamma$. Note that, as ${\rm Inn}(A)$ forms a normal subgroup in the automorphism group of $A$, to check that $\beta$ is an inner perturbation of $\alpha$, we only need to confirm the condition on a generating set of $\Gamma$. 
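To make the last reduction explicit: if $\beta_s \circ \alpha_s^{-1}$ and $\beta_t \circ \alpha_t^{-1}$ are inner, then $$\beta_{st}\circ\alpha_{st}^{-1}=\beta_s\circ(\beta_t\circ\alpha_t^{-1})\circ\beta_s^{-1}\circ(\beta_s\circ\alpha_s^{-1}), \qquad \beta_{s^{-1}}\circ\alpha_{s^{-1}}^{-1}=\beta_s^{-1}\circ(\beta_s\circ\alpha_s^{-1})^{-1}\circ\beta_s,$$ and both right hand sides lie in ${\rm Inn}(A)$ by the normality of ${\rm Inn}(A)$. Hence the set of $s\in \Gamma$ with $\beta_s \circ \alpha_s^{-1} \in {\rm Inn}(A)$ is a subgroup of $\Gamma$, and it is all of $\Gamma$ once it contains a generating set.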
\[Lem:ame\] Amenability of [C$^\ast$]{}-dynamical systems is stable under inner perturbations. Inner automorphisms on a [C$^\ast$]{}-algebra $A$ induce the identity map on $Z(A{^{\ast\ast}})$. We remark that ${\rm Inn}(A)$ in Lemma \[Lem:ame\] is not replaceable by its pointwise norm closure (*the group of approximately inner automorphisms*). In fact, when the acting group is a non-commutative free group, by [@KOS], any [C$^\ast$]{}-dynamical system on a simple separable [C$^\ast$]{}-algebra admits a non-amenable approximately inner perturbation (see the Proposition in [@Suzfp] for details and an application). We next recall the following basic observation. The proof is essentially contained in [@Cun] and should be well-known, but we include it for completeness. \[Lem:trans\] Let $A$ be a purely infinite simple [C$^\ast$]{}-algebra. Then for any $x_1, x_2 \in {\mathrm K}_0(A)$, the induced action ${\rm Inn}(A) \curvearrowright {\rm P}(A; x_1, x_2)$ is transitive. Note first that in the unital case, the statement follows from [@Cun], Section 1. To consider the non-unital case, we first show that each orbit of the action is open in ${\rm P}(A; x_1, x_2)$. Let $(p_1, p_2), (q_1, q_2) \in {\rm P}(A; x_1, x_2)$ be given. Assume that $\|p_1 - q_1\|, \|p_2 -q_2\|<1/12$. By Lemma 7.2.2 in [@BO], there is $u \in \mathcal{M}(A)^{\rm u}$ with $q_1=up_1u^\ast$, $\|1-u\|< 1/3$. This implies $\|up_2u^\ast - q_2\|< 1$. Applying Lemma 7.2.2 in [@BO] to the projections $up_2u^\ast$, $q_2$ in the [C$^\ast$]{}-algebra $(1-q_1)\mathcal{M}(A)(1-q_1)$, we obtain $v\in \mathcal{M}(A)^{\rm u}$ with $vup_2u^\ast v^\ast =q_2$, $vq_1=q_1$. Now it is clear that ${\mathop{\rm ad}}(vu)(p_i) =q_i$ for $i=1, 2$. By [@Zha], $A$ admits a (not necessarily increasing) approximate unit $(e_j)_{j \in J}$ consisting of projections. 
Thus for any $(p_1, p_2)\in {\rm P}(A; x_1, x_2)$, by standard applications of functional calculus and the observation in the previous paragraph, for any sufficiently large $j \in J$, one can find $\alpha \in {\rm Inn}(A)$ with $\alpha(p_1+p_2) \lneq e_j$. This reduces the proof to the unital case, and thus completes the proof. Now we are able to show the following result. We say that a group action $\Gamma \curvearrowright X$ on a topological space is *minimal* if all $\Gamma$-orbits are dense in $X$. \[Prop:upert\] Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be a [C$^\ast$]{}-dynamical system on a separable purely infinite simple [C$^\ast$]{}-algebra $A$ whose induced action on ${\mathrm K}_0(A)$ is trivial. Then there exists an inner perturbation $\beta$ of $\alpha$ satisfying the following conditions. 1. For any $x_1, x_2 \in {\mathrm K}_0(A)$, the induced action $\mathbb{F}_\infty \curvearrowright {\rm P}(A; x_1, x_2)$ of $\beta$ is minimal. 2. Let $S_\beta$ denote the set of all $p \in A^{\rm p}$ whose stabilizer subgroup of $\beta$ contains at least two canonical generating elements of $\mathbb{F}_\infty$. Then $S_\beta$ is dense in $A^{\rm p}$. Since any automorphism on $A$ which acts trivially on K$_0(A)$ induces an isometric homeomorphism on each ${\rm P}(A; x_1, x_2)$, to check condition (1), we only need to find a dense orbit in each ${\rm P}(A; x_1, x_2)$. For each $x_1, x_2 \in {\mathrm K}_0(A)$, choose a dense sequence $(p[x_1, x_2, n, 1], p[x_1, x_2, n, 2])_{n =1}^\infty$ in $ {\rm P}(A; x_1, x_2)$ such that each term appears at least twice in the sequence. Denote by $S$ the canonical generating set of $\mathbb{F}_\infty$. 
We fix a bijective map $$f \colon {\mathrm K}_0(A) \times {\mathrm K}_0(A) \times \mathbb{N}\times \{1, 2\} \rightarrow S.$$ For each $(x_1, x_2, n) \in {\mathrm K}_0(A) \times {\mathrm K}_0(A) \times \mathbb{N}$, choose $u[x_1, x_2, n, i] \in \mathcal{M}(A)^{\rm u}$; $i=1, 2$ satisfying $$({\mathop{\rm ad}}(u[x_1, x_2, n, 1])\circ \alpha_{f(x_1, x_2, n, 1)})(p[x_1, x_2, 1, j])= p[x_1, x_2, n, j],$$ $$({\mathop{\rm ad}}(u[x_1, x_2, n, 2]) \circ\alpha_{f(x_1, x_2, n, 2)})(p[x_1, x_2, n, j])= p[x_1, x_2, n, j],$$ for $j=1, 2$ (this is possible by Lemma \[Lem:trans\]). For $s\in S$, define $$\beta_{s}:={\mathop{\rm ad}}(u[f^{-1}(s)]) \circ \alpha_{s}.$$ This formula defines an inner perturbation $\beta$ of $\alpha$. It is clear from the choice of $u$’s that $\beta$ satisfies the required conditions. Transitivity conditions and absence of intermediate objects {#Sec:proof} =========================================================== In this section, we use conditions (1) and (2) in Proposition \[Prop:upert\] to exclude certain intermediate objects. These two conditions can be seen as a non-commutative variant of the property $\mathcal{R}$ defined for Cantor systems in [@Suzmin], Proposition 3.3. On the one hand, because the property $\mathcal{R}$ requires an extreme transitivity (seemingly opposite to amenability of topological dynamical systems), it seems hopeless to obtain a constructive amenable example. On the other hand, in contrast to this, we have already obtained constructive amenable [C$^\ast$]{}-dynamical systems satisfying these two conditions, thanks to high non-commutativity of the underlying algebras. A ($\mathbb{C}$-linear) subspace $X$ of a [C$^\ast$]{}-algebra is said to be *self-adjoint* if $x^\ast \in X$ for all $x\in X$. For a [C$^\ast$]{}-dynamical system $\alpha \colon \Gamma \curvearrowright A$, a subspace $X \subset A$ is said to be *$\alpha$-invariant* if it satisfies $\alpha_s(X)=X$ for all $s\in \Gamma$. 
To study intermediate [C$^\ast$]{}-algebras, we first show that, when the underlying algebra is simple and purely infinite, from condition (1), we obtain the best possible restriction on invariant closed subspaces. The reason why we need to study subspaces rather than just subalgebras is as follows. For a [C$^\ast$]{}-subalgebra $C$ of the reduced crossed product $A {\mathop{\rtimes _{{\mathrm r}, \alpha}}} \Gamma$ satisfying $u_sCu_s^\ast =C$ for all $s\in \Gamma$, the set $E(C)$ forms an $\alpha$-invariant self-adjoint subspace of $A$, but it is *not necessarily a subalgebra*. \[Prop:invsp\] Let $A$ be a purely infinite simple [C$^\ast$]{}-algebra. Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be a [C$^\ast$]{}-dynamical system which acts trivially on ${\mathrm K}_0(A)$ and satisfies condition $(1)$ in Proposition \[Prop:upert\]. Then $0$, $\mathbb{C}$, $A$ are the only possible $\alpha$-invariant closed self-adjoint subspaces of $A$. Let $X$ be an $\alpha$-invariant closed self-adjoint subspace of $A$ different from $0$ and $\mathbb{C}$. It suffices to show that $X=A$. We first note that, in the unital case, the subspace $X+\mathbb{C}$ is also $\alpha$-invariant, closed, and self-adjoint. We observe that the equality $X+\mathbb{C} =A$ implies $X=A$. To see this, assume that $X+\mathbb{C}=A$, $X \neq A$, and denote by $\varphi$ the (nonzero bounded) linear functional on $A$ defined by the (Banach space) quotient map $A \rightarrow A/X \cong \mathbb{C}$. Then, since $X$ is $\alpha$-invariant, so is $\varphi$ (that is, $\varphi \circ \alpha_s=\varphi$ for all $s\in \mathbb{F}_\infty$). It follows from condition (1) that for any two projections $p_1, p_2 \in A^{\rm p}\setminus \{0, 1\}$ with $[p_1]_0 =[p_2]_0$, we have $\varphi(p_1)=\varphi(p_2)$. Since $\varphi(1)\neq 0$, one can find $p\in A^{\rm p}\setminus \{0, 1\}$ with $\varphi(p)\neq 0$. 
Choose pairwise orthogonal nonzero projections $(p_n)_{n=1}^\infty$ in $A$ satisfying $[p_n]_0 =[p]_0$ for all $n\in \mathbb{N}$. Then for any $N\in \mathbb{N}$, we have $N|\varphi(p)|= |\varphi(\sum_{n=1}^N p_n)|\leq \|\varphi\|$. This is a contradiction. Thus, in the unital case, we only need to show the statement under the additional assumption that $\mathbb{C} \subsetneq X$. We will prove $ A^{\rm p} \subset X$ under this assumption, which implies $X=A$ by [@Zha]. Choose a self-adjoint contractive element $h$ in $X \setminus \mathbb{C}$ whose spectrum contains $0$ and $1$. Let $\epsilon>0$ and $p \in A^{\rm p} \setminus \{0, 1\}$ be given. Since $A$ is of real rank zero [@Zha] (see also [@BP]), there exist nonzero pairwise orthogonal projections $p_1, \ldots, p_l$ in $A$ and a sequence $\lambda_1, \ldots, \lambda_{l-1}, \lambda_l=1$ in $[-1, 1] \subset \mathbb{R}$ satisfying $$h \approx_{\epsilon} \sum_{i=1}^l \lambda_i p_i,\qquad \sum_{i=1}^l p_i \neq 1.$$ By splitting the last term $p_l$ into two new projections if necessary, we may assume that $[p_l]_0 = [p]_0$. We put $G:=\{\alpha_s: s\in \mathbb{F}_\infty\}$ for short. Take $q, r_1\in A^{\rm p} \setminus\{0, 1\}$ satisfying $$q \perp r_1,\qquad q+r_1 \perp \sum_{i=1}^l p_i,\qquad [q]_0=[\sum_{i=1}^{l-1} p_i]_0.$$ Next we fix a real number $$0<\delta< \min \left\{1,\ \frac{\epsilon-\|h-\sum_{i=1}^l \lambda_i p_i\|}{4l}\right\}.$$ By applying condition (1) to $(\sum_{i=1}^{l-1} p_i, p_l) \in {\rm P}(A; [q]_0, [p_l]_0)$, we obtain $\gamma_1\in G$ satisfying $$\sum_{i=1}^{l-1} \gamma_1(p_i) \approx_{\delta} q,\qquad \gamma_1(p_l) \approx_\delta p_l.$$ By Lemma 7.2.2 (1) in [@BO], one can take $u \in \mathcal{M}(A)^{\rm u}$ satisfying $$\|u-1\|< 4\delta,\qquad u\left(\sum_{i=1}^{l-1}\gamma_1(p_i)\right)u^\ast =q.$$ Set $q_i:= u \gamma_1(p_i) u^\ast (\approx_{8\delta} \gamma_1(p_i))$ for $i=1, \ldots, l-1$. 
Then, since $|\lambda_i| \leq 1$ for all $i$, we have $$\sum_{i=1}^l \lambda_i \gamma_1(p_i)\approx_{8l\delta} p_l + \sum_{i=1}^{l-1} \lambda_i q_i.$$ Set $$x_2:= \frac{1}{2}\sum_{i=1}^{l-1} \lambda_i(p_i +q_i) \in (1-p_l-r_1)A(1-p_l-r_1).$$ Note that $x_2$ is self-adjoint and $\|x_2\|\leq 1/2$. Since $4l\delta + \|h-\sum_{i=1}^l \lambda_i p_i\|<\epsilon$, we obtain $$h_2:=\frac{1}{2}(h+ \gamma_1(h)) \approx_{\epsilon} p_l + x_2.$$ Next we apply the same argument to $p_l + x_2$ and $\epsilon - \| h_2-(p_l + x_2)\|$ instead of $h$ and $\epsilon$ (with the same $p_l$). As a result we obtain $\gamma_2\in G$, $r_2\in A^{\rm p} \setminus \{0, 1\}$ with $r_2 \perp p_l$, and a self-adjoint element $x_3$ in $(1-p_l-r_2)A(1-p_l-r_2)$ satisfying $$h_3 := \frac{1}{2}[h_2+ \gamma_2(h_2)]\approx_\epsilon p_l + x_3,\qquad \|x_3\|\leq \frac{1}{2^2}.$$ Fix $N \in \mathbb{N}$ satisfying $2^{-(N-1)} <\epsilon$. By iterating this argument $N$ times, we finally obtain $h_N \in X$ and $x_N \in (1- p_l)A(1-p_l)$ satisfying $$h_N \approx_{\epsilon} p_l + x_N,\qquad \|x_N\|\leq \frac{1}{2^{N-1}}<\epsilon.$$ Choose $\gamma \in G$ satisfying $\gamma(p_l)\approx_{\epsilon} p$ (which exists by condition (1)). We then obtain $p \approx_{3\epsilon} \gamma(h_N) \in X$. Since $\epsilon>0$ is arbitrary, we conclude $p\in X$. \[Rem:sf\] In the stably finite case, one cannot expect to find actions satisfying the conclusion of Proposition \[Prop:invsp\]. Indeed, let $A$ be a non-commutative [C$^\ast$]{}-algebra. Then, for any non-empty set $S$ of tracial states on $A$, the subspace $\bigcap_{\alpha\in \mathrm{Aut}(A)}\bigcap_{\tau \in S} \ker(\tau\circ \alpha) \subset A$ is proper, closed, self-adjoint, and invariant under $\mathrm{Aut}(A)$. (Note that by the Hahn–Banach theorem, this subspace is nonzero.) We now combine the subspace restriction obtained in Proposition \[Prop:invsp\] with condition (2) to effectively apply the Powers averaging argument [@Pow], [@HS]. 
As a result, we obtain strong restrictions of some reduced crossed product inclusions associated with [C$^\ast$]{}-dynamical systems obtained in Proposition \[Prop:upert\]. \[Thm:inter\] Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be an action on a purely infinite simple [C$^\ast$]{}-algebra satisfying conditions $(1)$ and $(2)$ in Proposition \[Prop:upert\]. Let $\beta \colon \mathbb{F}_\infty \curvearrowright B$ be an action on a simple [C$^\ast$]{}-algebra with $B^\beta \neq 0$. Then $0$, $B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty$, $(A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}}\mathbb{F}_\infty$ are the only possible [C$^\ast$]{}-subalgebras of $(A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}}\mathbb{F}_\infty$ invariant under multiplications by $B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty$. Thus, when we additionally assume that $A$ is unital, the reduced crossed product inclusion $B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty \subset (A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}}\mathbb{F}_\infty$ has no intermediate [C$^\ast$]{}-algebras. Let $C$ be a [C$^\ast$]{}-subalgebra of $(A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}} \mathbb{F}_\infty$ as in the statement. We first consider the case that $E(C) \subset B$. When $A$ is non-unital, this implies $C=0$. When $A$ is unital, since $C=Cu_s$ for all $s\in \mathbb{F}_\infty$, by Proposition 3.4 of [@Suz17], we have $C\subset B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty$. Since $B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty$ is simple ([@HS], Theorem I), this yields $C=0$ or $C= B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty$. We next consider the case $E(C)\not\subset B$. Observe that $E(C)$ is an $(\alpha \otimes \beta)$-invariant self-adjoint subspace of $A\otimes B$. 
Hence the elements of the form $({\mbox{\rm id}}_A \otimes \varphi)(E(c))$, where $\varphi$ is a pure state on $B$ and $c \in C$, span an $\alpha$-invariant self-adjoint subspace of $A$. By (the easy part of) Theorem \[Thm:TS\], this subspace is not contained in $\mathbb{C}$. Therefore, by Proposition \[Prop:invsp\], for any $\epsilon>0$ and any $p\in A^{\rm p}$, one can choose pure states $\varphi_1, \ldots, \varphi_n$ on $B$ and $c_1, \ldots, c_n \in C \setminus\{0\}$ satisfying $$\sum_{i=1}^n({\mbox{\rm id}}_A \otimes \varphi_i)(E(c_i))\approx_\epsilon p.$$ By the Akemann–Anderson–Pedersen excision theorem [@AAP] ([@BO], Theorem 1.4.10) (applied to each $\varphi_i$), for each $i=1, \ldots, n$, one can choose $b_i \in B_+$ satisfying $$\|b_i\|=1,\qquad b_i E(c_i) b_i \approx_{\epsilon/2n} [({\mbox{\rm id}}_A \otimes \varphi_i)(E(c_i))]\otimes b_i^2.$$ We fix an element $b\in (B^\beta)_{+}$ with $\|b\|=1$. By applying Lemma \[Lem:simple\] to each $b_i^2$ in $B$, we obtain finite sequences $(v_{i, j})_{j=1}^{l(i)}$, $i=1, \ldots, n$, in $B$ satisfying $$\|\sum_{j=1}^{l(i)} v_{i, j} b_i^2 v_{i, j}^\ast - b\|<\frac{\epsilon}{2n\|c_i\|},\qquad \sum_{j=1}^{l(i)} v_{i, j} v_{i, j}^\ast \leq 1.$$ Set $x_{i, j}:= v_{i, j}b_i \in B \subset \mathcal{M}((A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}}\mathbb{F}_\infty)$ for $i=1, \ldots, n$ and $j=1, \ldots, l(i)$. We then obtain $$E(\sum_{i=1}^n\sum_{j=1}^{l(i)} x_{i, j} c_i x_{i, j}^\ast)=\sum_{i=1}^n\sum_{j=1}^{l(i)} x_{i, j} E(c_i) x_{i, j}^\ast \approx_{\epsilon} \left[\sum_{i=1}^n ({\mbox{\rm id}}_A \otimes \varphi_i)(E(c_i))\right]\otimes b \approx_{\epsilon} p \otimes b.$$ Summarizing the result, we have shown that, for any $p\in A^{\rm p}$, any $b\in (B^\beta)_+$, and any $\epsilon>0$, there exists $c\in C$ satisfying $$E(c) \approx_\epsilon p \otimes b.$$ Fix $p\in A^{\rm p}$, $\epsilon>0$, $b\in (B^\beta)_+\setminus \{0\}$, and take $c\in C$ satisfying $E(c) \approx_\epsilon p \otimes b$. 
We further assume that the stabilizer subgroup of $p$ contains at least two canonical generating elements $s_1$, $s_2$ of $\mathbb{F}_\infty$. Choose $c_0 \in (A \otimes B) {\mathop{\rtimes _{\mathrm{alg}}}}\mathbb{F}_\infty$ satisfying $c_0 \approx_{\epsilon} c$, $E(c_0)=p \otimes b$. We apply the Powers argument [@Pow], [@HS] to $c_0 - p\otimes b$ by using $s_1, s_2$ (cf.  [@Suzmin], Lemma 3.8). As a result, we obtain a sequence $g_1, \ldots, g_m$ in $\langle s_1, s_2 \rangle$ (the subgroup generated by $s_1$ and $s_2$) satisfying $$\frac{1}{m} \sum_{i=1}^m u_{g_i} c_0 u_{g_i}^\ast \approx_{\epsilon} p \otimes b.$$ This implies $$p \otimes b\approx_{2\epsilon}\frac{1}{m} \sum_{i=1}^m u_{g_i} c u_{g_i}^\ast \in C.$$ Since $\epsilon>0$ is arbitrary, we conclude $p \otimes b\in C$. Since $\alpha$ satisfies condition (2), $A$ is of real rank zero [@Zha], and $B$ is simple, we obtain $C=(A \otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha\otimes \beta}}}\mathbb{F}_\infty$. Rigidity properties of inclusions {#Sec:rigidity} ================================= We establish a rigidity of automorphisms for inclusions obtained in Theorem \[Thm:inter\]. It is notable that the same property plays an important role in the proof of the Galois correspondence theorem of Izumi–Longo–Popa [@ILP] (see also [@Lon]). We then slightly modify Proposition \[Prop:upert\] to give rigid inclusions. This completes the proof of the Main Theorem. By using the spectral $M$-subspaces (see [@Ped], Definition 8.1.3), we obtain the following result from Proposition \[Prop:invsp\]. \[Lem:comm\] Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be a [C$^\ast$]{}-dynamical system as in Proposition \[Prop:invsp\]. Assume that $A$ is unital. Then there is no non-trivial automorphism of $A$ commuting with $\alpha$. Let $\beta$ be an automorphism of $A$ commuting with $\alpha$. 
Then for any closed subset $K \subset \widehat{\mathbb{Z}}$ with $K^{-1}=K$, the $M$-subspace $M^\beta(K)$ forms a closed self-adjoint $\alpha$-invariant subspace of $A$. Since $1\in M^\beta(\{1\})$, by Theorem 8.1.4 (iv), (ix) of [@Ped], we have $1\not\in M^\beta(K)$ whenever $1\not\in K$. By Proposition \[Prop:invsp\], this forces $M^\beta(K)=0$ for any compact subset $K\subset \widehat{\mathbb{Z}} \setminus \{1\}$. Consequently, by Theorem 8.1.4 (iii), (vii), (viii), (ix) of [@Ped], we have $M^\beta(\{1\})=A$. Corollary 8.1.8 in [@Ped] now yields $\beta={\mbox{\rm id}}_A$. For a completely positive map $\Phi \colon A \rightarrow B$ between [C$^\ast$]{}-algebras, when it extends to a completely positive map $\mathcal{M}(A) \rightarrow \mathcal{M}(B)$ which is strictly continuous on the unit ball, we denote by $\Phi^{\mathcal{M}}$ such a (unique) extension. Such an extension exists if $\Phi$ maps an approximate unit of $A$ to that of $B$; see Corollary 5.7 in [@Lan]. It is obvious that all completely positive maps appearing below satisfy this condition. \[Prop:rigid\] Let $\alpha, \beta$ be as in Theorem \[Thm:inter\]. Assume that $A$ is unital. Then the inclusion $C:=B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty \subset (A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}}\mathbb{F}_\infty:=D$ has the following property. If two automorphisms $\gamma_1$, $\gamma_2$ of $D$ coincide on $C$, then $\gamma_1=\gamma_2$. To prove the statement, it suffices to show the following claim. Any automorphism $\gamma$ on $D$ with $\gamma|_{C} ={\mbox{\rm id}}_{C}$ must be trivial. Let such a $\gamma$ be given. We will show that $\gamma^{\mathcal{M}}(A)=A$. To see this, we first show that $E^{\mathcal{M}}(\gamma^{\mathcal{M}}(A)) \subset A$. Take $x \in E^{\mathcal{M}}(\gamma^{\mathcal{M}}(A)) \subset \mathcal{M}(A\otimes B)$. Then note that $x$ commutes with $B$ (by standard arguments on multiplicative domains). 
Therefore, for any state $\varphi$ on $A$, we have $$(\varphi \otimes {\mbox{\rm id}}_B)^{\mathcal{M}}(x)\in Z(\mathcal{M}(B)) = \mathbb{C}.$$ Here and below, we regard $\mathcal{M}(A)=A$ and $\mathcal{M}(B)$ as [C$^\ast$]{}-subalgebras of $\mathcal{M}(A\otimes B)$ in the obvious way. This shows that, for any state $\psi$ on $B$, $$(\varphi \otimes {\mbox{\rm id}}_B)^{\mathcal{M}}(({\mbox{\rm id}}_A \otimes \psi)^{\mathcal{M}}(x))= ({\mbox{\rm id}}_A \otimes \psi)^{\mathcal{M}}((\varphi \otimes {\mbox{\rm id}}_B)^{\mathcal{M}}(x))=(\varphi \otimes {\mbox{\rm id}}_B)^{\mathcal{M}}(x).$$ Observe that the maps $(\varphi \otimes {\mbox{\rm id}}_B)^{\mathcal{M}}$; $\varphi$ a state on $A$, separate the points of $\mathcal{M}(A\otimes B)$. Therefore we conclude $$x= ({\mbox{\rm id}}_A \otimes \psi)^{\mathcal{M}}(x)\in A.$$ Since $\gamma^{\mathcal{M}}(u_s)=u_s$ for all $s\in \mathbb{F}_\infty$, the unital completely positive map $$(E^{\mathcal{M}} \circ \gamma^{\mathcal{M}})|_A \colon A \rightarrow A$$ is $\mathbb{F}_\infty$-equivariant. Proposition 3.1 shows that $E^{\mathcal{M}}(\gamma^{\mathcal{M}}(A))$ is dense in $A$. Fix $b\in (B^\beta)_+\setminus \{0\}$ with $\|b\|=1$ and $\epsilon>0$. Let $p \in A^{\rm p}$ be a projection whose stabilizer subgroup of $\alpha$ contains at least two canonical generating elements $s_1$, $s_2$ of $\mathbb{F}_\infty$. Choose $x\in A$ satisfying $E^{\mathcal{M}}(\gamma^{\mathcal{M}}(x))\approx_\epsilon p$. By applying the Powers argument [@Pow], [@HS] to $\gamma^{\mathcal{M}}(x) b=\gamma(x\otimes b) \in D$ by using $s_1$ and $s_2$ (cf. the proof of Theorem \[Thm:inter\] or [@Suzmin], Lemma 3.8), we obtain a sequence $g_1, \ldots, g_n$ in $\langle s_1, s_2 \rangle$ satisfying $$p\otimes b \approx_\epsilon \frac{1}{n}\sum_{i=1}^n u_{g_i} \gamma(x \otimes b) u_{g_i}^\ast =\frac{1}{n}\sum_{i=1}^n \gamma(\alpha_{g_i}(x) \otimes b).$$ Since $\epsilon>0$ is arbitrary, we obtain $p \otimes b \in \gamma(A \otimes b)$. 
(Note that $\gamma$ is isometric hence $\gamma(A \otimes b)$ is closed in $D$.) By condition (2) of $\alpha$ and [@Zha], we obtain $$A \otimes b \subset \gamma(A \otimes b)=\gamma^{\mathcal{M}}(A)b.$$ By Lemma \[Lem:simple\], one can choose a net $((v_{i, \lambda})_{i=1}^{n(\lambda)})_{\lambda \in \Lambda}$ of finite sequences in $B$ satisfying $$\sum_{i=1}^{n(\lambda)}v_{i, \lambda} v_{i, \lambda}^\ast \leq 1 \qquad{\rm for~ all~ }\lambda\in \Lambda,$$ $$\lim_{\lambda \in \Lambda} \sum_{i=1}^{n(\lambda)}v_{i, \lambda} b v_{i, \lambda}^\ast=1 \qquad {\rm in~ the~ strict~ topology~ of~} \mathcal{M}(D).$$ Now for any $a\in A$, choose $x\in A$ with $a\otimes b=\gamma^{\mathcal{M}}(x) b$. Then, for any $\lambda \in \Lambda$, $$a\otimes \left(\sum_{i=1}^{n(\lambda)}v_{i, \lambda} b v_{i, \lambda}^\ast \right)= \gamma^{\mathcal{M}}(x) \left(\sum_{i=1}^{n(\lambda)}v_{i, \lambda} b v_{i, \lambda}^\ast \right).$$ By letting $\lambda$ tend to infinity, we obtain $a=\gamma^{\mathcal{M}}(x)$. Applying the same argument to $\gamma^{-1}$, we obtain $A=\gamma^{\mathcal{M}}(A)$. Thus $\gamma^{\mathcal{M}}|_A$ defines an automorphism on $A$. Since $\gamma^{\mathcal{M}}(u_s)=u_s$ for all $s\in \mathbb{F}_\infty$, the automorphism $\gamma^{\mathcal{M}}|_A$ commutes with $\alpha$. Lemma \[Lem:comm\] therefore implies $\gamma^{\mathcal{M}}|_A={\mbox{\rm id}}_A$. Since $A \cdot C$ generates $D$, we conclude $\gamma={\mbox{\rm id}}_{D}$. We now construct rigid inclusions. We first recall the definition. \[Def:rigid\] An inclusion $A \subset B$ of [C$^\ast$]{}-algebras is said to be *rigid* if the identity map ${\mbox{\rm id}}_B$ is the only completely positive map $\Phi\colon B \rightarrow B$ satisfying $\Phi|_A={\mbox{\rm id}}_A$. By slightly modifying Proposition \[Prop:upert\] (under a stronger assumption), we obtain [C$^\ast$]{}-dynamical systems satisfying a stronger condition which is useful to study the rigidity of associated inclusions. 
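We record an immediate consequence of Definition \[Def:rigid\]: a rigid inclusion $A \subsetneq B$ admits no conditional expectation onto $A$. Indeed, a conditional expectation $E\colon B \rightarrow A$, viewed as a completely positive map $B \rightarrow B$, satisfies $$E|_A={\mbox{\rm id}}_A, \qquad E\neq {\mbox{\rm id}}_B,$$ which contradicts rigidity.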
For a separable [C$^\ast$]{}-algebra $A$, when we equip the automorphism group $\mathrm{Aut}(A)$ of $A$ with the point-norm topology, it forms a Polish group. (The point-norm topology of $\mathrm{Aut}(A)$ is the weakest topology on $\mathrm{Aut}(A)$ making the evaluation maps $\alpha \mapsto \alpha(a) \in A$ norm continuous for all $a\in A$.) Indeed, take a dense sequence $(a_n)_{n=1}^\infty$ in the unit ball of $A$. Then it is not hard to see that the metric $d$ on $\mathrm{Aut}(A)$ given by $$d(\alpha, \beta):= \sum_{n=1}^\infty \frac{1}{2^{n}}\left(\|\alpha(a_n)-\beta(a_n)\|+\|\alpha^{-1}(a_n)-\beta^{-1}(a_n)\|\right); \qquad \alpha, \beta \in\mathrm{Aut}(A),$$ confirms the statement. Denote by $\overline{{\rm Inn}}(A)$ the closure of the inner automorphism group ${\rm Inn}(A)$ in $\mathrm{Aut}(A)$. We say that a [C$^\ast$]{}-dynamical system $\alpha \colon \Gamma \curvearrowright A$ is *pointwise approximately inner* if $\alpha_s \in \overline{{\rm Inn}}(A)$ for all $s\in \Gamma$. \[Prop:upert2\] Let $\alpha \colon \mathbb{F}_\infty \curvearrowright A$ be a pointwise approximately inner [C$^\ast$]{}-dynamical system on a separable purely infinite simple [C$^\ast$]{}-algebra $A$. Then there exists an inner perturbation $\beta$ of $\alpha$ satisfying the following conditions. 1. The set $\{\beta_s:s\in \mathbb{F}_\infty\}$ is dense in $\overline{{\rm Inn}}(A)$. 2. Let $S_\beta$ denote the set of all $p \in A^{\rm p}$ whose stabilizer subgroup of $\beta$ contains at least two canonical generating elements of $\mathbb{F}_\infty$. Then $S_\beta$ is dense in $A^{\rm p}$. We split the canonical generating set $S$ of $\mathbb{F}_\infty$ into two infinite subsets: $S= S_1 \sqcup S_2$. We first perturb $\alpha_s$; $s\in S_2$ by inner automorphisms as in the proof of Proposition \[Prop:upert\] to ensure condition (2). We next choose a dense sequence $(\gamma_n)_{n=1}^\infty$ in $\overline{{\rm Inn}}(A)$. 
Fix a bijective map $f\colon \mathbb{N}\times \mathbb{N}\rightarrow S_1$. Since each $\alpha_s$ is approximately inner, there exist $v_s\in \mathcal{M}(A)^{\rm u}$; $s \in S_1$, satisfying $$\lim_{m\rightarrow \infty} {\mathop{\rm ad}}(v_{f({n, m})})\circ\alpha_{f(n, m)}=\gamma_n \qquad {\rm for~all~}n\in \mathbb{N}.$$ These unitary elements define the desired inner perturbation of $\alpha$. \[Cor:ameO\] There is an amenable action of $\mathbb{F}_\infty$ on the Cuntz algebra $\mathcal{O}_\infty$ satisfying conditions $(1)$ and $(2)$ in Proposition \[Prop:upert2\]. Recall from the proof of Theorem 5.1 of [@Suz19] that $\mathbb{F}_\infty$ admits an amenable action $\alpha$ on $\mathcal{O}_\infty$. It follows from the construction that $\alpha$ is pointwise approximately inner. Now applying Proposition \[Prop:upert2\] (and Lemma \[Lem:ame\]) to $\alpha$, we obtain the desired action. It follows from Lemma \[Lem:trans\] that condition (1) of Proposition \[Prop:upert2\] is stronger than condition (1) of Proposition \[Prop:upert\]. \[Lem:ucp\] Let $\alpha\colon \mathbb{F}_\infty \curvearrowright A$ be an action on a unital purely infinite simple [C$^\ast$]{}-algebra satisfying condition $(1)$ in Proposition \[Prop:upert2\]. Then there is no $\mathbb{F}_\infty$-equivariant unital completely positive map $\Phi \colon A \rightarrow A$ other than ${\mbox{\rm id}}_A$ $($that is, $\mathbb{C}\subset A$ is *$\mathbb{F}_\infty$-rigid*$)$. Let $\Phi$ be as in the statement. By the assumption on $\alpha$, all inner automorphisms are in the closure of $\{\alpha_s: s\in \mathbb{F}_\infty\}$ in $\mathrm{Aut}(A)$. Therefore, for any $u\in A^{\rm u}$ and any $x\in A$, we have $\Phi(uxu^\ast)=u\Phi(x)u^\ast$. Applying the equality to $x=p \in A^{\rm p}\setminus\{0, 1\}$ and unitary elements $u$ in $pAp \oplus (1-p)A(1-p) \subset A$, we obtain $\Phi(p)=u\Phi(p)u^\ast$. Thus $\Phi(p)$ commutes with $pAp \oplus (1-p)A(1-p)$. 
Since $A$ is simple, the corners $pAp$ and $(1-p)A(1-p)$ are unital and simple, hence have trivial centers, and therefore $$\Phi(p) = \lambda_1 p + \lambda_2 (1-p) \qquad {\rm for~ some~} \lambda_1, \lambda_2 \geq 0.$$ We will show that $\lambda_2=0$. Take $v\in A^{\rm u}$ which satisfies $q:=vpv^\ast \lneq 1- p$ (see [@Cun]). Then $$\Phi(q)= v\Phi(p) v^\ast=\lambda_1 q+ \lambda_2(1-q).$$ This yields $$\Phi(p+q)= \lambda_1 (p+q) + \lambda_2(1-p-q)+\lambda_2= (\lambda_1+\lambda_2)(p+q)+2\lambda_2(1-p-q).$$ By iterating this argument, for any $N\in \mathbb{N}$, one can find $r_N\in A^{\rm p}\setminus\{0, 1\}$ satisfying $$\Phi(r_N) =[\lambda_{1}+ (2^{N}-1)\lambda_2] r_N + 2^N\lambda_{2}(1-r_N).$$ Since $\Phi$ is contractive, this forces $\lambda_2 =0$. Thus $\Phi(p)\leq p$ for all $p\in A^{\rm p}$. Since $\Phi$ is unital, these inequalities imply $\Phi(p)=p$ for all $p\in A^{\rm p}$. Since $A^{\rm p}$ spans a dense subspace of $A$ [@Zha], we conclude $\Phi={\mbox{\rm id}}_A$. \[Thm:rigid\] Let $\alpha\colon \mathbb{F}_\infty \curvearrowright A$ be a [C$^\ast$]{}-dynamical system on a unital purely infinite simple [C$^\ast$]{}-algebra satisfying condition $(1)$ in Proposition \[Prop:upert2\]. Let $\beta \colon \mathbb{F}_\infty \curvearrowright B$ be a [C$^\ast$]{}-dynamical system on a simple [C$^\ast$]{}-algebra. Then the inclusion $$C:=B {\mathop{\rtimes _{{\mathrm r}, \beta}}} \mathbb{F}_\infty \subset (A\otimes B) {\mathop{\rtimes _{{\mathrm r}, \alpha \otimes \beta}}}\mathbb{F}_\infty:=D$$ is rigid. Let $\Phi \colon D \rightarrow D$ be a completely positive map with $\Phi|_{C}={\mbox{\rm id}}_C$. Observe that $C$ contains an approximate unit of $D$. Hence by Corollary 5.7 of [@Lan], $\Phi$ has a strictly continuous extension $\Phi^{\mathcal{M}} \colon \mathcal{M}(D) \rightarrow \mathcal{M}(D)$. Since $\Phi|_B={\mbox{\rm id}}_B$, standard arguments on multiplicative domains show that $$(E^\mathcal{M} \circ \Phi^{\mathcal{M}})(A)\subset \mathcal{M}(A\otimes B) \cap B'=A.$$ (For the proof of the last equality, see the proof of Proposition \[Prop:rigid\].)
Since $\Phi^{\mathcal{M}}(u_s)=u_s$ for all $s\in \mathbb{F}_\infty$, the unital completely positive map $$(E^\mathcal{M} \circ \Phi^{\mathcal{M}})|_A \colon A \rightarrow A$$ is $\mathbb{F}_\infty$-equivariant. Therefore Lemma \[Lem:ucp\] implies $(E^\mathcal{M} \circ \Phi^{\mathcal{M}})|_A={\mbox{\rm id}}_A$. Observe that for any $a\in A$ and any $x\in \mathcal{M}(D)$ satisfying $E^{\mathcal{M}}(x)=0$, we have $$\|a+x\|^2\geq \|E^{\mathcal{M}}((a+x)^\ast (a+x))\|=\|a\|^2+\|E^\mathcal{M}(x^\ast x)\|.$$ Since $E^\mathcal{M} \colon \mathcal{M}(D) \rightarrow \mathcal{M}(A\otimes B)$ is faithful, we obtain $\|a+x\|>\|a\|$ unless $x=0$. As both $E^\mathcal{M}$ and $\Phi^{\mathcal{M}}$ are contractive, the equality $(E^\mathcal{M} \circ \Phi^{\mathcal{M}})|_A={\mbox{\rm id}}_A$ implies that $\Phi^{\mathcal{M}}|_A={\mbox{\rm id}}_A$. Since $A\cdot C$ spans a dense subspace of $D$, we conclude $\Phi={\mbox{\rm id}}_D$. Now by combining Proposition \[Prop:upert2\] and Theorems \[Thm:inter\], \[Thm:rigid\], we obtain the Main Theorem. Before closing this section, we record the following elementary lemma on rigidity of [C$^\ast$]{}-algebra inclusions. This lemma will be used in the next section. \[Lem:rigidcorner\] Let $A \subset B$ be a rigid inclusion of unital purely infinite simple [C$^\ast$]{}-algebras. Let $p\in A^{\rm p}$. Then the inclusion $pAp \subset pBp$ is also rigid. Assume that the inclusion $pAp \subset pBp$ is not rigid. Take a completely positive map $\Phi \colon pBp \rightarrow pBp$ satisfying $\Phi|_{pAp}={\mbox{\rm id}}_{pAp}$ and $\Phi\neq {\mbox{\rm id}}_{pBp}$. Choose $v\in A$ satisfying $v^\ast v=1$, $vv^\ast \leq p$. Define $\Psi\colon B \rightarrow B$ to be $\Psi(x):= v^\ast\Phi(v x v^\ast)v$, $x\in B$. Then for any $x\in pBp$, since $vp \in pAp$, we obtain $$\Psi(x) = v^\ast\Phi(vpxpv^\ast )v=v^\ast vp\Phi(x)pv^\ast v=\Phi(x).$$ In particular, $\Psi\neq {\mbox{\rm id}}_B$.
Also, for any $a\in A$, as $v a v^\ast \in pAp$, we have $$\Psi(a)= v^\ast \Phi(va v^\ast)v = v^\ast v a v^\ast v =a.$$ In summary, we obtain $\Psi|_A={\mbox{\rm id}}_A$, $\Psi\neq {\mbox{\rm id}}_B$. Thus the inclusion $A\subset B$ is not rigid. Applications to Kirchberg algebras: proofs of Theorems \[Thmint:Main\] to \[Thmint:3\] {#Sec:Kir} ====================================================================================== We now apply the Main Theorem to obtain the main results. Let $\beta \colon \mathbb{F}_\infty \curvearrowright \mathcal{O}_\infty$ be an action obtained in Corollary \[Cor:ameO\]. We show that $B:=A {\mathop{\rtimes _{{\mathrm r}, \alpha}}} \mathbb{F}_\infty \subset C:=(A\otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, \alpha\otimes \beta}}}\mathbb{F}_\infty$ gives the desired ambient [C$^\ast$]{}-algebra. We first show that $C$ is a Kirchberg algebra. Clearly $C$ is separable. Since $A \otimes \mathcal{O}_\infty$ is simple, purely infinite, and $\alpha\otimes \beta$ is outer (because of its amenability and the fact that $\mathbb{F}_\infty$ has no non-trivial amenable normal subgroup), it follows from Kishimoto’s theorem [@Kis] that $C$ is purely infinite and simple (see e.g. Lemma 6.3 of [@Suz19b] for details). Since $A\otimes \mathcal{O}_\infty$ is nuclear, so is $C$ by the amenability of $\alpha \otimes \beta$. Thus $C$ is a Kirchberg algebra. By Theorem \[Thm:inter\], the inclusion indeed has no intermediate [C$^\ast$]{}-algebras. By Theorem \[Thm:rigid\], the inclusion is rigid. Since the inclusion $\mathbb{C}\subset \mathcal{O}_\infty$ is a KK-equivalence [@Cun], [@PV80], so is $A \subset A \otimes \mathcal{O}_\infty$. Now it follows from Theorem 16 of [@Pim] (see also [@PV]) that the inclusion $B\subset C$ is a KK-equivalence. (Proof: We apply the exact sequences in Theorem 16 of [@Pim] to a fixed free action of $\mathbb{F}_\infty$ on a countable tree. 
Observe that for any countable set $I$, the inclusion $\bigoplus_I A \subset \bigoplus_I (A \otimes \mathcal{O}_\infty)$ is a KK-equivalence. By the Five Lemma and naturality of the exact sequences, the inclusion map $\iota \colon B \rightarrow C$ induces group isomorphisms $$\varphi\colon \mathrm{KK}(C, B) \rightarrow \mathrm{KK}(B, B),\qquad \psi \colon \mathrm{KK}(C, B) \rightarrow \mathrm{KK}(C, C).$$ Put $x:= \varphi^{-1}(1_B)$, $y:=\psi^{-1}(1_C)$. It then follows from the definition that $$[\iota]\hat{\otimes}_{C} x =\varphi(x)=1_B,\qquad y\hat{\otimes}_{B}[\iota]= \psi(y)=1_C.$$ Thus $x=y$ and $\iota$ is a KK-equivalence.) \[Rem:gengr\] Recall that any discrete exact group $\Gamma$ admits an amenable action on a unital purely infinite simple nuclear [C$^\ast$]{}-algebra of density character $\sharp \Gamma$; see the proof of Proposition B in [@Suzeq]. For the existence of a nuclear minimal ambient [C$^\ast$]{}-algebra, our construction works for groups of the form $\mathbb{F}_\Lambda \ast \Lambda$ for any infinite group $\Lambda$ with the approximation property [@HK]. (However the resulting ambient algebras would be mysterious, cf.  [@Suz19b]). In particular, the reduced group [C$^\ast$]{}-algebras of uncountable free groups admit a nuclear minimal ambient [C$^\ast$]{}-algebra. Let $A$ be a Kirchberg algebra. We have constructed, in the proof of the Proposition in [@Suzfp] (see also the proof of Theorem 5.1 in [@Suz19]), an action $\alpha \colon \mathbb{F}_\infty \curvearrowright C$ on a unital Kirchberg algebra $C$ in the bootstrap class whose reduced crossed product $C {\mathop{\rtimes _{{\mathrm r}, \alpha}}}\mathbb{F}_\infty$ is non-nuclear, purely infinite simple, and KK-equivalent to $\mathcal{O}_\infty$. Let $\beta \colon \mathbb{F}_\infty \curvearrowright \mathcal{O}_\infty$ be an amenable action obtained in Corollary \[Cor:ameO\]. 
As shown in the proofs of the Proposition in [@Suzfp] and Theorem 5.1 in [@Suz19] (by using [@Kir], [@Phi]), the crossed product $(C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, \alpha\otimes \beta}}}\mathbb{F}_\infty$ is stably isomorphic to $\mathcal{O}_\infty$. Fix a projection $$p \in C {\mathop{\rtimes _{{\mathrm r}, \alpha}}}\mathbb{F}_\infty$$ which generates K$_0((C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, \alpha\otimes \beta}}}\mathbb{F}_\infty) \cong \mathbb{Z}$. (This is possible by [@PV] and [@Cun].) Denote by $1$ the trivial action of $\mathbb{F}_\infty$ on $A$. Then by the Kirchberg $\mathcal{O}_\infty$-absorption theorem [@KP], the corner $p[(A\otimes C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, 1 \otimes \alpha\otimes \beta}}}\mathbb{F}_\infty]p$ is isomorphic to $A$. The desired subalgebra of $A \cong p[(A\otimes C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, 1 \otimes \alpha\otimes \beta}}}\mathbb{F}_\infty]p$ is given by $$p[(A\otimes C) {\mathop{\rtimes _{{\mathrm r}, 1 \otimes \alpha}}}\mathbb{F}_\infty]p \subset p[(A\otimes C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, 1 \otimes \alpha\otimes \beta}}}\mathbb{F}_\infty]p.$$ Indeed, by Theorem \[Thm:inter\] and [@Suz19], Lemma 5.2, the inclusion has no intermediate [C$^\ast$]{}-algebras. By Theorem \[Thm:rigid\] and Lemma \[Lem:rigidcorner\], the inclusion is rigid. Note that the corner $p[(A\otimes C) {\mathop{\rtimes _{{\mathrm r}, 1 \otimes \alpha}}}\mathbb{F}_\infty]p$ is isomorphic to $A\otimes p(C{\mathop{\rtimes _{{\mathrm r}, \alpha}}}\mathbb{F}_\infty)p$, which is not nuclear by the choice of $\alpha$. By [@Pim] or [@PV] (see the proof of Theorem \[Thmint:Main\] for details), the inclusion gives a KK-equivalence. 
We next construct an ambient non-exact [C$^\ast$]{}-algebra of $p[(A \otimes C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha\otimes \beta}}}\mathbb{F}_\infty]p \cong A$ as in the statement. We first take a non-exact unital simple separable [C$^\ast$]{}-algebra $D_0$ such that the inclusion $\mathbb{C}\subset D_0$ is a KK-equivalence. (Example: Take a unital non-exact separable [C$^\ast$]{}-algebra $P_0$. Set $P:=\{f\in C([0, 1]^2, P_0):f(t, 0)\in \mathbb{C} {\rm~for~all~}t\in [0, 1]\}$. Note that $P$ is non-exact and homotopy equivalent to $\mathbb{C}$. Take a faithful state $\varphi$ on $P$ satisfying the conditions in Theorem 2 of [@Dy]. By Exercise 4.8.1 in [@BO], there is a Hilbert $P$-bimodule whose Toeplitz–Pimsner algebra $D_0$ [@Pim] is isomorphic to the reduced free product $(P, \varphi) \ast (\mathcal{T}, \omega)$. Here $\mathcal{T}$ is the Toeplitz algebra and $\omega$ is a non-degenerate state on $\mathcal{T}$. By Theorem 4.4 of [@Pim2], the inclusion $P \subset D_0$ is a KK-equivalence. By Theorem 2 of [@Dy], $D_0$ is simple. Thus $D_0$ gives the desired [C$^\ast$]{}-algebra.) Set $D:= D_0 \otimes \mathcal{O}_\infty$. Then $D$ is separable, purely infinite, simple, and the inclusion $\mathbb{C} \subset D$ gives a KK-equivalence. By applying Proposition \[Prop:upert2\] to the trivial action $\mathbb{F}_\infty \curvearrowright D$, we obtain an (inner) action $\gamma \colon \mathbb{F}_\infty \curvearrowright D$ satisfying conditions (1), (2) in Proposition \[Prop:upert2\]. For the same reasons as in the previous paragraph, the inclusion $$p[(A\otimes C \otimes \mathcal{O}_\infty) {\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha\otimes \beta}}}\mathbb{F}_\infty]p \subset p[(A\otimes C \otimes \mathcal{O}_\infty \otimes D) {\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha\otimes \beta\otimes \gamma}}}\mathbb{F}_\infty]p$$ is rigid, gives a KK-equivalence, and has no intermediate [C$^\ast$]{}-algebras.
The non-exactness of the largest [C$^\ast$]{}-algebra is obvious. Finally, by Kirchberg’s theorem ([@Ror], Theorem 4.1.10 (i)), all these [C$^\ast$]{}-algebras are purely infinite. By a method similar to that of the Proposition in [@Suzfp] (using [@Oz]), one can arrange that the smallest algebra in Theorem \[Thmint:Kir\] does not have the completely bounded approximation property (see Section 12.3 of [@BO] for the definition). Recall that in the proof of [@Suz19], Theorem 5.1, we obtained an amenable action $\alpha \colon \mathbb{F}_\infty \curvearrowright D$ on a unital Kirchberg algebra and a projection $p\in D$ such that $p(D {\mathop{\rtimes _{{\mathrm r}, \alpha}}} \mathbb{F}_\infty)p$ is isomorphic to $\mathcal{O}_\infty$. Denote by $1\colon \mathbb{F}_\infty \curvearrowright A$ the trivial action on $A$. By the Kirchberg $\mathcal{O}_\infty$-absorption theorem [@KP], $p((A\otimes D) {\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha}}} \mathbb{F}_\infty)p \cong A$. Let $B$ be a given unital separable [C$^\ast$]{}-algebra. Choose a faithful state $\varphi$ on $B$. Let $\psi$ denote the state on $C([0, 1])$ defined by the Riemann integral. Then by Theorem 2 of [@Dy], the reduced free product $$P_0:=(B, \varphi) \ast (C[0, 1], \psi)$$ is simple. Note that by Theorem 4.8.5 of [@BO], the canonical inclusion $B \subset P_0$ admits a faithful conditional expectation (as $\psi$ is faithful). Set $$P:=P_0 \otimes \mathcal{O}_\infty.$$ Then $P$ is unital, simple, separable, and purely infinite. The canonical inclusion $B\subset P$ still admits a faithful conditional expectation. Applying Proposition \[Prop:upert2\] to the trivial action of $\mathbb{F}_\infty$ on $P$, we obtain an (inner) action $\beta\colon \mathbb{F}_\infty \curvearrowright P$ satisfying conditions (1) and (2) in Proposition \[Prop:upert2\].
Now define $$C:= p[(A \otimes D \otimes P){\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha \otimes \beta}}} \mathbb{F}_\infty]p.$$ Observe that the map $x\in P \mapsto xp \in C$ defines a [C$^\ast$]{}-algebra embedding. We identify $B\subset P$ with [C$^\ast$]{}-subalgebras of $C$ via this embedding. We now show that $B\subset C$ admits a faithful conditional expectation. Since $p \in D$, the canonical conditional expectation $$E\colon (A \otimes D \otimes P){\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha \otimes \beta}}} \mathbb{F}_\infty \rightarrow A\otimes D \otimes P$$ restricts to the faithful conditional expectation $\Phi \colon C \rightarrow p(A\otimes D\otimes P)p$. Any faithful state $\omega$ on $p(A\otimes D)p$ induces a faithful conditional expectation $\Psi \colon p(A \otimes D\otimes P)p \rightarrow P$ by the formula $\Psi(p(x \otimes y)p)=\omega(pxp)y$; $x\in A\otimes D$, $y\in P$. The composite $\Psi\circ \Phi \colon C \rightarrow P$ gives a faithful conditional expectation. Since $B\subset P$ has a faithful conditional expectation, so does $B \subset C$. It follows from Theorem \[Thm:inter\] and [@Suz19], Lemma 5.2 that the inclusion $$A \cong p((A\otimes D) {\mathop{\rtimes _{{\mathrm r}, 1\otimes \alpha}}} \mathbb{F}_\infty)p \subset C$$ has no intermediate [C$^\ast$]{}-algebras. By Theorem \[Thm:rigid\] and Lemma \[Lem:rigidcorner\], the inclusion $A\subset C$ is rigid. \[Rem:LF\] By a similar method to the proof of Theorem \[Thmint:3\] (using [@Dy94] instead of [@Dy]), one can confirm the following property for the free group factor $L(\mathbb{F}_\infty)$: Any von Neumann algebra $M$ with separable predual embeds into a factor $N$ with a normal faithful conditional expectation which contains $L(\mathbb{F}_\infty)$ as a rigid maximal von Neumann subalgebra.
Here we say that a von Neumann subalgebra $M\subset N$ is *rigid* if ${\mbox{\rm id}}_M$ is the only normal completely positive map $\Phi \colon N \rightarrow N$ satisfying $\Phi|_M={\mbox{\rm id}}_M$. Tensor splitting theorem for non-unital simple [C$^\ast$]{}-algebras ==================================================================== Here we record a few necessary and useful technical lemmas on non-unital [C$^\ast$]{}-algebras. Although these results may be known to experts, we are not aware of an appropriate reference. As a result of these lemmas, we obtain the tensor splitting theorem (cf. [@GK], [@Zac], [@Zsi]) for non-unital simple [C$^\ast$]{}-algebras. An element of a [C$^\ast$]{}-algebra $A$ is said to be [*full*]{} if it generates $A$ as a closed ideal of $A$. \[Lem:full\] Let $A$ be a [C$^\ast$]{}-algebra. Let $a$ be a full positive element of $A$. Then for any finite subset $F$ of $A$ and any $\epsilon>0$, there is a sequence $x_1, \ldots, x_n\in A$ satisfying $$\|\sum_{i=1}^n x_i a x_i^\ast\|\leq 1 \qquad {\rm~and~} \qquad \| \sum_{i=1}^n x_i a x_i^\ast b-b\|<\epsilon \quad {\rm ~for~} b\in F.$$ Observe that for any sequence $x_1, \ldots, x_n\in A$ and any $b\in F$, the [C$^\ast$]{}-norm condition implies $$\| \sum_{i=1}^n x_i a x_i^\ast b-b\| \leq \| \sum_{i=1}^n x_i a x_i^\ast c-c\|,$$ where $c:= \left(\sum_{d\in F}d d^\ast \right)^{1/2}$. Therefore we only need to show the statement when $F$ is a singleton in $A_+$. By the fullness of $a$, we may further assume that the element $b$ in $F$ is of the form $\sum_{i=1}^n y_i a z_i$; $y_1, \ldots, y_n, z_1, \ldots, z_n \in A$. In this case, we have $$b^2 = b^\ast b =\sum_{i, j=1}^n z_i^\ast a y_i^\ast y_j a z_j \leq C \sum_{i=1}^n z_i^\ast a z_i,$$ where $C:= \| a\| \|(y_i^\ast y_j)_{1\leq i, j \leq n}\|_{\mathbb{M}_n(A)}$. Put $w:= \left(\sum_{i=1}^n z_i^\ast a z_i \right)^{1/2}$.
Choose a sequence $(f_k)_{k=1}^\infty$ in $C_0(]0, \infty[)_+$ satisfying $t f_k(t)\leq 1$ for all $k\in \mathbb{N}$ and all $t\in [0, \infty[$, and $\lim_{k \rightarrow \infty} tf_k(t)=1$ uniformly on compact subsets of $]0, \infty[$. Then, for each $k\in \mathbb{N}$, we have $$\sum_{i=1}^n f_k(w)z_i^\ast a z_i f_k(w) = f_k(w)^2 w^2\leq 1,$$ $$\| \sum_{i=1}^n f_k(w) z_i^\ast a z_i f_k(w) b-b\| \leq \sqrt{C} \| ( w^2 f_k(w)^2-1)w\|.$$ The last term tends to zero as $k\rightarrow \infty$. Therefore, for a sufficiently large $N\in \mathbb{N}$, the sequence $(f_N(w)z_i^\ast)_{i=1}^n$ satisfies the required conditions. For simple [C$^\ast$]{}-algebras, one can strengthen Lemma \[Lem:full\] as follows. \[Lem:simple\] Let $A$ be a simple [C$^\ast$]{}-algebra. Let $a\in A_+ \setminus \{0\}$. Then for any $b\in A_+$ and any $\epsilon>0$, there is a sequence $x_1, \ldots, x_n\in A$ satisfying $$\|\sum_{i=1}^n x_i x_i^\ast\| \leq \|a\|^{-1}\|b\|,\qquad \sum_{i=1}^n x_i a x_i^\ast \approx_\epsilon b.$$ We may assume $\|a\|=\|b\|=1$. Take $f \in C([0, 1])_+$ satisfying $f(1)\neq 0$ and $\operatorname{supp}(f)\subset [1- \epsilon/2, 1]$. Then $$f(a)\neq 0,\qquad \left(1- \frac{\epsilon}{2}\right) f(a)^2 \leq f(a) a f(a) \leq f(a)^2.$$ Applying Lemma \[Lem:full\] to $f(a)^2$ and $F=\{b^{1/2}\}$, we obtain a sequence $y_1, \ldots, y_n \in A$ satisfying $$\|\sum_{i=1}^n y_i f(a)^2 y_i^\ast \|\leq 1,\qquad \|\sum_{i=1}^n y_i f(a)^2 y_i^\ast b^{\frac{1}{2}}-b^{\frac{1}{2}}\|< \frac{\epsilon}{2}.$$ Set $x_i:=b^{1/2}y_i f(a) $ for $i=1, \ldots, n$. Then $$\sum_{i=1}^n x_ix_i^\ast = \sum_{i=1}^n b^{\frac{1}{2}} y_i f(a)^2 y_i^\ast b^{\frac{1}{2}} \leq 1.$$ Straightforward estimations show that $$\sum_{i=1}^n x_i a x_i^\ast \approx_{ \epsilon/2} \sum_{i=1}^n b^{\frac{1}{2}} y_i f(a)^2 y_i^\ast b^{\frac{1}{2}} \approx_{ \epsilon/2} b.$$ Therefore $x_1, \ldots, x_n$ form the desired sequence. 
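For concreteness, we note one admissible choice of the functions $f_k$ appearing in the proof of Lemma \[Lem:full\] (this particular formula is our own illustration; any functions with the stated properties serve equally well): $$f_k(t):=\min\Big(kt, \frac{1}{t}\Big), \qquad t\in\, ]0, \infty[.$$ Each $f_k$ belongs to $C_0(]0, \infty[)_+$ since $f_k(t)\leq kt\rightarrow 0$ as $t\rightarrow 0$ and $f_k(t)\leq t^{-1}\rightarrow 0$ as $t\rightarrow \infty$. Moreover $tf_k(t)=\min(kt^2, 1)\leq 1$ for all $t$, and on any compact subset $[a, b]\subset\, ]0, \infty[$ we have $tf_k(t)=1$ as soon as $k\geq a^{-2}$, so $tf_k(t)\rightarrow 1$ uniformly on compact subsets of $]0, \infty[$.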
As an application of Lemma \[Lem:simple\], one can remove the unital condition from the tensor splitting theorem [@Zac], [@Zsi] (cf. [@GK]). For a [C$^\ast$]{}-subalgebra $C$ of $A\otimes B$, we define the subset $\mathcal{S}_A(C)$ of $B$ to be $$\mathcal{S}_A(C):=\left\{(\varphi \otimes {\mbox{\rm id}}_B)(c): \varphi \in A^\ast, c\in C\right\}.$$ \[Thm:TS\] Let $A$ be a simple [C$^\ast$]{}-algebra and $B$ be a [C$^\ast$]{}-algebra. Let $C$ be a [C$^\ast$]{}-subalgebra of $A\otimes B$ closed under multiplications by $A$. Then $\mathcal{S}_A(C)$ forms a [C$^\ast$]{}-subalgebra of $C$ and satisfies $A \otimes \mathcal{S}_A(C) \subset C$. Thus, when $A$ satisfies the strong operator approximation property [@HK] or when the inclusion $\mathcal{S}_A(C) \subset B$ admits a completely bounded projection, we have $C= A\otimes \mathcal{S}_A(C)$. To show the first statement, it suffices to show the following claim. For any pure state $\varphi$ on $A$ and any $a\in A_+$, $c\in C$, with $b := (\varphi \otimes {\mbox{\rm id}}_B)(c)$, we have $a\otimes b \in C$. Indeed the claim implies that, since the set of pure states on $A$ spans a weak-$\ast$ dense subspace of $A^\ast$ and $A_+$ spans $A$, for any $a\in A$ and any $\psi \in A^\ast$ with $\psi(a)\neq 0$, the subspace $X:=(\psi \otimes {\mbox{\rm id}}_B)(C)$ of $B$ satisfies $a \otimes X =(a\otimes B) \cap C$. This implies $X=\mathcal{S}_A(C)$, and proves the first statement. To show the claim, for any $\epsilon>0$, by the Akemann–Anderson–Pedersen excision theorem [@AAP] (Theorem 1.4.10 in [@BO]), one can take $e\in A_+$ with $\|e\|=1$, $e c e \approx_{\epsilon} e^2\otimes b$. By Lemma \[Lem:simple\], there is a sequence $x_1, \ldots, x_n \in A$ satisfying $\sum_{i=1}^n x_ie c ex_i^\ast \approx_{\epsilon} a\otimes b$. The left term is contained in $C$ by assumption. Thus $a\otimes b \in C$. For the last statement, when $A$ satisfies the strong operator approximation property, the claim follows from Theorem 12.4.4 in [@BO]. 
When we have a completely bounded projection $P\colon B \rightarrow \mathcal{S}_A(C)$, it is not hard to see that for any $\varphi \in A^\ast$, $(\varphi\otimes {\mbox{\rm id}}_B)(({\mbox{\rm id}}_A \otimes P)(c)-c)=0$ for all $c\in C$. This proves $({\mbox{\rm id}}_A \otimes P)|_C={\mbox{\rm id}}_C$ and thus $C\subset A \otimes \mathcal{S}_A(C)$. Acknowledgements {#acknowledgements .unnumbered} ---------------- Parts of the present work were greatly improved during the author’s visit to the Research Center for Operator Algebras (Shanghai) for the conference “Special Week on Operator Algebras 2019”. He is grateful to the organizers of the conference for their kind invitation. This work was supported by JSPS KAKENHI Early-Career Scientists (No. 19K14550) and tenure track funds of Nagoya University. [99]{} C.  A. Akemann, J. Anderson, G.  K.  Pedersen, [*Excising states of [C$^\ast$]{}-algebras.*]{} Canad.  J.  Math., [**38**]{} (1986), 1239–1260. C.  Anantharaman-Delaroche, [*Action moyennable d’un groupe localement compact sur une algèbre de von Neumann.*]{} Math.  Scand.  [**45**]{} (1979), 289–304. C. Anantharaman-Delaroche, [*Systèmes dynamiques non commutatifs et moyennabilité.*]{} Math. Ann. [**279**]{} (1987), 297–315. C. Anantharaman-Delaroche, [*Amenability and exactness for dynamical systems and their [C$^\ast$]{}-algebras.*]{} Trans. Amer. Math. Soc. [**354**]{} (2002), no. 10, 4153–4178. P.  Baum, A.  Connes, N.  Higson, [*Classifying space for proper actions and $K$-theory of group [C$^\ast$]{}-algebras.*]{} Contemp.  Math.  [**167**]{} (1994), 241–291. B. Blackadar, [*K-Theory for operator algebras.*]{} Second edition, Mathematical Sciences Research Institute Publications 5 (1998), Berkeley, CA. E.  Breuillard, M.  Kalantar, M.  Kennedy, N. Ozawa, [*[C$^\ast$]{}-simplicity and the unique trace property for discrete groups.*]{} Publ.  Math.  I.H.É.S.  [**126**]{} (2017), 35–71. L.  G.  Brown, G.  K.  Pedersen, [*[C$^\ast$]{}-algebras of real rank zero.*]{} J.  Funct.  
Anal.  [**99**]{} (1991), 131–149. N.  P.  Brown, N. Ozawa, [*[C$^\ast$]{}-algebras and finite-dimensional approximations.*]{} Graduate Studies in Mathematics [**88**]{}. American Mathematical Society, Providence, RI, 2008. I.  Chifan, S.  Das, [*Rigidity results for von Neumann algebras arising from mixing extensions of profinite actions of groups on probability spaces.*]{} Preprint, arXiv:1903.07143. J. Cuntz, [*K-theory for certain [C$^\ast$]{}-algebras.*]{} Ann. of Math.  [**113**]{} (1981), 181–197. M. Dadarlat, [*Nonnuclear subalgebras of AF-algebras.*]{} Amer.  J.  Math.  [**122**]{} (2000), no. 2, 581–597. M.  Dadarlat, U.  Pennig, [*A Dixmier–Douady theory for strongly self-absorbing [C$^\ast$]{}-algebras.*]{} J.  reine angew.  Math.  [**718**]{} (2016), 153–181. K.  Dykema, [*Factoriality and Connes’ invariant $T(M)$ for free products of von Neumann algebras.*]{} J.  reine angew.  Math., [**450**]{} (1994), 159–180. K.  Dykema, [*Simplicity and the stable rank of some free product [C$^\ast$]{}-algebras.*]{} Trans.  Amer.  Math.  Soc.  [**351**]{} (1999), 1–40. G.  A.  Elliott, G.  Gong, H. Lin, Z.  Niu, [*On the classification of simple amenable [C$^\ast$]{}-algebras with finite decomposition rank II.*]{} Preprint, arXiv:1507.03437. L.  Ge, [*On “Problems on von Neumann algebras by R.  Kadison, 1967”.*]{} Acta Math.  Sin.  [**19**]{} (2003), no. 3, 619–624. L.  Ge, R.  Kadison, [*On tensor products of von Neumann algebras.*]{} Invent.  Math.  [**123**]{} (1996), 453–466. U.  Haagerup, J.  Kraus, [*Approximation properties for group [C$^\ast$]{}-algebras and group von Neumann algebras.*]{} Trans.  Amer.  Math.  Soc.  [**344**]{} (1994), 667–699. M.  Hamana. [*Injective envelopes of operator systems.*]{} Publ.  Res.  Inst.  Math.  Sci., [**15**]{}(3) (1979), 773–785. M.  Hamana, [*Injective envelopes of [C$^\ast$]{}-algebras.*]{} J.  Math.  Soc.  Japan [**31**]{} (1979), 181–197. M.  
Hamana, [*Injective envelopes of [C$^\ast$]{}-dynamical systems.*]{} Tohoku Math.  J.  (2) [**37**]{} (1985), 463–487. P.  de la Harpe, G.  Skandalis, [*Powers’ property and simple [C$^\ast$]{}-algebras.*]{} Math.  Ann.  [**273**]{} (1986), 241–250. N.  Higson, [*Bivariant $K$-theory and the Novikov conjecture.*]{} Geom.  Funct.  Anal.  [**10**]{} (2000), no. 3, 563–581. N.  Higson, G.  Kasparov. [*$E$-theory and $KK$-theory for groups which act properly and isometrically on Hilbert space.*]{} Invent.  Math., [**144**]{}(1) (2001), 23–74. M.  Izumi, R.  Longo, S.  Popa, [*A Galois correspondence for compact groups of automorphisms of von Neumann algebras with a generalization to Kac algebras.*]{} J.  Funct.  Anal.  [**155**]{} (1998), no. 1, 25–63. M. Izumi, H. Matui, [*Poly-$\mathbb{Z}$ group actions on Kirchberg algebras I.*]{} To appear in Int. Math. Res. Not., arXiv:1810.05850. M. Izumi, H. Matui, [*Poly-$\mathbb{Z}$ group actions on Kirchberg algebras II.*]{} Preprint, arXiv:1906.03818. M.  Kalantar, M.  Kennedy, [*Boundaries of reduced [C$^\ast$]{}-algebras of discrete groups.*]{} J.  reine angew.  Math.  [**727**]{} (2017), 247–267. E.  Kirchberg, [*The classification of purely infinite [C$^\ast$]{}-algebras using Kasparov’s theory.*]{} Preprint. E.  Kirchberg, N.  C.  Phillips, [*Embedding of exact [C$^\ast$]{}-algebras in the Cuntz algebra $\mathcal{O}_2$.*]{} J.  reine angew.  Math.  [**525**]{} (2000), 17–53. A. Kishimoto, [*Outer automorphisms and reduced crossed products of simple [C$^\ast$]{}-algebras.*]{} Comm.  Math.  Phys.  [**81**]{} (1981), no. 3, 429–435. A.  Kishimoto, N.  Ozawa, S.  Sakai, [*Homogeneity of the pure state space of a separable [C$^\ast$]{}-algebra.*]{} Canad.  Math.  Bull.  [**46**]{} (2003), 365–372. E.  C.  Lance, [*Hilbert [C$^\ast$]{}-modules: a toolkit for operator algebraists.*]{} LMS Lecture Note Series [**210**]{}, Cambridge University Press, Cambridge, 1995. R.  Longo, [*Simple injective subfactors.*]{} Adv.  Math.  
[**63**]{} (1987), 152–171. H.  Matui, Y.  Sato, [*Decomposition rank of UHF-absorbing [C$^\ast$]{}-algebras.*]{} Duke Math.  J.  [**163**]{} (2014), no. 14, 2687–2708. S.  Neshveyev, E.  St[ø]{}rmer, [*Ergodic theory and maximal abelian subalgebras of the hyperfinite factor.*]{} J.  Funct.  Anal., [**195**]{} (2002), no. 2, 239–261. N. Ozawa, [*Boundaries of reduced free group [C$^\ast$]{}-algebras.*]{} Bull.  London Math.  Soc. [**39**]{} (2007), 35–38. N. Ozawa, [*Examples of groups which are not weakly amenable.*]{} Kyoto J.  Math., [**52**]{} (2012), 333–344. G. Pedersen, [*[C$^\ast$]{}-algebras and their automorphism groups.*]{} 2nd Edition. N.  C.  Phillips, [*A classification theorem for nuclear purely infinite simple [C$^\ast$]{}-algebras.*]{} Doc.  Math.  [**5**]{} (2000), 49–114. M. V.  Pimsner, [*KK-groups of crossed products by groups acting on trees.*]{} Invent.  Math. [**86**]{} (1986), no. 3, 603–634. M. V.  Pimsner, [*A class of [C$^\ast$]{}-algebras generalizing both Cuntz–Krieger algebras and crossed products by $\mathbb{Z}$.*]{} Free probability theory, 189–212, Fields Inst. Commun., 12, Amer. Math. Soc., Providence, RI, 1997. M. Pimsner, D. Voiculescu, [*Exact sequences for K-groups and Ext-groups of certain cross-products of [C$^\ast$]{}-algebras.*]{} J. Operator Theory, [**4**]{} (1980), 93–118. M.  Pimsner, D.  Voiculescu, [*K-groups of reduced crossed products by free groups.*]{} J.  Operator Theory [**8**]{} (1982), 131–156. S.  Popa, [*On a problem of R.V.  Kadison on maximal abelian $\ast$-subalgebras in factors.*]{} Invent.  Math.  [**65**]{} (1981), 269–281. S. Popa, [*Maximal injective subalgebras in factors associated with free groups.*]{} Adv.  Math.  [**50**]{} (1983), 27–48. S.  Popa, [*Deformation and rigidity for group actions and von Neumann algebras.*]{} Proceedings of ICM 2006 Vol. I, 445–477. R.  T.  Powers, [*Simplicity of the [C$^\ast$]{}-algebra associated with the free group on two generators.*]{} Duke Math.  J.  
[**42**]{} (1975), 151–156. M.  R[ø]{}rdam, [*Classification of nuclear, simple [C$^\ast$]{}-algebras.*]{} vol. 126 of Encyclopaedia Math.  Sci., Springer, Berlin, 2002, 1–145. Y. Suzuki, [*Group [C$^\ast$]{}-algebras as decreasing intersection of nuclear [C$^\ast$]{}-algebras.*]{} Amer.  J.  Math.  [**139**]{} (2017), no. 3, 681–705. Y. Suzuki, [*Minimal ambient nuclear [C$^\ast$]{}-algebras.*]{} Adv. Math. [**304**]{} (2017), 421–433. Y. Suzuki, [*Simple equivariant [C$^\ast$]{}-algebras whose full and reduced crossed products coincide.*]{} To appear in J.  Noncommut.  Geom., arXiv:1801.06949. Y. Suzuki, [*Complete descriptions of intermediate operator algebras by intermediate extensions of dynamical systems.*]{} To appear in Comm. Math. Phys., arXiv:1805.02077. Y. Suzuki, [*Rigid sides of approximately finite dimensional simple operator algebras in non-separable category.*]{} To appear in Int. Math. Res. Not., arXiv:1809.08810. Y. Suzuki, [*On pathological properties of fixed point algebras in Kirchberg algebras.*]{} To appear in Proc.  Roy.  Soc.  Edinburgh Sect.  A, arXiv:1905.13004v2. A.  Tikuisis, S.  White, W.  Winter, [*Quasidiagonality of nuclear [C$^\ast$]{}-algebras.*]{} Ann.  of Math. (2) [**185**]{} (2017), 229–-284. W.  Winter, [*Structure of nuclear [C$^\ast$]{}-algebras: From quasidiagonality to classification, and back again.*]{} Proc.  Int.  Congr.  Math.  (2017), 1797–1820. J. Zacharias, [*Splitting for subalgebras of tensor products.*]{} Proc.  Amer.  Math.  Soc.  [**129**]{} (2001), 407–413. S.  Zhang, [*A property of purely infinite simple [C$^\ast$]{}-algebras.*]{} Proc.  Amer.  Math.  Soc.  [**109**]{} (1990), 717–720. L.  Zsido, [*A criterion for splitting [C$^\ast$]{}-algebras in tensor products.*]{} Proc. Amer.  Math.  Soc.  [**128**]{} (2000), 2001–2006.
--- author: - | L. Martínez Alonso$^{1}$ and E. Medina$^{2}$\ *$^1$ Departamento de Física Teórica II, Universidad Complutense*\ *E28040 Madrid, Spain*\ *$^2$ Departamento de Matemáticas, Universidad de Cádiz*\ *E11510 Puerto Real, Cádiz, Spain* title: 'A common integrable structure in the hermitian matrix model and Hele-Shaw flows [^1]' --- Introduction ============ The Toda hierarchy represents a relevant integrable structure which emerges in several random matrix models [@ger]-[@avm]. Thus, the partition functions $$\label{1} Z_N(\mbox{Hermitian})=\int {\mathrm{d}}H \exp\Big(\mbox{tr}(\sum_{k\geq 1}t_k\,H^k)\Big),$$ $$\label{2} Z_N(\mbox{Normal})=\int {\mathrm{d}}M\,{\mathrm{d}}M^{\dagger} \exp\Big(\mbox{tr}\,(M\,M^{\dagger}+\sum_{k\geq 1}(t_k\,M^k+\bar{t}_k\,M^{\dagger\,k}))\Big),$$ of the hermitian ($H=H^{\dagger}$) and the normal matrix models ($[M,M^{\dagger}]=0$), where $N$ is the matrix dimension, are tau-functions of the 1-Toda and 2-Toda hierarchy, respectively. As a consequence of this connection new facets of the Toda hierarchy have been discovered. Thus the analysis of the large $N$-limit of the Hermitian matrix model led to the introduction of an interpolated continuous version of the 2-Toda hierarchy: the *dispersionful* 2-Toda hierarchy (see for instance [@tt]). On the other hand, the leading contribution to the large $N$-limit (planar contribution) motivated the introduction of a *classical* version of the Toda hierarchy [@tt] which is known as the *dispersionless* 2-Toda (d2-Toda) hierarchy. Laplacian growth processes describe evolutions of two-dimensional domains driven by harmonic fields. It was shown in [@zab1] that the d2-Toda is a relevant integrable structure in Laplacian growth problems and conformal maps dynamics. For example, if a given analytic curve $\gamma\,(z=z(p),\,|p|=1)$ is the boundary of a simply-connected bounded domain, then $\gamma$ evolves with respect to its harmonic moments according to a solution of the d2-Toda hierarchy. 
These solutions are characterized by the string equations $$\label{11} \bar{z}=m,\quad \overline{m}=-z.$$ Here $(z,m)$ and $(\bar{z},\bar{m})$ denote the two pairs of Lax-Orlov operators of the d2-Toda hierarchy. As it was noticed in [@zab1]-[@wz], this integrable structure also emerges in the planar limit of the normal matrix model and describes the evolution of the support of eigenvalues under a change of the parameters $t_k$ of the potential. The present paper is motivated by the recent discovery [@lee] of an integrable structure provided by the dispersionless AKNS hierarchy which describes the bubble break-off in Hele-Shaw flows. In this work we prove that this integrable structure is also characterized by the solution of a pair of string equations $$\label{trii} z=\bar{z},\quad m=\overline{m},$$ of the d2-Toda hierarchy. Since the system describes the planar limit of , it constitutes a common integrable structure arising in the Hermitian matrix model and the theory of Hele-Shaw flows. Our strategy is inspired by previous results [@mel3]-[@mano2] on solution methods for dispersionless string equations. We also develop some useful standard technology of the theory of Lax equations in the context of the d2-Toda hierarchy. The paper is organized as follows: In the next section the basic theory of the d2-Toda hierarchy, the method of string equations and the solution of are discussed. In Section 3 we show how the solution of appears in the planar limit of the Hermitian matrix model and the Hele-Shaw bubble break-off processes studied in [@lee]. 
The dispersionless Toda hierarchy ================================== String equations in the d2-Toda hierarchy ----------------------------------------- The dispersionless d2-Toda hierarchy[@tt] can be formulated in terms of two pairs $(z,m)$ and $(\bar{z},\overline{m})$ of Lax-Orlov functions, where $z$ and $\bar{z}$ are series in a complex variable $p$ of the form $$\label{d0a} z=p+u+\dfrac{u_1}{p}+\cdots,\quad \bar{z}=\dfrac{v}{p}+v_0+v_1\,p+\cdots,$$ while $m$ and $\bar{m}$ are series in $z$ and $\bar{z}$ of the form $$\label{act0} m:=\sum_{j=1}^\infty j\, t_j z^{j-1}+\dfrac{x}{z}+\sum_{j\geq 1}\dfrac{S_{j+1}}{z^{j+1}} ,\quad \overline{m} :=\sum_{j=1}^\infty j\,\bar{t}_j \bar{z}^{j-1}-\dfrac{x}{\bar{z}}+\sum_{j\geq 1}\dfrac{\bar{S}_{j+1}}{\bar{z}^{j+1}}.$$ The coefficients in the expansions and depend on a complex variable $x$ and two infinite sets of complex variables ${\boldsymbol{\mathrm{t}}}:=(t_1,t_2,\ldots)$ and ${\bar{\boldsymbol{\mathrm{t}}}}:=(\bar{t}_1,\bar{t}_2,\ldots)$. The d2-Toda hierarchy is encoded in the equation $$\label{2.a} {\mathrm{d}}z\wedge{\mathrm{d}}m={\mathrm{d}}\bar{z}\wedge{\mathrm{d}}\overline{m}= {\mathrm{d}}\Big( \log{p}\,{\mathrm{d}}x+ \sum_{j=1}^\infty\Big( (z^{j})_{+}\,{\mathrm{d}}t_j+(\bar{z}^{j})_{-}\,{\mathrm{d}}\bar{t}_j\Big)\Big).$$ Here the $(\pm)$ parts of $p$-series denote the truncations in the positive and strictly negative power terms, respectively. 
As a consequence there exist two *action* functions $S$ and $\bar{S}$ verifying $$\begin{aligned} {\mathrm{d}}S&=m\,{\mathrm{d}}z+\log{p}\,{\mathrm{d}}x+ \sum_{j=1}^\infty\Big( (z^{j})_{+}\,{\mathrm{d}}t_j+(\bar{z}^{j})_{-}\,{\mathrm{d}}\bar{t}_j\Big), \\ {\mathrm{d}}\bar{S}&=\overline{m}\,{\mathrm{d}}\bar{z}+\log{p}\,{\mathrm{d}}x+\sum_{j=1}^\infty \Big( (z^{j})_{+}\,{\mathrm{d}}t_j+(\bar{z}^{j})_{-}\,{\mathrm{d}}\bar{t}_j\Big),\end{aligned}$$ and such that they admit expansions $$\label{act} S =\sum_{j=1}^\infty t_j z^{j}+x\,\log{z}-\sum_{j\geq 1}\dfrac{S_{j+1}}{jz^{j}},\quad \bar{S} =\sum_{j=1}^\infty \bar{t}_j \bar{z}^{j}-x\,\log{\bar{z}}-\bar{S}_0-\sum_{j\geq 1}\dfrac{\bar{S}_{j+1}}{j\bar{z}^{j}}.$$ From one derives the d2-Toda hierarchy in Lax form $$\label{d3} \dfrac{\partial \mathcal{K}}{\partial t_j}=\{(z^j)_+,\mathcal{K}\},\quad \dfrac{\partial \mathcal{K}}{\partial \bar{t}_j}=\{(\bar{z}^j)_-,\mathcal{K}\},$$ where $\mathcal{K}=z,\,m,\,\bar{z},\,\overline{m}$, and we are using the Poisson bracket $\{f,g\}:=p\,(f_p\,g_x-f_x\,g_p)$. The following result was proved by Takasaki and Takebe (see [@tt]): Let $(P(z,m),Q(z,m))$ and $(\overline{P}(\bar{z},\overline{m}),\overline{Q}(\bar{z},\overline{m}))$ be functions such that $$\{P,Q\}=\{z,m\},\quad \{\overline{P},\overline{Q}\}=\{\bar{z},\overline{m}\}.$$ If $(z,m)$ and $(\bar{z},\overline{m})$ are functions which can be expanded in the form - and satisfy the pair of constraints $$\label{dstring} P(z,m)=\overline{P}(\bar{z},\overline{m}),\quad Q(z,m)=\overline{Q}(\bar{z},\overline{m}),$$ then they verify $\{z,m\}=\{\bar{z},\overline{m}\}=1$ and are solutions of the Lax equations for the d2-Toda hierarchy. Constraints of the form are called *dispersionless string equations*. In this paper we are concerned with the system . 
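As an elementary consistency check (a sympy illustration added here, not part of the original derivation), the Lax equation for the first flow, written with the bracket just defined, reproduces the dispersionless Toda system $u_{t_1}=v_x$, $v_{t_1}=v\,u_x$ for the reduced Lax function $z=p+u+v/p$ considered below:

```python
import sympy as sp

x, t1, p = sp.symbols('x t1 p')
u = sp.Function('u')(x, t1)
v = sp.Function('v')(x, t1)

def pb(f, g):
    # Poisson bracket {f, g} = p (f_p g_x - f_x g_p) used in the Lax equations
    return p*(sp.diff(f, p)*sp.diff(g, x) - sp.diff(f, x)*sp.diff(g, p))

z      = p + u + v/p   # reduced Lax function z = p + u + v/p
z_plus = p + u         # (z)_+ : truncation to non-negative powers of p

# The Lax flow dz/dt_1 = {(z)_+, z} should be equivalent to the
# dispersionless Toda system  u_{t1} = v_x ,  v_{t1} = v u_x :
rhs = sp.expand(pb(z_plus, z))
lhs = sp.diff(z, t1)
residual = (rhs - lhs).subs({sp.diff(u, t1): sp.diff(v, x),
                             sp.diff(v, t1): v*sp.diff(u, x)})
print(sp.simplify(residual))   # 0
```

Substituting the expected flow equations makes the residual vanish identically, confirming that matching powers of $p$ in the Lax equation yields exactly this system.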
The first equation $z=\bar{z}$ of defines the 1-Toda reduction of the d2-Toda hierarchy $$\label{d4} z=\bar{z}=p+u+\dfrac{v}{p},$$ where $$\label{d6} u=\partial_x S_2,\quad \log{v}=-\partial_x\bar{S}_0.$$ As a consequence the Lax equations imply that $u$ and $v$ depend on $({\boldsymbol{\mathrm{t}}},{\bar{\boldsymbol{\mathrm{t}}}})$ through the combination ${\boldsymbol{\mathrm{t}}}-{\bar{\boldsymbol{\mathrm{t}}}}$. Due to there are two branches of $p$ as a function of $z$ $$\begin{aligned} \label{d5} \nonumber &p(z)=\dfrac{1}{2}\Big((z-u)+\sqrt{(z-u)^2-4v}\Big)=z-u-\dfrac{v}{z}+\cdots\\\\ \nonumber &\bar{p}(z)=\dfrac{1}{2}\Big((z-u)-\sqrt{(z-u)^2-4v}\Big)=\dfrac{v}{z}+\cdots.\end{aligned}$$ To characterize the members of the d1-Toda hierarchy of integrable systems as well as to solve the string equations it is required to determine $(z^j)_{-}(p(z))$ and $(z^j)_{+}(\bar{p}(z))$ in terms of $(u,v)$. By using it is clear that there are functions $\alpha_j,\,\beta_j,\,\bar{\alpha}_j,\,\bar{\beta}_j$, which depend polynomially on $z$, such that $$\begin{aligned} \partial_{t_j} S(z)=(z^j)_+(p(z))&=\alpha_j+\beta_j\,p(z),\quad \partial_{\bar{t}_j} S(z)=(z^j)_-(p(z))=\bar{\alpha}_j+\bar{\beta}_j\,p(z),\\ \partial_{t_j} \bar{S}(z)=(z^j)_+(\bar{p}(z))&=\alpha_j+\beta_j\,\bar{p}(z),\quad \partial_{\bar{t}_j} \bar{S}(z)=(z^j)_-(\bar{p}(z))=\bar{\alpha}_j+\bar{\beta}_j\,\bar{p}(z),\end{aligned}$$ and $$\label{d8} \bar{\alpha}_j=z^j-\alpha_j,\quad \bar{\beta}_j=-\beta_j.$$ Now we have $$\label{d7} \alpha_j+\beta_j\,p(z)=\partial_{t_j} S(z)= z^j+\mathcal{O}\Big(\dfrac{1}{z}\Big),\quad \alpha_j+\beta_j\,\bar{p}(z)=\partial_{t_j}\bar{S}(z) =-\partial_{t_j} \bar{S}_0+ \mathcal{O}\Big(\dfrac{1}{z}\Big),$$ so that $$\label{d7aa} \alpha_j=\dfrac{1}{2}\Big(z^j-\partial_{t_j} \bar{S}_0-(p+\bar{p})\,\beta_j\Big),\quad \beta_j=\Big(\dfrac{z^j}{p-\bar{p}}\Big)_\oplus,$$ where $(\;)_\oplus$ and $(\;)_\ominus$ stand for the projection of $z$-series on the positive and strictly negative
powers, respectively. Thus, by introducing the generating function $$\label{d7b} R :=\dfrac{z}{p-\bar{p}}=\dfrac{z}{\sqrt{(z-u)^2-4v}}=\sum_{k\geq 0}\dfrac{r_k(u,v)}{z^k},\quad r_0=1.$$ we deduce $$\begin{aligned} \label{d7c} \nonumber (z^j)_+(p(z))&=z^j-\dfrac{1}{2}\,\partial_{t_j} \bar{S}_0-\dfrac{z}{2\,R}\,\Big(z^{j-1}\,R\Big)_\ominus\\ &=z^j-\dfrac{1}{2}(\partial_{t_j} \bar{S}_0+r_j)-\dfrac{1}{2\,z}\,(r_{j+1}-u\,r_j)+\mathcal{O}\Big(\dfrac{1}{z^2}\Big).\end{aligned}$$ Hence $$\partial_{t_j} \bar{S}_0=-r_j,\quad \partial_{t_j} S_2=\dfrac{1}{2}\,(r_{j+1}-u\,r_j),$$ so that the equations of the $d1$-Toda hierarchy are given by $$\label{dto1} \partial_{t_j} u=\dfrac{1}{2}\,\partial_x\,(r_{j+1}-u\,r_j),\quad \partial_{t_j} v=v\,\partial_x\,r_{j}.$$ Furthermore, we have found $$\label{d7d0} (z^j)_-(p(z))= -\dfrac{1}{2}\,r_j+\dfrac{z}{2\,R}\,\Big(z^{j-1}\,R\Big)_\ominus, \quad (z^j)_+(\bar{p}(z))=r_j+(z^j)_-(p(z)).$$ Hence, the first terms of their asymptotic expansions as $z\rightarrow\infty$ are $$\label{d7d} (z^j)_-(p(z))=\dfrac{1}{2\,z}\,(r_{j+1}-u\,r_j)+\mathcal{O}\Big(\dfrac{1}{z^2}\Big), \quad (z^j)_+(\bar{p}(z))=r_j+\dfrac{1}{2\,z}\,(r_{j+1}-u\,r_j)+\mathcal{O}\Big(\dfrac{1}{z^2}\Big).$$ Notice that since $r_0=1$ and $ r_1=u$, these last equations hold for $j\geq 0$. Hodograph solutions of the $1$-dToda hierarchy ---------------------------------------------- In the above paragraph we have used the first string equation of . Let us now deal with the second one. 
To this end we set $$m=\overline{m}= \sum_{j=1}^\infty j\,t_j\,(z^{j-1})_++ \sum_{j=1}^\infty j\,\bar{t}_j\,(z^{j-1})_-,$$ which leads to the following expressions for the Orlov functions $(m,\overline{m})$ $$\begin{aligned} \label{d8a} \nonumber &m(z)=\sum_{j=1}^\infty j\,t_j\,z^{j-1}+\sum_{j=1}^\infty j\,(\bar{t}_j-t_j)\,(z^{j-1})_-(p(z)),\\\\ \nonumber &\overline{m}(z)=\sum_{j=1}^\infty j\,\bar{t}_j\,z^{j-1}-\sum_{j=1}^\infty j\,(\bar{t}_j-t_j)\,(z^{j-1})_+(\bar{p}(z)).\end{aligned}$$ In order to apply Theorem 1 we have to determine $u$ and $v$ and ensure that $(m,\overline{m})$ verify the correct asymptotic form -. Both things can be achieved by reducing to the form $$\begin{aligned} \label{d9} \nonumber &\dfrac{x}{z}+\sum_{j\geq 2}\dfrac{1}{z^j}S_j=\sum_{j=1}^\infty j\,(\bar{t}_j-t_j)\,(z^{j-1})_-(p(z)),\\\\ \nonumber &-\dfrac{x}{z}+\sum_{j\geq 2}\dfrac{1}{z^j}\bar{S}_j=-\sum_{j=1}^\infty j\,(\bar{t}_j-t_j)\,(z^{j-1})_+(\bar{p}(z)),\end{aligned}$$ and equating coefficients of powers of $z$. Indeed, from we see that identifying the coefficients of $z^{-1}$ in both sides of the two equations of yields the same relation. This equation together with the one supplied by identifying the coefficients of the constant terms in the second equation of provides the following system of *hodograph-type* equations to determine $(u,v)$ $$\label{ho}\begin{cases} \sum_{j=1}^\infty j\,(\bar{t}_j-t_j) r_{j-1}=0,\\\\ \dfrac{1}{2}\,\sum_{j=1}^\infty j\,(\bar{t}_j-t_j)\,r_j=x. 
\end{cases}$$ It can be rewritten as $$\label{hoin}\everymath{\displaystyle} \begin{cases} \oint_{\gamma}\dfrac{dz}{2\pi i} \dfrac{V_{z}}{\sqrt{(z-u)^2-4v}}\, =0,\\\\ \oint_{\gamma}\dfrac{dz}{2\pi i}\dfrac{z\,V_{z}}{\sqrt{(z-u)^2-4v}}\, =-2\,x, \end{cases}$$ where $\gamma$ is a large enough positively oriented closed path and $V_{z}$ denotes the derivative with respect to $z$ of the function $$\label{U} V(z,{\boldsymbol{\mathrm{t}}}-{\bar{\boldsymbol{\mathrm{t}}}}):=\sum_{j=1}^\infty (t_j-\bar{t}_j)\,z^j.$$ The remaining equations arising from characterize the functions $S_j^{(0)}$ and $\overline{S}_j^{(0)}$ for $j\geq 1$ in terms of $(u,v)$. Therefore we have characterized a solution $(z,m)$ and $(\bar{z},\overline{m})$ of the system of string equations verifying the conditions of Theorem 1 and, consequently, it solves the d1-Toda hierarchy. Planar limit of the Hermitian matrix model and bubble break-off in Hele-Shaw flows ================================================================================== The Hermitian matrix model -------------------------- If we write the partition function of the Hermitian matrix model in terms of eigenvalues and slow variables ${\boldsymbol{\mathrm{t}}}:=\epsilon\,{\boldsymbol{t}}$, where $\epsilon=1/N$, we get $$\label{mat} Z_n(N\,{\boldsymbol{\mathrm{t}}})=\int_{\mathbb{R}^n}\prod_{k=1}^{n}\Big(d\,x_k\,e^{N\,V(x_k,{\boldsymbol{\mathrm{t}}})}\Big)(\Delta(x_1,\cdots,x_n))^2,\quad V(z,{\boldsymbol{\mathrm{t}}}):=\sum_{k\geq 1}t_k\,z^k.$$ The large $N$-limit of the model is determined by the asymptotic expansion of $Z_n(N\,{\boldsymbol{\mathrm{t}}})$ for $n=N$ as $N\rightarrow \infty$ $$Z_N(N\,{\boldsymbol{\mathrm{t}}}) =\int_{\mathbb{R}^N}\prod_{k=1}^{N}\Big(d\,x_k\,e^{N\,V(x_k,{\boldsymbol{\mathrm{t}}})}\Big)(\Delta(x_1,\cdots,x_N))^2.$$ It is well-known [@avm] that $Z_n({\boldsymbol{t}})$ is a $\tau$-function of the semi-infinite 1-Toda hierarchy, so that there exists a $\tau$-function
$\tau(\epsilon,x,{\boldsymbol{\mathrm{t}}})$ of the dispersionful 1-Toda hierarchy verifying $$\label{rel} \tau(\epsilon,\epsilon\,n,{\boldsymbol{\mathrm{t}}})=Z_n(N\,{\boldsymbol{\mathrm{t}}}),$$ and consequently $$\label{rel1} \tau(\epsilon,1,{\boldsymbol{\mathrm{t}}})=Z_N(N\,{\boldsymbol{\mathrm{t}}}).$$ Hence the large $N$-limit expansion of the partition function $$\label{tau1} {\mathbb{Z}}_N(N\,{\boldsymbol{\mathrm{t}}})=\exp{\Big(N^2\,\mathbb{F}\Big)},\quad \mathbb{F}=\sum_{k\geq 0}\dfrac{1}{N^{2k}}\,F^{(2k)},$$ is determined by a solution of the dispersionful 1-Toda hierarchy at $x=1$. As a consequence of the above analysis one concludes that the leading term (planar limit) $F^{(0)}$ is determined by a solution of the 1-dToda hierarchy at $x=1$. Furthermore, the leading terms of the $N$-expansions of the main objects of the hermitian matrix model can be expressed in terms of quantities of the 1-dToda hierarchy. For example, in the *one-cut* case, the density of eigenvalues $$\rho(z)=M(z)\,\sqrt{(z-a)(z-b)},$$ is supported on a single interval $[a,b]$. 
These objects are related to the leading term $W^{(0)}$ of the one-point correlator [@gin] $$W(z):=\dfrac{1}{N}\,\sum_{j\geq 0}\dfrac{1}{z^{j+1}}\langle tr M^j\rangle=\dfrac{1}{z}+\dfrac{1}{N^2}\,\sum_{j\geq 1}\dfrac{1}{z^{j+1}}\, \dfrac{\partial \log\,Z_N(N\,{\boldsymbol{\mathrm{t}}})}{\partial t_j},$$ in the form $$W^{(0)}=-\dfrac{1}{2}V_z(z)+i\pi\,\rho(z).$$ On the other hand, it can be proved (see for instance [@eyn]) that $$W^{(0)}=\label{m1} m(z,1,{\boldsymbol{\mathrm{t}}})-\sum_{j=1}^\infty j\,t_j\,z^{j-1},$$ so that and yield $$\begin{aligned} \label{e1} \nonumber &-\dfrac{1}{2}V_z(z)+i\pi\,\rho(z)=-\sum_{j=1}^\infty j\,t_j\,(z^{j-1})_-(p(z))\\\\ \nonumber &=\dfrac{1}{2}\sum_{j=1}^\infty j\,t_j\,r_{j-1}-\dfrac{1}{2}\sum_{j=1}^\infty j\,t_j\,z^{j-1}+\dfrac{1}{2}(p-\bar{p})\sum_{j=2}^\infty j\,t_j\,\Big(z^{j-2}\,R\Big)_\oplus,\end{aligned}$$ Since we are setting $\bar{t}_j=0,\,\forall j\geq 1$, according to the first hodograph equation the first term in the last equation vanishes. Therefore the density of eigenvalues and its support $[a,b]$ are characterized by $$\begin{aligned} \label{den} \nonumber \rho(z)&:=\dfrac{1}{2\pi i}\Big(\dfrac{V_z}{\sqrt{(z-a)(z-b)}}\Big)_\oplus\,\sqrt{(z-a)(z-b)},\\\\ \nonumber a&:=u-2\,\sqrt{v},\quad b:=u+2\,\sqrt{v},\end{aligned}$$ where we set $x=1$ in all the $x$-dependent functions. Observe that according to $$\label{e2} i\,\pi\,\rho(z)=\dfrac{1}{2}\,V_z(z)+\dfrac{x}{z}+\mathcal{O}\Big(\dfrac{1}{z^2}\Big),\quad z\rightarrow\infty,$$ so that the constraint $x=1$ means that the density of eigenvalues is normalized on its support $$\int_a^b\, \rho(z)\,dz=1.$$ Moreover, from we obtain $$\label{ho3} \oint_{\gamma}\dfrac{dz}{2\pi i}\dfrac{V_{z}}{\sqrt{(z-a)(z-b)}}\, =0,\quad \oint_{\gamma}\dfrac{dz}{2\pi i}\dfrac{z\,V_{z}}{\sqrt{(z-a)(z-b)}}\, =-2,$$ with $\gamma$ being a positively oriented closed path encircling the interval $[a,b]$. 
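As a concrete numerical check of the two contour conditions just written (an illustration added here; the Gaussian choice and the radius-3 contour are assumptions of this sketch, not taken from the analysis above), set $t_2=-1/2$ and all other $t_k=0$, so that $V(z)=-z^2/2$. The conditions are then solved by the symmetric support $[a,b]=[-2,2]$, and the density becomes Wigner's semicircle law:

```python
import numpy as np

def ctrapz(f, t):
    # plain trapezoidal rule (works for complex integrands)
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))

# Gaussian potential V(z) = -z**2/2  (t_2 = -1/2, all other t_k = 0)
b = 2.0                                     # symmetric one-cut support [-b, b]
theta = np.linspace(0.0, 2.0*np.pi, 4001)
zc = 3.0*np.exp(1j*theta)                   # contour of radius 3 encircling [-2, 2]
sqrt_branch = zc*np.sqrt(1.0 - b**2/zc**2)  # branch of sqrt((z-a)(z-b)) ~ z at infinity
Vz = -zc                                    # V_z on the contour
dz = 1j*zc                                  # dz = i z dtheta on the circle

I1 = ctrapz(Vz/sqrt_branch*dz, theta)/(2j*np.pi)     # first condition: should vanish
I2 = ctrapz(zc*Vz/sqrt_branch*dz, theta)/(2j*np.pi)  # second condition: -2x with x = 1
print(abs(I1) < 1e-10, abs(I2 + 2.0) < 1e-10)        # True True

# The resulting density is Wigner's semicircle, normalized on [-2, 2]:
xs = np.linspace(-b, b, 200001)
rho = np.sqrt(b**2 - xs**2)/(2.0*np.pi)
print(abs(ctrapz(rho, xs) - 1.0) < 1e-4)             # True
```

Both contour integrals reproduce the conditions for this simplest potential, and the density comes out normalized to one, i.e. $x=1$.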
These are the equations which determine the zero-genus contribution or planar limit to the partition function of the hermitian model [@eyn]-[@mig]. Bubble break-off in Hele-Shaw flows ----------------------------------- A Hele-Shaw cell is a narrow gap between two plates filled with two fluids: say oil surrounding one or several bubbles of air. Let $D$ denote the domain in the complex plane ${\mathbb{C}}$ of the variable $\lambda$ occupied by the air bubbles. By assuming that $D$ is an *algebraic domain* [@lee], the boundary $\gamma$ of $D$ is characterized by a *Schwarz function* ${\mathbb{S}}={\mathbb{S}}(\lambda)$ such that $$\label{sch} \lambda^*={\mathbb{S}}(\lambda),\quad \lambda\in\gamma.$$ The geometry of the domain ${\mathbb{C}}-D$ is completely encoded in ${\mathbb{S}}$ and it can be conveniently described in terms of the *Schottky double* [@wz]: a Riemann surface $\mathcal{R}$ resulting from gluing two copies $H_{\pm}$ of ${\mathbb{C}}-D$ through $\gamma$, adding two points at infinity $(\infty,\overline{\infty})$ and defining the complex coordinates $$\begin{cases} \lambda_+(\lambda)=\lambda,\quad \lambda\in H_+,\\ \lambda_-(\lambda)=\lambda^*,\quad \lambda\in H_-. \end{cases}$$ In particular ${\mathbb{S}}\,d\lambda$ can be extended to a unique meromorphic differential $\omega$ on $\mathcal{R}$. The evolution of $\gamma$ is governed by D’Arcy law: the velocity in the oil domain is proportional to the gradient of the pressure. In the absence of surface tension, pressure is continuous across $\gamma$ and then if the bubbles are assumed to be kept at zero pressure, we are led to the Dirichlet boundary problem $$\label{dir} \begin{cases} \bigtriangleup \mathcal{P}=0,\quad \mbox{on ${\mathbb{C}}-D$},\\ \quad \mathcal{P}=0 \quad \mbox{on $\gamma$},\\ \quad \mathcal{P}\rightarrow -\log|\lambda|,\quad \lambda\rightarrow\infty. 
\end{cases}$$ If one assumes D’Arcy law in the form $\vec{v}=-2\,\vec{\nabla}\mathcal{P}$, then by introducing the function $$\label{cpo} \Phi(\lambda):=\xi(\lambda)+i\,\mathcal{P}(\lambda),$$ where $\xi$ and $\mathcal{P}$ are the *stream function* and the pressure, respectively, D’Arcy law can be rewritten as $$\label{dar} \partial_t\, {\mathbb{S}}=2\,i\,\partial_{\lambda}\,\Phi,$$ where $t$ stands for the time variable. In the set-up considered in [@lee] air is drawn out from two fixed points of a simply-connected air bubble, making the bubble break into two emergent bubbles with highly curved tips. Before the break-off the oil-air interface remains free of cusp-like singularities and develops a smooth neck. As shown in [@wz]-[@lee], the condition for bubbles to be at equal pressure implies that the integral $$\Pi:=\dfrac{1}{2}\oint_{\beta}\,\omega,$$ where $\omega$ is the meromorphic extension of ${\mathbb{S}}\,d\lambda$ to $\mathcal{R}$ and $\beta$ is a cycle connecting the bubbles, is a constant of the motion. Since at break-off $\beta$ contracts to a point, it is obvious that a necessary condition for break-off is that $\Pi$ vanishes. 
The following pair of complex-valued functions were introduced in [@lee] to describe the bubble break-off near the breaking point $$\label{car} X(\lambda):=\dfrac{1}{2}\Big(\lambda+{\mathbb{S}}(\lambda)\Big),\quad Y(\lambda):=\dfrac{1}{2\,i}\Big(\lambda-{\mathbb{S}}(\lambda)\Big).$$ They analytically extend the Cartesian coordinates $(X,Y)$ of the interface $\gamma$ $$\label{car1} X=Re\, \lambda,\quad Y=Im\, \lambda,\quad \lambda\in\gamma,$$ and allow one to write the evolution law in the form $$\label{dar1} \partial_t\, Y(X)=-\partial_X\,\Phi(X).$$ The analysis of [@lee] concludes that after the break-off the local structure of a small part of the interface containing the tips of the bubbles falls into universal classes characterized by two even integers $(4\,n, 2),\, n\geq 1,$ and a finite number $2n$ of real deformation parameters $t_k$. By assuming symmetry of the curve with respect to the $X$-axis, the general solution for the curve and the potential in the $(4\,n, 2)$ class are $$\label{den} Y:=\Big(\dfrac{U_X}{\sqrt{(X-a)(X-b)}}\Big)_\oplus\,\sqrt{(X-a)(X-b)},\quad \Phi=-\sqrt{(X-a)(X-b)},$$ where $a$ and $b$ are the positions of the bubbles' tips and $$\label{U} U(X,{\boldsymbol{\mathrm{t}}}):=\sum_{j=1}^{2n} t_{j+1}\,X^{j+1}.$$ Here the subscript $\oplus$ denotes the projection of $X$-series on the positive powers. Due to the physical assumptions of the problem, the function $Y$ inherits two conditions for its expansion as $X\rightarrow\infty$ $$\label{hoh} Y(X)=\sum_{j=1}^{2n} (j+1)\,t_{j+1}\,X^{j}+\sum_{n=0}^{\infty}\dfrac{Y_n}{X^n},$$ which determine the positions $a$, $b$ of the tips. The conditions are 1. From $\Phi\rightarrow -i\,\log \lambda$ as $\lambda \rightarrow \infty$. Hence it follows that the constant term $Y_0$ in should be equal to $t$. 2. The coefficient $Y_1$ in front of $X^{-1}$ turns out to be equal to $\Pi$, so that it must vanish for a break-off [@lee]. 
As it was shown in [@lee], imposing these two conditions on leads to a pair of hodograph equations which arise in the dispersionless AKNS hierarchy. However, from it is straightforward to see that these equations coincide with the hodograph equations associated with the system of string equations provided one sets $$\begin{aligned} \label{sett} \nonumber X&=z,\quad Y=2\,m-V_z,\quad \Phi=z-u-2\,p,\\\\ \nonumber t_j&=0,\quad \forall j\geq 2n+2;\quad t=t_1,\quad x=\dfrac{\Pi}{2}=0.\end{aligned}$$ For instance, we observe that the evolution law follows in a very natural way from the d1-Toda hierarchy. Indeed, from and we have $$p=\dfrac{1}{2}\,(z-u-\Phi),$$ so that implies $$\partial_t Y=2\,\partial_{t_1} m(z)-1=2\,\partial_{z}(z)_+-1 =2\,\partial_{z}(p+u)-1=-\partial_{z}\Phi=-\partial_{X}\Phi.$$ In this way the integrable structure associated to the system of string equations of the d2-Toda hierarchy manifests a duality between the planar limit of the Hermitian matrix model and the bubble break-off in Hele-Shaw cells. According to this relationship the density of eigenvalues $\rho$ and the end-points $a,\,b$ of its support in the Hermitian model are identified with the interface function $Y$ and the positions of the bubbles' tips, respectively, in the Hele-Shaw model. [**Acknowledgments**]{} The authors wish to thank the Spanish Ministerio de Educacion y Ciencia (research project FIS2005-00319) and the European Science Foundation (MISGAM programme) for their financial support. 
A. Gerasimov, A. Marshakov, A. Mironov, A. Morozov and A. Orlov, Nucl. Phys. B [**357**]{}, 565 (1991)
S. Y. Alexandrov, V. A. Kazakov and I. K. Kostov, Nucl. Phys. B [**667**]{}, 90 (2003)
M. Adler and P. van Moerbeke, Comm. Math. Phys. [**203**]{}, 185 (1999); Comm. Math. Phys. [**207**]{}, 589 (1999)
K. Takasaki and T. Takebe, Rev. Math. Phys. [**7**]{}, 743 (1995)
P. B. Wiegmann and A. Zabrodin, Comm. Math. Phys. [**213**]{}, 523 (2000)
M. Mineev-Weinstein, P. Wiegmann and A. Zabrodin, Phys. Rev. Lett. [**84**]{}, 5106
O. Agam, E. Bettelheim, P. Wiegmann and A. Zabrodin, Phys. Rev. Lett. [**88**]{}, 236801 (2002)
R. Teodorescu, E. Bettelheim, O. Agam, A. Zabrodin and P. Wiegmann, Nucl. Phys. B [**700**]{}, 521 (2004); Nucl. Phys. B [**704**]{}, 407 (2005)
I. Krichever, M. Mineev-Weinstein, P. Wiegmann and A. Zabrodin, Physica D [**198**]{}, 1 (2004)
S.-Y. Lee, E. Bettelheim and P. Wiegmann, Physica D [**219**]{}, 23 (2006)
L. Martinez Alonso and E. Medina, Phys. Lett. B [**610**]{}, 227 (2005)
L. Martinez Alonso, M. Mañas and E. Medina, J. Math. Phys. [**47**]{}, 83512 (2006)
P. Di Francesco, P. Ginsparg and J. Zinn-Justin, *2D Gravity and Random Matrices*, hep-th/9306153
B. Eynard, *An introduction to random matrices*, lectures given at Saclay, October 2000, http://www-spht.cea.fr/articles/t01/014/
E. Brézin, C. Itzykson, G. Parisi and J.-B. Zuber, Comm. Math. Phys. [**59**]{}, 35 (1978)
D. Bessis, C. Itzykson, G. Parisi and J.-B. Zuber, Adv. in Appl. Math. [**1**]{}, 109 (1980)
C. Itzykson and J.-B. Zuber, J. Math. Phys. [**21**]{}, 411 (1980)
A. A. Migdal, Phys. Rep. [**102**]{}, 199 (1983)
\[lastpage\] [^1]: Partially supported by MEC project FIS2005-00319 and ESF programme MISGAM
--- abstract: 'Padé Approximants can be used to go beyond Vector Meson Dominance in a systematic approximation. We illustrate this fact with the case of the pion vector form factor and extract values for the first two coefficients of its Taylor expansion. Padé Approximants are shown to be a useful and simple tool for incorporating high-energy information, allowing an improved determination of these Taylor coefficients.' --- [**Vector Meson Dominance\ as a first step in a systematic approximation:\ the pion vector form factor** ]{}\ [**P. Masjuan, S. Peris**]{} and [**J.J. Sanz-Cillero**]{}\ Grup de Física Teòrica and IFAE\ Universitat Autònoma de Barcelona, 08193 Barcelona, Spain.\ Introduction ============ It has been known for a long time that the pion vector form factor (VFF) in the space-like region is very well described by a monopole ansatz of the type given by Vector Meson Dominance (VMD) in terms of the rho meson. However, it has remained unclear whether there is a good reason for this from QCD or it is just a mere coincidence and, consequently, it is not known how to go about improving on this ansatz. To begin our discussion, let us define the form factor, $F(Q^2)$, by the matrix element $$\label{def} \langle \pi^{+}(p')| \ \frac{2}{3}\ \overline{u}\gamma^{\mu}u-\frac{1}{3}\ \overline{d}\gamma^{\mu} d- \frac{1}{3}\ \overline{s}\gamma^{\mu} s\ | \pi^{+}(p)\rangle= (p+p')^{\mu} \ F(Q^2)\ ,$$ where $Q^2=-(p'-p)^2$, such that $Q^2>0$ corresponds to space-like data. Since the spectral function for the corresponding dispersive integral for $F(Q^2)$ starts at twice the pion mass, the form factor can be approximated by a Taylor expansion in powers of the momentum for $|Q^2|< (2 m_\pi)^2$. At low momentum, Chiral Perturbation Theory is the best tool for organizing the pion interaction in a systematic expansion in powers of momenta and quark masses [@chpt-Weinberg; @chpt-SU2; @chpt-SU3]. 
With every order in the expansion, there comes a new set of coupling constants, the so-called low-energy constants (LECs), which encode all the QCD physics from higher energies. This means, in particular, that the coefficients in the Taylor expansion can be expressed in terms of these LECs and powers of the quark masses. Consequently, by learning about the low-energy expansion, one may indirectly extract important information about QCD. In principle, the coefficients in the Taylor expansion may be obtained by means of a polynomial fit to the experimental data in the space-like region [^1] below $Q^2=4m_{\pi}^2$. However, such a polynomial fit implies a tradeoff. Although, in order to decrease the (systematic) error of the truncated Taylor expansion, it is clearly better to go to a low-momentum region, this also downsizes the set of data points included in the fit which, in turn, increases the (statistical) error. In order to achieve a smaller statistical error one would have to include experimental data from higher energies, i.e. from $Q^2> 4 m_\pi^2$. Since this is not possible in a polynomial fit, the use of alternative mathematical descriptions may be a better strategy. One such description, which includes time-like data as well, is based on the use of the Roy equations and Omnés dispersion relations. This is the avenue followed by [@Colangelo; @ColangeloB], which has already produced interesting results on the scalar channel [@Caprini], and which can also be applied to the vector channel. Other procedures have relied on conformal transformations for the joint analysis of both time-like and space-like data [@Yndurain], or subtracted Omnés relations [@Pich; @Portoles]. Further analyses may be found in Ref. [@Caprini2]. On the other hand, as already mentioned above, one may also consider an ansatz of the type $$\label{vmd} F(Q^2)_{_{\rm VMD}}=\left(1+\frac{Q^2}{M^2_{_{\rm VMD}} }\right)^{-1}\ .$$ Even though the simplicity of the form of Eq. 
(\[vmd\]) is quite astonishing, it reproduces the space-like data rather well, even for a range of momentum of the order of a few GeV, i.e. $Q^2\gg 4 m_\pi^2$. If this fact is not merely a fluke, it could certainly be interesting to consider the form (\[vmd\]) as the first step in a systematic approximation, which would then allow improvement on this VMD ansatz. In this article, we would like to point out that the previous VMD ansatz for the form factor (\[vmd\]) can be viewed as the first element in a sequence of Padé Approximants (PAs) which can be constructed in a systematic way. By considering higher-order terms in the sequence, one may be able to describe the space-like data with an increasing level of accuracy [^2]. Of course, whether this is actually the case and the sequence is a convergent one in the strict mathematical sense or, on the contrary, the sequence eventually diverges, remains to be seen. But the important difference with respect to the traditional VMD approach is that, as a Padé sequence, the approximation is well-defined and can be systematically improved upon. Although polynomial fitting is more common, in general, rational approximants (i.e. ratios of two polynomials) are able to approximate the original function in a much broader range in momentum than a polynomial [@Baker]. This will be the great advantage of the Padés compared to other methods: they allow the inclusion of low and high energy information in a rather simple way which, furthermore, can in principle be systematically improved upon. In certain cases, like when the form factor obeys a dispersion relation given in terms of a positive definite spectral function (i.e. becomes a Stieltjes function), it is known that the Padé sequence is convergent everywhere on the complex plane, except on the physical cut [@PerisPades]. Another case of particular interest is in the limit of an infinite number of colors in which the form factor becomes a meromorphic function. 
In this case there is also a theorem which guarantees convergence of the Padé sequence everywhere in a compact region of the complex plane, except perhaps at a finite number of points (which include the poles in the spectrum contained in that region) [@PerisMasjuan07]. In the real world, in which a general form factor has a complicated analytic structure with a cut, and whose spectral function is not positive definite, we do not know of any mathematical result assuring the convergence of a Padé sequence [@JuanjoVirtoMasjuan]. One just has to try the approximation on the data to learn what happens. In this work we have found that, to the precision allowed by the experimental data, there are sequences of PAs which improve on the lowest order VMD result in a rather systematic way. This has allowed us to extract the values of the lowest-order coefficients of the low-energy expansion. We would like to emphasize that, strictly speaking, the Padé Approximants to a given function $f(z)$ are ratios $R^N_M(z)$ of two polynomials $P_N(z)$ and $Q_M(z)$ (with degree $N$ and $M$, respectively), constructed such that the Taylor expansion around the origin exactly coincides with that of $f(z)$ up to the highest possible order, i.e. $f(z)-R^N_M(z) ={{\cal O}}(z^{M+N+1})$. However, in our case the Taylor coefficients are not known. They are, in fact, the information we are seeking. Our strategy will consist in determining these coefficients by a least-squares fit of a Padé Approximant to the vector form factor data in the space-like region. There are several types of PAs that may be considered. In order to achieve fast numerical convergence, the choice of which one to use is largely determined by the analytic properties of the function to be approximated. In this regard, a glance at the time-like data of the pion form factor makes it obvious that the form factor is clearly dominated by the rho meson contribution. The effect of higher resonance states, although present, is much more suppressed.
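The defining property $f(z)-R^N_M(z)={{\cal O}}(z^{M+N+1})$ can be checked on a toy function. The sketch below (ours, purely illustrative and not part of the analysis) uses $f(z)=e^z$, whose $[1/1]$ approximant built from the series $1+z+z^2/2$ is $(2+z)/(2-z)$; shrinking $z$ by a factor of 100 should then shrink the mismatch by roughly $100^3$.

```python
import math

# Toy check (ours) of the Pade defining property f(z) - R^N_M(z) = O(z^{M+N+1}).
# For f(z) = exp(z) the [1/1] approximant built from 1 + z + z^2/2 is
# R(z) = (2 + z)/(2 - z); its error must therefore scale like z^3.

def pade_11_exp(z):
    return (2.0 + z) / (2.0 - z)

err_big = abs(math.exp(0.1) - pade_11_exp(0.1))        # error at z = 0.1
err_small = abs(math.exp(0.001) - pade_11_exp(0.001))  # error at z = 0.001

ratio = err_big / err_small
print(ratio)  # roughly 100**3 = 1e6, confirming the O(z^3) mismatch
```

The cubic scaling of the error is exactly the statement that the first three Taylor coefficients of the approximant and of $e^z$ coincide.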
In these circumstances the natural choice is a $P^{L}_{1}$ Padé sequence [@Baker], i.e. the ratio of a polynomial of degree $L$ over a polynomial of degree one [^3]. Notice that, from this perspective, the VMD ansatz in (\[vmd\]) is nothing but the $P^0_1$ Padé Approximant. However, to test the aforementioned single-pole dominance, one should check the degree to which the contribution from resonances other than the rho may be neglected. Consequently, we have also considered the sequence $P^{L}_{2}$, and the results confirm those found with the PAs $P^{L}_{1}$. Furthermore, for completeness, we have also considered the so-called Padé-Type approximants (PTs) [@math; @PerisMasjuan08]. These are rational approximants whose poles are predetermined at some fixed values, which we take to be the physical masses since they are known. Notice that this is different from the case of the ordinary PAs, whose poles are left free and determined by the fit. Finally, we have also considered an intermediate case, the so-called Partial-Padé approximants (PPs) [@math], in which some of the poles are predetermined (again at the physical masses) and some are left free. We have fitted all these versions of rational approximants to all the available pion VFF space-like data [@Amendolia]-[@Dally]. The result of the fit is rather independent of the kind of rational approximant sequence used and all the results show consistency among themselves. The structure of this letter is the following. In section \[sec:model\] we begin by testing the efficiency of the $P^L_1$ Padés with the help of a model. In section \[sec:FF\] we apply this very same method to the experimental VFF. Firstly, in sec. \[sec:PAL1\], we use the Padé Approximants $P^L_1$; then, in Sec. \[sec:PAL2\], this result is cross-checked with a $P^L_2$ PA. Finally, in sec. \[sec:PTPP\], we study the Padé-Type and Partial-Padé approximants. 
The outcome of these analyses is combined in section \[sec:results\] and some conclusions are extracted. A warm-up model {#sec:model} =============== In order to illustrate the usefulness of the PAs as fitting functions in the way we propose here, we will first use a phenomenological model as a theoretical laboratory to check our method. Furthermore, the model will also give us an idea about the size of possible systematic uncertainties. We will consider a VFF phase-shift with the right threshold behavior and with approximately the physical values of the rho mass and width. The form factor is recovered through a once-subtracted Omnés relation, $$\label{model} F(Q^2)=\exp\left \{-\frac{Q^2}{\pi} \int_{4 \hat{m}_{\pi}^{2}}^{\infty}\ dt\ \frac{\delta(t)}{t (t+Q^2)}\right\}\ ,$$ where $\delta(t)$, which plays the role of the vector form factor phase-shift [@Pich; @Portoles; @Cillero], is given by $$\label{model2} \delta(t)=\tan^{-1}\left[\frac{\hat{M}_{\rho} \hat{\Gamma}_{\rho}(t)}{\hat{M}_{\rho}^2-t} \right]\ ,$$ with the $t$-dependent width given by $$\label{width} \hat{\Gamma}_{\rho}(t)= \Gamma_{0}\ \left( \frac{t}{\hat{M}_{\rho}^2} \right)\ \frac{\sigma^3(t)}{\sigma^3(\hat{M}_{\rho}^2)}\ \theta\left( t- 4 \hat{m}_{\pi}^{2} \right)\ ,$$ and $\sigma(t)=\sqrt{1-4 \hat{m}_{\pi}^{2}/t}$. The input parameters are chosen to be close to their physical values: $$\label{param} \Gamma_{0} = 0.15\ \mathrm{GeV}\quad ,\quad \hat{M}_{\rho}^2= 0.6\ \mathrm{GeV}^2\quad ,\quad 4 \hat{m}_{\pi}^{2}= 0.1 \ \mathrm{GeV}^2\, .$$ We emphasize that the model defined by the expressions (\[model\]-\[width\]) should be considered quite realistic. In fact, it has been used in Refs. [@Pich; @Portoles; @Cillero] for extracting the values for the physical mass and width of the rho meson through a direct fit to the (time-like) experimental data. Expanding $F(Q^2)$ in Eq.
(\[model\]) in powers of $Q^2$ we readily obtain $$\label{expmodel} F(Q^2) \, =\, 1 \, - \, a_1\ Q^2 \, + \, a_2\ Q^4 \, - \ a_3\ Q^6 + ... \,\, ,$$ with known values for the coefficients $a_i$. In what follows, we will use Eq. (\[expmodel\]) as the definition of the coefficients $a_i$. To try to recreate the situation of the experimental data [@Amendolia]-[@Dally] with the model, we have generated fifty “data” points in the region $0.01\leq Q^2\leq 0.25$, thirty data points in the interval $0.25\leq Q^2 \leq 3$, and seven points for $3\leq Q^2\leq 10$ (all these momenta in units of GeV$^2$). These points are taken with vanishing error bars since our purpose here is to estimate the systematic error derived purely from our approximate description of the form factor. We have fitted a sequence of Padé Approximants $P^{L}_{1}(Q^2)$ to these data points and, upon expansion of the Padés around $Q^2=0$, we have used them to predict the values of the coefficients $a_i$. The comparison may be found in Table \[table1\]. The last PA we have fitted to these data is $P^6_1$. Notice that the pole position of the Padés differs from the true mass of the model, given in Eq. (\[param\]).

                       $P^{0}_{1}$   $P^{1}_{1}$   $P^{2}_{1}$   $P^{3}_{1}$   $P^{4}_{1}$   $P^{5}_{1}$   $P^{6}_{1}$   $F(Q^2)$(exact)
  -------------------- ------------- ------------- ------------- ------------- ------------- ------------- ------------- -----------------
  $a_1$(GeV$^{-2}$)    1.549         1.615         1.639         1.651         1.660         1.665         1.670         1.685
  $a_2$ (GeV$^{-4}$)   2.399         2.679         2.809         2.892         2.967         3.020         3.074         3.331
  $a_3$(GeV$^{-6}$)    3.717         4.444         4.823         5.097         5.368         5.579         5.817         7.898
  $s_p$(GeV$^{2}$)     $0.646$       $0.603$       $0.582$       $0.567$       $0.552$       $0.540$       $0.526$       $0.6$

  : [Results of the various fits to the form factor $F(Q^2)$ in the model, Eq. (\[model\]). The exact values for the coefficients $a_i$ in Eq. (\[expmodel\]) are given on the last column.
The last row shows the predictions for the corresponding pole for each Padé ($s_p$), to be compared to the true mass $\hat{M}_{\rho}^{2}=0.6\ $GeV$^2$ in the model.]{}[]{data-label="table1"} A quick look at Table \[table1\] shows that the sequence seems to converge to the exact result, although in a hierarchical way, i.e. much faster for $a_1$ than for $a_2$, and for $a_2$ much faster than for $a_3$, etc. The relative error achieved in determining the coefficients $a_i$ by the last Padé, $P^6_1$, is respectively $0.9\%$, $8\%$ and $26\%$ for $a_1, a_2$ and $a_3$. Naively, one would expect these results to improve as the resonance width decreases since the $P^{L}_{1}$ contains only a simple pole, and this is indeed what happens. Repeating this exercise with the model, but with $\Gamma_0=0.015$ GeV ($10$ times smaller than the previous one), the relative error achieved by $P^6_1$ for the same coefficients as before is $0.12\%$, $1.1\%$ and $4.7\%$. On the other hand, a model with $\Gamma_0$ five times bigger than the first one produces, respectively, differences of $2.1\%$, $14.4\%$ and $37.8\%$. As we have mentioned in the introduction, it is possible to build a variation of the PAs, the Padé-Type Approximants, where one fixes the pole in the denominator at the physical mass and only the numerator is fitted. We have also studied the convergence of this kind of rational approximant with the model. Thus, in this case, we have placed the $P^L_1$ pole at $s_p=\hat{M}^2_\rho$ and found a pattern similar to that in Table \[table1\]. For $P^6_1$, the Padé-Type coefficient $a_1$ differs by $2.5\%$ from its exact value, $a_2$ by $16\%$ and $a_3$ by $40\%$. Based on the previous results, we will take the values in Table \[table1\] as a rough estimate of the systematic uncertainties when fitting to the experimental data in the following sections.
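The exact coefficients in the last column of Table \[table1\] follow from Eq. (\[model\]) by expanding the exponent in powers of $Q^2$; in particular $a_1=\frac{1}{\pi}\int_{4\hat m_\pi^2}^{\infty}\delta(t)/t^2\,dt$. The following self-contained numerical sketch (ours, not from the paper; a plain midpoint rule with the substitution $t=4\hat m_\pi^2/x$ to map the infinite range onto $(0,1]$) evaluates this sum rule with the parameters of Eq. (\[param\]):

```python
import math

# Numerical evaluation (ours) of the exact Taylor coefficient
#   a_1 = (1/pi) * Integral_{4m^2}^inf delta(t)/t^2 dt
# implied by the once-subtracted Omnes representation, Eq. (model).
GAMMA0 = 0.15   # GeV
M2 = 0.6        # GeV^2, resonance mass squared
T0 = 0.1        # GeV^2, threshold 4*mhat_pi^2

def delta(t):
    """Phase shift of Eq. (model2) with the t-dependent width of Eq. (width)."""
    if t <= T0:
        return 0.0
    sigma3 = (1.0 - T0 / t) ** 1.5
    sigma3_res = (1.0 - T0 / M2) ** 1.5
    gamma_t = GAMMA0 * (t / M2) * sigma3 / sigma3_res
    # atan2 keeps the phase on the correct branch above the resonance
    return math.atan2(math.sqrt(M2) * gamma_t, M2 - t)

# Substitution t = T0/x maps [T0, inf) onto (0, 1]:
#   Integral delta(t)/t^2 dt = (1/T0) * Integral_0^1 delta(T0/x) dx
N = 20000
integral = sum(delta(T0 / ((i + 0.5) / N)) for i in range(N)) / N / T0

a1 = integral / math.pi
print(a1)  # should land close to the exact 1.685 GeV^-2 quoted in Table 1
```

The same expansion gives $a_2$ as a combination of the $1/t^3$ moment and $a_1^2/2$, so the whole exact column of the table can be generated this way.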
Since, as we will see, the best fit to the experimental data comes from the Padé $P^4_1$, we will take the error in Table \[table1\] from this Padé as a reasonable estimate and add to the final error an extra systematic uncertainty of $1.5\%$ and $10\%$ for $a_1$ and $a_2$ (respectively). The pion vector form factor {#sec:FF} =========================== We will use all the available experimental data in the space-like region, which may be found in Refs. [@Amendolia]-[@Dally]. These data range in momentum from $Q^2=0.015$ up to 10 GeV$^2$. As discussed in the introduction, the prominent role of the rho meson contribution motivates that we start with the $P^{L}_{1}$ Padé sequence. Padé Approximants $P^{L}_{1}$ {#sec:PAL1} ----------------------------- ![[The sequence of $P^L_1$ PAs is compared to the available space-like data [@Amendolia]-[@Dally]: $P^0_1$ (brown dashed), $P^1_1$ (green thick-dashed), $P^2_1$ (orange dot-dashed), $P^3_1$ (blue long-dashed), $P^4_1$ (red solid).]{}[]{data-label="fig:VFF"}](DataAllFQ2.eps){width="13cm"} Without any loss of generality, a $P^{L}_{1}$ Padé is given by $$P_1^L(Q^2) \,\, \,= \,\,\, 1\, +\, \sum_{k=1}^{L-1}a_k (-Q^2)^{k} \,\, + \, (-Q^2)^{L} \, \frac{\ a_L}{1+{\frac{\displaystyle a_{L+1} }{\displaystyle a_L}} \, Q^2}\ , \label{PL1}$$ where the vector current conservation condition $P^L_1(0)=1$ has been imposed and the coefficients $a_{k}$ are the low-energy coefficients of the corresponding Taylor expansion of the VFF (compare with (\[expmodel\]) for the case of the model in the previous section). The fit of $P^L_1$ to the space-like data points in Refs. [@Amendolia]-[@Dally] determines the coefficients $a_{k}$ that best interpolate them. According to Ref. [@brodsky-lepage], the form factor is supposed to fall off as $1/Q^2$ (up to logarithms) at large values of $Q^2$. This means that, for any value of $L$, one may expect to obtain a good fit only up to a finite value of $Q^2$, but not for asymptotically large momentum.
This is clearly seen in Fig.  \[fig:VFF\], where the Padé sequence $P^L_1$ is compared to the data up to $L=4$. Fig. \[fig:a1PL1\] shows the evolution of the fit results for the Taylor coefficients $a_1$ and $a_2$ for the $P^L_1$ PA from $L=0$ up to $L=4$. As one can see, after a few Padés these coefficients become stable. Since the experimental data have non zero error it is only possible to fit a $P^L_1$ PA up to a certain value for $L$. From this order on, the large error bars in the highest coefficient in the numerator polynomial make it compatible with zero and, therefore, it no longer makes sense to talk about a new element in the sequence. For the data in Refs.  [@Amendolia]-[@Dally], this happened at $L=4$ and this is why our plots stop at this value. Therefore, from the PA $P^4_1$ we obtain our best fit and, upon expansion around $Q^2=0$, this yields $$a_1\, =\, 1.92 \pm 0.03\,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.49 \pm 0.26\,\,\mbox{GeV}^{-4} \, ;$$ with a $\chi^2/\mathrm{dof}=117/90$. ![[$a_1$ and $a_2$ Taylor coefficients for the $P^L_1$ PA sequence. ]{}[]{data-label="fig:a1PL1"}](a1-PL1.eps "fig:"){width="7cm"} ![[$a_1$ and $a_2$ Taylor coefficients for the $P^L_1$ PA sequence. ]{}[]{data-label="fig:a1PL1"}](a2-PL1.eps "fig:"){width="7cm"} Eq. (\[PL1\]) shows that the pole of each $P^L_1$ PA is determined by the ratio $s_p=a_L/a_{L+1}$. This ratio is shown in Fig. \[fig:spPL1\], together with a gray band whose width is given by $\pm M_\rho\Gamma_\rho$ for comparison. From this figure one can see that the position of the pole of the PA is close to the physical value $M_\rho^2$ [@PDG], although it does not necessarily agree with it, as we already saw in the model of the previous section. ![[Position $s_p$ of the pole for the different $P^L_1$. The range with the physical values $M_\rho^2\pm M_\rho\Gamma_\rho$ is shown (gray band) for comparison. 
]{}[]{data-label="fig:spPL1"}](sp-L1.eps){width="6.5cm"} Comment on $P^L_2$ Padés {#sec:PAL2} ------------------------ Although the time-like data of the pion form factor is clearly dominated by the $\rho(770)$ contribution, consideration of two-pole $P^L_2$ PAs will give us a way to assess any possible systematic bias in our previous analysis, which was limited to only single-pole PAs. We have found that the results of the fits of $P^L_2$ PAs to the data tend to reproduce the VMD pattern found for the $P^L_1$ PAs in the previous section. The $P^L_2$ PAs place the first of the two poles around the rho mass, while the second wanders around the complex momentum plane together with a close-by zero in the numerator. This association of a pole and a close-by zero is what is called a “defect” in the mathematical literature [@Baker2]. A defect is only a local perturbation and, at any finite distance from it, its effect is essentially negligible. This has the net effect that the $P^L_2$ Padé in the Euclidean region looks just like a $P^L_1$ approximant and, therefore, yields essentially the same results. For example, for the $P^2_2$, one gets $$a_1\, =\, 1.924 \pm 0.029 \,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.50 \pm 0.14 \,\,\mbox{GeV}^{-4} \, ,$$ with a $\chi^2/\mathrm{dof}= 120/92$. Padé Type and Partial Padé Approximants {#sec:PTPP} --------------------------------------- Besides the ordinary Padé Approximants one may consider other kinds of rational approximants. These are the Padé Type and Partial Padé Approximants [@PerisMasjuan07; @math; @PerisMasjuan08]. In the Padé Type Approximants (PTAs) the poles of the Padé are fixed to certain particular values, which in this context are naturally the physical masses. On the other hand, in the Partial Padé Approximants (PPAs) one has an intermediate situation between the PAs and the PTAs in which some poles are fixed while others are left as free parameters to fit.
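For concreteness, Eq. (\[PL1\]) can be coded directly. The sketch below (ours) feeds it the Taylor coefficients of a simple two-pole toy function, an assumption chosen only so that the exact answer is known; it checks that the approximant reproduces the series through order $L+1$ and that the pole $s_p=a_L/a_{L+1}$ falls between the two true poles.

```python
# Sketch (ours) of the P^L_1 approximant of Eq. (PL1): polynomial part plus a
# single-pole tail whose pole sits at s_p = a_L / a_{L+1}.

def p_l1(q2, a, L):
    """Evaluate P^L_1(Q^2) from Taylor coefficients a[1], a[2], ... (a[0] = 1)."""
    poly = 1.0 + sum(a[k] * (-q2) ** k for k in range(1, L))
    tail = (-q2) ** L * a[L] / (1.0 + (a[L + 1] / a[L]) * q2)
    return poly + tail

# Toy two-pole "form factor" (illustrative assumption, not the pion VFF):
#   f(Q^2) = w/(1 + Q^2) + (1 - w)/(1 + Q^2/2)  =>  a_k = w + (1 - w)/2^k
w = 0.7
f = lambda q2: w / (1.0 + q2) + (1.0 - w) / (1.0 + q2 / 2.0)
a = [w + (1.0 - w) / 2 ** k for k in range(10)]

L = 2
q2 = 0.01
mismatch = abs(f(q2) - p_l1(q2, a, L))  # O(q2^(L+2)), tiny at small q2
s_pole = a[L] / a[L + 1]                # lies between the true poles 1 and 2
print(mismatch, s_pole)
```

The pole of the approximant interpolates between the poles of the toy function rather than sitting on either one, which is the same behavior as the $s_p$ row of Table \[table1\].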
![Low-energy coefficient $a_1$ from the $T^L_1$ Padé-Type sequence[]{data-label="fig:a1PTL1"}](PTa1final.eps "fig:"){width="8cm"}\ Since the value of the physical rho mass is known ($M_{\rho}=775.5$ MeV), it is natural to attempt a fit of PTAs to the data with a pole fixed at that mass. The corresponding sequence will be called $T^L_1$. This has the obvious advantage that the number of parameters in the fit decreases by one and allows one to go a little further in the sequence. Our best value is then given by the Padé Type Approximant $T^5_1$, whose expansion around $Q^2=0$ yields the following values for the Taylor coefficients: $$a_1\, =\, 1.90 \pm 0.03\,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.28 \pm 0.09 \,\,\mbox{GeV}^{-4} \, ,$$ with a $\chi^2/\mathrm{dof}=118/90$. The previous analysis of PTAs may be extended by making further use of our knowledge of the vector spectroscopy [@PDG]. For instance, by taking $M_{\rho}=775.5$ MeV, $M_{\rho'}=1459$ MeV and $M_{\rho''}=1720$ MeV,[^4] we may construct further Padé-Type sequences of the form $T^L_2$ and $T^L_3$. In the PTA sequence $T^L_2$ one needs to provide the value of two poles. For the first pole, the natural choice is $M_{\rho}^2$. For the second pole, we found that choosing either $M_{\rho'}^2$ or $M_{\rho''}^2$ (the second vector excitation) does not make any difference. Both outcomes are compared in Fig. (\[fig:a1PT2\]). Using $M_{\rho'}^2$, we found that the $T^3_2$ PTA yields the best values as $$a_1\, =\, 1.902 \pm 0.024\,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.29 \pm 0.07 \,\,\mbox{GeV}^{-4} \, ,$$ with a $\chi^2/\mathrm{dof}=118/92$. Using $M_{\rho''}^{2}$ as the second pole one also gets the best value from the $T^3_2$ PTA, with the following results: $$a_1\, =\, 1.899 \pm 0.023\,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.27 \pm 0.06 \,\,\mbox{GeV}^{-4} \, ,$$ with a $\chi^2/\mathrm{dof}=119/92$. We find the stability of the results for the coefficients $a_{1,2}$ quite reassuring. 
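A practical aside on the PTAs just used (our remark, not from the paper): once the pole is fixed in advance, the numerator coefficients enter linearly, so the fit reduces to ordinary linear least squares. A sketch for the lowest approximant $T^1_1(Q^2)=(1+cQ^2)/(1+Q^2/s)$, applied to toy data from an assumed two-pole model with known slope:

```python
# Sketch (ours): with the pole fixed at s, as in a Pade-Type approximant,
# T^1_1(Q^2) = (1 + c Q^2)/(1 + Q^2/s) is linear in c, so c follows from
# ordinary least squares; the Taylor slope is then a_1 = 1/s - c.

def fit_t11(q2_pts, f_vals, s):
    # Rearranged model: F*(1 + Q^2/s) - 1 = c * Q^2  (linear in c)
    y = [fv * (1.0 + q2 / s) - 1.0 for q2, fv in zip(q2_pts, f_vals)]
    c = sum(q * v for q, v in zip(q2_pts, y)) / sum(q * q for q in q2_pts)
    return 1.0 / s - c  # a_1 of the fitted approximant

# Toy data from a two-pole model with known a_1 = 0.9/0.6 + 0.1/2.0 = 1.55
f = lambda q2: 0.9 / (1.0 + q2 / 0.6) + 0.1 / (1.0 + q2 / 2.0)
q2_grid = [0.01 * (i + 1) for i in range(25)]  # low-momentum window (GeV^2)
a1_fit = fit_t11(q2_grid, [f(q) for q in q2_grid], s=0.6)
print(a1_fit)  # close to the true slope 1.55
```

This linearity is one reason fixing poles at the physical masses lets the PTA sequence be pushed one order further than the ordinary PAs.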
![[ Low energy coefficient $a_1$ for the $T^L_2$ Padé-Type sequence with $M_{\rho}$ and $M_{\rho'}$ (left), and with $M_{\rho}$ and $M_{\rho''}$ (right). ]{}[]{data-label="fig:a1PT2"}](PT2a1final.eps "fig:"){width="7cm"} ![[ Low energy coefficient $a_1$ for the $T^L_2$ Padé-Type sequence with $M_{\rho}$ and $M_{\rho'}$ (left), and with $M_{\rho}$ and $M_{\rho''}$ (right). ]{}[]{data-label="fig:a1PT2"}](PlotPT2rhoprime.eps "fig:"){width="7cm"} We have also performed an analysis of the PTA sequence $T^L_3$, with similar conclusions. From the $T^3_3$ we obtain the following values for the coefficients: $$a_1\, =\, 1.904 \pm 0.023\,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.29 \pm 0.09 \,\,\mbox{GeV}^{-4} \, ,$$ with a $\chi^2/\mathrm{dof}=119/92$. Finally, to complete our analysis, we will also consider Partial Padé Approximants, in which only part of the denominator is given in advance. In particular, we study the PPA sequence $P^L_{1,1}$ [^5] in which the first pole is given by $M_{\rho}^2$ and the other is left free. The best determination of the Taylor coefficients is given by $P^2_{1,1}$, and they yield $$a_1\, =\, 1.902 \pm 0.029 \,\,\mbox{GeV}^{-2} \, , \qquad\qquad a_2\, =\, 3.28 \pm 0.09 \,\,\mbox{GeV}^{-4} \, ,$$ with the free pole of the PPA given by $M_{free}^2=(1.6 \pm 0.4 $ GeV$)^2$ and a $\chi^2/\mathrm{dof}=119/92$. Combined Results and conclusions {#sec:results} ================================ \[sec:conclusion\] Combining all the previous rational approximants results in an average given by $$a_1\, =\, 1.907\pm 0.010_{\mathrm{stat}} \pm 0.03_{\mathrm{syst}}\,\,\mbox{GeV}^{-2} \ , \ a_2\, =\, 3.30 \pm 0.03_{\mathrm{stat}} \pm 0.33_{\mathrm{syst}}\,\,\mbox{GeV}^{-4} \, .$$ The first error comes from combining the results of the different fits by means of a weighted average. On top of that, we have added what we believe to be a conservative estimate of the theoretical (i.e. systematic) error based on the analysis of the VFF model in Sec. 
\[sec:model\]. We expect the latter to give an estimate for the systematic uncertainty due to the approximation of the physical form factor with rational functions. For comparison with previous analyses, we also provide in Table \[table2\] the value of the quadratic radius, which is given by $\langle r^2 \rangle \, =\, 6 \, a_1$. In summary, in this work we have used rational approximants as a tool for fitting the pion vector form factor. Because these approximants are capable of describing the region of large momentum, they may be better suited than polynomials for a description of the space-like data. As our results in Table \[table2\] show, the errors achieved with these approximants are competitive with previous analyses existing in the literature.

                                  $\langle r^2\rangle$ (fm$^2$)                               $a_2$ (GeV$^{-4}$)
  ------------------------------- ----------------------------------------------------------- --------------------------------------------------------
  This work                       $0.445\pm 0.002_{\mathrm{stat}}\pm 0.007_{\mathrm{syst}}$   $3.30\pm 0.03_{\mathrm{stat}}\pm 0.33_{\mathrm{syst}}$
  CGL [@Colangelo; @ColangeloB]   $0.435\pm 0.005$                                            ...
  TY [@Yndurain]                  $0.432\pm 0.001$                                            $3.84\pm 0.02$
  BCT [@op6-VFF]                  $0.437\pm 0.016$                                            $3.85\pm 0.60$
  PP [@Portoles]                  $0.430\pm 0.012$                                            $3.79\pm 0.04$
  Lattice [@lattice]              $0.418\pm 0.031$                                            ...

  : [Our results for the quadratic radius $\langle r^2\rangle$ and second derivative $a_2$ are compared to other determinations [@Colangelo; @ColangeloB; @Yndurain; @op6-VFF; @Portoles; @lattice]. Our first error is statistical. The second one is systematic, based on the analysis of the VFF model in section 2.]{}[]{data-label="table2"}

**Acknowledgements** We would like to thank G. Huber and H. Blok for their help with the experimental data. This work has been supported by CICYT-FEDER-FPA2005-02211, SGR2005-00916, the Spanish Consolider-Ingenio 2010 Program CPAN (CSD2007-00042) and by the EU Contract No. MRTN-CT-2006-035482, “FLAVIAnet”. [99]{} S.
Weinberg, Physica [**96A**]{} (1979) 327. J. Gasser and H. Leutwyler, [*Annals Phys.*]{} [**158**]{} (1984) 142. J. Gasser and H. Leutwyler, Nucl. Phys. B [**250**]{} (1985) 465; H. Leutwyler, \[arXiv:hep-ph/0212324\];\ G. Colangelo, Nucl. Phys. Proc. Suppl. [**131**]{} (2004) 185-191. G. Colangelo, J. Gasser and H. Leutwyler, Nucl. Phys. B [**603**]{} (2001) 125. I. Caprini, G. Colangelo, J. Gasser and H. Leutwyler, Phys. Rev. D [**68**]{} (2003) 074006 \[arXiv:hep-ph/0306122\]. J.F. de Troconiz and F.J. Yndurain, Phys. Rev. D [**65**]{} (2002) 093001;\ Phys. Rev. D [**71**]{} (2005) 073008. F. Guerrero and A. Pich, Phys. Lett. B [**412**]{} (1997) 382-388. A. Pich and J. Portolés, Phys. Rev. D [**63**]{} (2001) 093005. I. Caprini, Eur. Phys. J. C [**13**]{} (2000) 471 \[arXiv:hep-ph/9907227\]; B. Ananthanarayan and S. Ramanan, Eur. Phys. J. C [**54**]{} (2008) 461 \[arXiv:0801.2023 \[hep-ph\]\]. G.A. Baker and P. Graves-Morris, [*Padé Approximants, Encyclopedia of Mathematics and its Applications*]{}, Cambridge Univ. Press, 1996; chapter 3, sections 3.1 and 3.2. C. Bender and S. Orszag, *Advanced Mathematical Methods for Scientists and Engineers I: asymptotic methods and perturbation theory*, Springer 1999, section 8.6. First book in Ref. [@Baker], section 5.4, Theorem 5.4.2. See also, S. Peris, Phys. Rev. D [**74**]{} (2006) 054013 \[arXiv:hep-ph/0603190\]. C. Pommerenke, *Padé approximants and convergence in capacity*, J. Math. Anal. Appl. **41** (1973) 775. Reviewed in the first book of Ref. [@Baker], section 6.5, Theorem 6.5.4, Corollary 1. See also, P. Masjuan and S. Peris, JHEP [**0705**]{} (2007) 040 \[arXiv:0704.1247 \[hep-ph\]\]. P. Masjuan, J. J. Sanz-Cillero and J. Virto, arXiv:0805.3291 \[hep-ph\]. C. Brezinski and J. Van Inseghem, [*Padé Approximations, Handbook of Numerical Analysis*]{}, P.G. Ciarlet and J.L. Lions (editors), North Holland, vol. III. See also, e.g., C. Diaz-Mendoza, P. Gonzalez-Vera and R. Orive, Appl. Num. Math.
**53** (2005) 39 and references therein. P. Masjuan and S. Peris, Phys. Lett.  B [**663**]{} (2008) 61 \[arXiv:0801.3558 \[hep-ph\]\]. S.R. Amendolia [*et al.*]{} (NA7 Collaboration), Nucl. Phys. B [**277**]{} (1986) 168. V. Tadevosyan [*et al.*]{} (JLab F(pi) Collaboration), Phys. Rev. C [**75**]{} (2007) 055205; T. Horn [*et al.*]{} (JLab F(pi)-2 Collaboration), Phys. Rev. Lett. [**97**]{} (2006) 192001;\ T. Horn [*et al.*]{} (JLab) \[arXiv:0707.1794 \[nucl-ex\]\]. C. N. Brown [*et al.*]{}, Phys. Rev. D [**8**]{} (1973) 92;\ C. J. Bebek [*et al.*]{}, Phys. Rev.  D [**9**]{} (1974) 1229.\ C. J. Bebek [*et al.*]{}, Phys. Rev.  D [**13**]{} (1976) 25.\ We take as input the reanalysis of these results and the final compilation performed in C. J. Bebek [*et al.*]{}, Phys. Rev. D [**17**]{} (1978) 1693. P. Brauel [*et al.*]{}, Z. Phys. C3, 101 (1979). For our input we took the reanalysis of these data performed in Ref. [@JLAB1]. Dally [*et al.*]{}, Phys. Rev. Lett. [**39**]{} (1977) 1176. D. Gomez Dumm, A. Pich and J. Portoles, Phys. Rev.  D [**62**]{} (2000) 054014 \[arXiv:hep-ph/0003320\]; J. J. Sanz-Cillero and A. Pich, Eur. Phys. J.  C [**27**]{} (2003) 587 \[arXiv:hep-ph/0208199\]. G. P. Lepage and S. J. Brodsky, Phys. Lett.  B [**87**]{} (1979) 359; Phys. Rev.  D [**22**]{} (1980) 2157; Phys. Rev.  D [**24**]{} (1981) 1808. W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G [**33**]{} (2006) 1. G.A. Baker, *Essentials of Padé Approximants*, Academic Press 1975; chapter 14, Corollary 14.3 . J. Bijnens, G. Colangelo and P. Talavera, JHEP [**05**]{} (1998) 014. P. A. Boyle [*et al.*]{}, arXiv:0804.3971 \[hep-lat\]. [^1]: Time-like data is provided by $\pi\pi$ production experiments and, consequently, they necessarily correspond to values of the momentum above the $\pi\pi$ cut, i.e. $ |Q^2|> 4m_\pi^2$ with $Q^2<0$. 
[^2]: Obviously, unlike the space-like data, one should not expect to reproduce the time-like data since a Padé Approximant contains only isolated poles and cannot reproduce a time-like cut. [^3]: Conventionally, without loss of generality, the polynomial in the denominator is normalized to unity at the origin. [^4]: As will be seen, results do not depend on the precise value chosen for these masses. [^5]: See Ref. [@PerisMasjuan07] for notation.
--- abstract: 'We experimentally study the propagation of circularly polarized light in the sub-diffusion regime by exploiting enhanced backscattering (EBS, also known as coherent backscattering) of light under low spatial coherence illumination. We demonstrate for the first time that circular polarization memory effect exists in EBS over a large range of scatterers’ sizes in this regime. We show that EBS measurements under low spatial coherence illumination from the helicity preserving and orthogonal helicity channels cross over as the mean free pathlength of light in media varies, and that the cross point indicates the transition from multiple to double scattering in EBS of light.' author: - 'Young L. Kim' - Prabhakar Pradhan - 'Min H. Kim' - Vadim Backman bibliography: - 'Cir\_pol\_memo.bib' title: | Circular polarization memory effect in enhanced backscattering of light\ under partially coherent illumination --- The circular polarization memory effect is an unexpected preservation of the initial helicity (or handedness) of circular polarization of multiply scattered light in scattering media consisting of large particles. Mackintosh *et al*. \[1\] first observed that the randomization of the helicity required unexpectedly far more scattering events than did the randomization of its propagation in media of large scatterers. Bicout *et al*. \[2\] demonstrated that the memory effect can be shown by measuring the degree of circular polarization of transmitted light in slabs. Using numerical simulations of vector radiative transport equations, Kim and Moscoso \[3\] explained the effect as the result of successive near-forward scattering events in large scatterers. Recently, Xu and Alfano \[4\] derived a characteristic length of the helicity loss in the diffuse regime and showed that this characteristic length was greater than the transport mean free pathlength $l_s^*$ for the scatterers of large sizes. 
Indeed, the propagation of circularly polarized light in random media has been investigated mainly using either numerical simulations or experiments in the diffusion regime, in part because its experimental investigation in the sub-diffusion regime has been extremely challenging. Therefore, the experimental investigation of circularly polarized light in the low-order scattering (or short traveling photons) regime using enhanced backscattering (EBS, also known as coherent backscattering) of light under low spatial coherence illumination will provide a better understanding of its mechanisms and of the polarization properties of EBS as well. EBS is a self-interference effect in elastic light scattering, which gives rise to an enhanced scattered intensity in the backward direction. In our previous publications,\[5-8\] we demonstrated that low spatial coherence illumination (the spatial coherence length of illumination $L_{sc} \ll l_s^*$) dephases the time-reversed partial waves outside its finite coherence area, rejecting long traveling waves in weakly scattering media. EBS under low spatial coherence illumination ($L_{sc} \ll l_s^*$) is henceforth referred to as low-coherence EBS (LEBS). The angular profile of LEBS, $I_{LEBS}(\theta)$, can be expressed as an integral transform of the radial probability distribution $P(r)$ of the conjugated time-reversed light paths:\[6-8\] $$I_{LEBS}(\theta)\propto \int^\infty_0 C(r)rP(r)\exp(i2\pi r \theta / \lambda)dr,$$ where $r$ is the radial distance from the first to the last points on a time-reversed light path and $C(r) =|2J_1(r/L_{sc})/(r/L_{sc})|$ is the degree of spatial coherence of illumination with the first order Bessel function $J_1$.\[9\] As $C(r)$ is a decaying function of $r$, it acts as a spatial filter, allowing only photons emerging within its coherence areas ($\sim L_{sc}^2$) to contribute to $P(r)$.
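Eq. (1) can be illustrated numerically. In the sketch below (ours), $P(r)\propto\exp(-r/l_s)$ is a toy assumption, not the measured distribution; the point is only that $C(r)$ cuts off the transform at $r\sim L_{sc}$, so the real part of the integral peaks at $\theta=0$ and decays over sub-mrad angles.

```python
import math

# Sketch of Eq. (1) with illustrative parameters; P(r) ~ exp(-r/ls) is a toy
# assumption standing in for the true radial distribution.
LAM, LSC, LS = 0.52, 110.0, 100.0  # wavelength, coherence length, ls (um)

def j1(x):
    """Bessel J1 via its integral representation (midpoint rule)."""
    n = 200
    return sum(math.cos((k + 0.5) * math.pi / n
                        - x * math.sin((k + 0.5) * math.pi / n))
               for k in range(n)) / n

def coherence(r):
    """Degree of spatial coherence C(r) = |2 J1(r/Lsc)/(r/Lsc)|."""
    x = r / LSC
    return abs(2.0 * j1(x) / x) if x > 0 else 1.0

# Precompute the kernel C(r) * r * P(r) on a radial grid up to 600 um
DR = 0.5
R = [(i + 0.5) * DR for i in range(1200)]
KERNEL = [coherence(r) * r * math.exp(-r / LS) for r in R]

def i_lebs(theta):
    """Real part of the transform in Eq. (1), up to normalization."""
    return sum(k * math.cos(2.0 * math.pi * r * theta / LAM)
               for r, k in zip(R, KERNEL)) * DR

print(i_lebs(0.0), i_lebs(2e-4), i_lebs(5e-4))  # decreasing away from theta=0
```

Because the kernel is non-negative and the cosine factor is bounded by one, the profile is mathematically guaranteed to be maximal at exactly backscattering.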
Therefore, LEBS provides the information about $P(r)$ for a small $r$ ($<\sim100~\mu m$) that is on the order of $L_{sc}$, as a tool for the investigation of light propagation in the sub-diffusion regime. ![Representative $I_{LEBS}(\theta)$ with $L_{sc} = 110~\mu m$ obtained from the suspensions of microspheres ($a = 0.15~\mu m$, $ka = 2.4$, and $g = 0.73$). We obtained $I_{LEBS}(\theta)$ for various $l_s^* = 67 - 1056~\mu m$ ($l_s = 18 - 285~\mu m$) from the (h$||$h) and (h$\bot$h) channels. The insets show the enhancement factors $E$. ](Image1) To investigate the helicity preservation of circularly polarized light in the sub-diffusion regime by exploiting LEBS, we used the experimental setup described in detail elsewhere.\[5,6\] In brief, a beam of broadband cw light from a 100 W xenon lamp (Spectra-Physics Oriel) was collimated using a 4-$f$ lens system, polarized, and delivered onto a sample with an illumination diameter of $3~mm$. By changing the size of the aperture in the 4-$f$ lens system, we varied the spatial coherence length $L_{sc}$ of the incident light from $35~\mu m$ to $200~\mu m$. The temporal coherence length of illumination was $0.7~\mu m$ with the central wavelength = $520~nm$ and its FWHM = $135~nm$. The circular polarization of LEBS signals was analyzed by means of an achromatic quarter-wave plate (Karl Lambrecht) positioned between the beam splitter and the sample. The light backscattered by the sample was collected by a sequence of a lens, a linear analyzer (Lambda Research Optics), and a CCD camera (Princeton Instruments). We collected LEBS signals from two different circular polarization channels: the helicity preserving (h$||$h) channel and the orthogonal helicity (h$\bot$h) channel. In the (h$||$h) channel, the helicity of the detected circular polarization was the same as that of the incident circular polarization.
In the (h$\bot$h) channel, the helicity of the detected circular polarization was orthogonal to that of the incident circular polarization. In our experiments, we used media consisting of aqueous suspensions of polystyrene microspheres ($n_{sphere} = 1.599$ and $n_{water} = 1.335$ at $520~nm$) (Duke Scientific) of various radii $a$ = 0.05, 0.10, 0.15, 0.25, and 0.45 $\mu m$ (the size parameter $ka = 0.8 - 7.2$ and the anisotropy factor $g = 0.11 - 0.92$). The dimension of the samples was $\pi \times 252~mm^2 \times 50~mm$. Using Mie theory,\[10\] we calculated the optical properties of the samples such as the scattering mean free pathlength of light in the medium $l_{s}$ ($= 1/\mu_s$, where $\mu_s$ is the scattering coefficient), the anisotropy factor $g$ (= the average cosine of the phase function), and the transport mean free pathlength $l_{s}^*$ ($= 1/\mu_s^* = l_{s}/(1 - g)$, where $\mu_s^*$ is the reduced scattering coefficient). We also varied $L_{sc}$ from 40 to 110 $\mu m$. We used $g$ as a metric of the tendency of light to be scattered in the forward direction. ![$I_{LEBS}$ in the backward direction from Fig. 1. (a) $I_{LEBS}^{||}(\theta = 0)$ and $I_{LEBS}^{\bot}(\theta = 0)$ cross over at $l_s^* = 408~\mu m$ ($l_s = 110~\mu m$). The lines are third-degree polynomial fitting. (b) Inset: $I_{LEBS}^{||}(\theta)$ and $I_{LEBS}^{\bot}(\theta)$ at the cross point. $C(r)rP(r)$ obtained by calculating the inverse Fourier transform of $I_{LEBS}(\theta)$ reveals helicity preservation in the (h$||$h) channel when $r > \sim50~\mu m$. ](Image2) The total experimental backscattered intensity $I_{T}$ can be expressed as $I_T = I_{SS} + I_{MS} + I_{EBS}$, where $I_{SS}$, $I_{MS}$, and $I_{EBS}$ are the contributions from single scattering, multiple scattering, and interference from the time-reversed waves (i.e., EBS), respectively.
In media of relatively small particles (radius $a\leq\lambda$), the angular dependence of $I_T(\theta)$ around the backward direction is primarily due to the interference term, while the multiple and single scattering terms have weaker angular dependence. Thus, $I_{SS} + I_{MS}$ ($=$ the baseline intensity) can be measured at large backscattering angles ($\theta > 3^{\circ}$). Conventionally, the enhancement factor $E = 1 + I_{EBS}(\theta=0^{\circ})/(I_{SS}+I_{MS})$ is used. However, in studies of circularly polarized light, the enhancement factor should be modified, because the intensity of multiple scattering can differ between the two channels and because in the (h$||$h) channel, single scattering is suppressed due to the helicity flip. Thus, in our studies, we calculated $I_{EBS}$ by subtracting $I_{SS} + I_{MS}$ from $I_T$. Figure 1 shows representative LEBS intensity profiles $I_{LEBS}(\theta)$ from the suspension of the microspheres with $a$ = 0.15 $\mu m$ ($ka = 2.4$ and $g = 0.73$ at $\lambda = 520~nm$). $I_{LEBS}^{||}$ and $I_{LEBS}^{\bot}$ denote the intensities from the (h$||$h) and (h$\bot$h) channels, respectively. We varied $l_{s}^*$ from 67 to 1056 $\mu m$ ($l_s$ from 18 to 285 $\mu m$) with $L_{sc} = 110~\mu m$. In Fig. 2(a), we plot $I_{LEBS}^{||}(\theta = 0)$ and $I_{LEBS}^{\bot}(\theta = 0)$ as a function of $l_s^*$ (the lines are third-degree polynomial fits), showing two characteristic regimes: (i) the multiple scattering regime ($L_{sc} \gg l_s^*$) and (ii) the minimal scattering regime ($L_{sc} \ll l_s^*$). As expected, in the multiple scattering regime (i), $I_{LEBS}^{||}$ is higher than $I_{LEBS}^{\bot}$ because of the reciprocity principle in the (h$||$h) channel. On the other hand, in the minimal scattering regime (ii), a priori surprisingly, $I_{LEBS}^{||}$ is lower than $I_{LEBS}^{\bot}$.
This is because in this regime, LEBS originates mainly from the time-reversed paths with the minimal number of scattering events in EBS (i.e., mainly double scattering) in a narrow, elongated coherence volume.\[8\] In this case, the direction of light scattered by one of the scatterers should be close to the forward direction, while the direction of the light scattered by the other scatterer should be close to backscattering, which flips the helicity of circular polarization. After the cross point, the difference between $I_{LEBS}^{||}$ and $I_{LEBS}^{\bot}$ remains nearly constant, indicating that LEBS reaches the asymptotic regime of double scattering. More importantly, Fig. 2(a) shows that $I_{LEBS}^{||}$ and $I_{LEBS}^{\bot}$ cross over at $l_s^* = 408~\mu m$ ($l_s = 110~\mu m$). The cross point can be understood in the context of the circular polarization memory effect as follows. As shown in the inset of Fig. 2(b), at the cross point, $\int^\infty_0 C(r)P^{||}(r)\,dr=\int^\infty_0 C(r)P^{\bot}(r)\,dr$, where $P^{||}(r)$ and $P^{\bot}(r)$ are the radial intensity distributions of the (h${||}$h) and (h${\bot}$h) channels, respectively. Thus, the cross point $R_i$ determines the optical properties ($l_s^*$ or $l_s$) such that $\int^{\sim L_{sc}}_0 P^{||}(r)\,dr=\int^{\sim L_{sc}}_0 P^{\bot}(r)\,dr$. In other words, $R_i$ defines the $l_s^*$ or $l_s$ at which the integrals of $P^{||}(r)$ and $P^{\bot}(r)$ within $L_{sc}$ are equal, and thus the degree of circular polarization within $L_{sc}$ becomes zero as well. As shown in Fig. 2(b), $C(r)rP(r)$, which can be obtained by the inverse Fourier transform of $I_{LEBS}(\theta)$ using Eq. (1), reveals more detailed information about the helicity preservation. For small $r$, $P^{||}(r) < P^{\bot}(r)$. For $r > \sim50~\mu m$ ($\sim l_s/2$), $P^{||}(r) > P^{\bot}(r)$, showing that the initial helicity is preserved. This is because the successive scattering events of the highly forward scatterers direct photons away from the incident point of illumination, while maintaining the initial helicity.
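The inverse-transform step used to obtain $C(r)rP(r)$ from $I_{LEBS}(\theta)$ can be sketched as below. This is a rough illustration, not the authors' processing code: normalization constants are dropped, and a cosine kernel is assumed for the symmetric angular profile.

```python
import numpy as np

def radial_distribution(theta_rad, I_lebs, wavelength_um, r_max_um=200.0):
    """Recover C(r) r P(r) (up to a constant) from an LEBS profile.

    theta_rad     : backscattering angles (rad)
    I_lebs        : measured LEBS intensities at those angles
    wavelength_um : central wavelength (um)
    """
    q = 2.0 * np.pi * np.sin(np.asarray(theta_rad)) / wavelength_um
    r = np.linspace(0.0, r_max_um, 400)      # radial distances (um)
    kernel = np.cos(np.outer(r, q))          # discrete inverse cosine transform
    crp = kernel @ np.asarray(I_lebs)
    return r, crp
```

Comparing the curves obtained from the two polarization channels would reproduce the qualitative crossover near $r \sim l_s/2$ seen in Fig. 2(b).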
![Dependence of $R_i$ on $L_{sc}$ and $g$ in LEBS measurements. (a) Plot of $R_i$ (in the units of $l_s^*$) versus $L_{sc}$ for a fixed $g$ = 0.86 ($ka$ = 4.0). (b) $R_i$ (in the units of $l_s^*$)/$L_{sc}$ as a function of $g$. (c) $R_i$ is recalculated in the units of $l_s$. ](Image3) As discussed above, the cross point $R_i$ is determined by both the spatial coherence length of illumination $L_{sc}$ and the optical properties of the media. Thus, we investigated the relationship between $L_{sc}$ and $R_i$ using a fixed scatterer size of $a = 0.25~\mu m$ ($ka = 4.0$ and $g = 0.86$). Fig. 3(a) shows that $R_i$ (in the units of $l_s^*$) is linearly proportional to $L_{sc}$ and that small reduced scattering coefficients $\mu_s^*$ ($= 1/l_s^*$) are necessary to reach a cross point as $L_{sc}$ increases. Because the linear fitting line passes through the origin (the 95% confidence interval of the intercept of the $L_{sc}$ axis is \[$-32~\mu m$, $44~\mu m$\]), $R_i$ can be normalized by $L_{sc}$. Next, to elucidate how the tendency of the propagation direction (i.e., $g$) plays a role in the memory effect, we further studied the effect of $g$ on $R_i$ using size parameters $ka$ ranging from 0.8 to 7.2 ($g = 0.11 - 0.92$) with a fixed $L_{sc} = 110~\mu m$. In Fig. 3(b), we plot $R_i$ (in the units of $l_s^*$) versus $g$. $R_i$ increases dramatically as $g$ increases, in agreement with the conventional notion that, because the memory effect is stronger in media of larger scatterers, a small $\mu_s^*$ is required for the effect to occur in media of larger particles. On the other hand, when we plot $R_i$ in the units of $l_s$ versus $g$, as shown in Fig. 3(c), $R_i$ does not depend strongly on $g$. This result shows that when $l_s$ is on the order of $L_{sc}$, the helicity of circular polarization is maintained over a large range of size parameters. Moreover, Fig.
3(c) demonstrates that the average distance between single scattering events (i.e., $l_s$) is the main characteristic length scale governing the memory effect in the sub-diffusion regime. In summary, we experimentally investigated for the first time the circular polarization memory effect in the sub-diffusion regime by taking advantage of LEBS, which suppresses time-reversed waves beyond the spatial coherence area and thus isolates low-order scattering in weakly scattering media. We reported that LEBS introduces a new length scale (i.e., the cross point) at which the degree of circular polarization becomes zero; this scale is determined by both the spatial coherence length of illumination and the optical properties of the media. Using the cross point of the LEBS measurements from the (h$||$h) and (h$\bot$h) channels, we further elucidated the memory effect in the sub-diffusion regime. Our results demonstrate that the memory effect exists in the EBS phenomenon. Furthermore, we show that the cross point marks the transition from multiple scattering to double scattering events in this regime. Finally, our results will further facilitate the understanding of the propagation of circularly polarized light in weakly scattering media such as biological tissue. ———————————————–\ 1. F. C. Mackintosh, J. X. Zhu, D. J. Pine, and D. A. Weitz, “Polarization memory of multiply scattered light,” Phys. Rev. B 40, 9342 (1989).\ 2. D. Bicout, C. Brosseau, A. S. Martinez, and J. M. Schmitt, “Depolarization of multiply scattered waves by spherical diffusers: influence of the size parameter,” Phys. Rev. E 49, 1767 (1994).\ 3. A. D. Kim and M. Moscoso, “Backscattering of circularly polarized pulses,” Opt. Lett. 27, 1589 (2002).\ 4. M. Xu and R. R. Alfano, “Circular polarization memory of light,” Phys. Rev. E 72, 065601(R) (2005).\ 5. Y. L. Kim, Y. Liu, V. M. Turzhitsky, H. K. Roy, R. K. Wali, and V. Backman, “Coherent backscattering spectroscopy,” Opt. Lett.
29, 1906 (2004).\ 6. Y. L. Kim, Y. Liu, R. K. Wali, H. K. Roy, and V. Backman, “Low-coherent backscattering spectroscopy for tissue characterization,” Appl. Opt. 44, 366 (2005).\ 7. Y. L. Kim, Y. Liu, V. M. Turzhitsky, R. K. Wali, H. K. Roy, and V. Backman, “Depth-resolved low-coherence enhanced backscattering,” Opt. Lett. 30, 741 (2005).\ 8. Y. L. Kim, P. Pradhan, H. Subramanian, Y. Liu, M. H. Kim, and V. Backman, “Origin of low-coherence enhanced backscattering,” Opt. Lett. 31, 1459 (2006).\ 9. M. Born and E. Wolf, Principles of optics: electromagnetic theory of propagation, interference and diffraction of light, 7th ed. (Cambridge University Press, Cambridge; New York, 1999).\ 10. H. C. van de Hulst, Light scattering by small particles (Dover Publications, New York, 1995).\
--- abstract: 'Holistic 3D indoor scene understanding refers to jointly recovering the i) object bounding boxes, ii) room layout, and iii) camera pose, all in 3D. Existing methods are either ineffective or tackle the problem only partially. In this paper, we propose an end-to-end model that *simultaneously* solves all three tasks in *real-time* given only a single RGB image. The essence of the proposed method is to improve the prediction by i) *parametrizing* the targets ([*e*.*g*.]{}, 3D boxes) instead of directly estimating the targets, and ii) *cooperative training* across different modules in contrast to training these modules individually. Specifically, we parametrize the 3D object bounding boxes by the predictions from several modules, [*i*.*e*.]{}, 3D camera pose and object attributes. The proposed method provides two major advantages: i) The parametrization helps maintain the consistency between the 2D image and the 3D world, thus largely reducing the prediction variances in 3D coordinates. ii) Constraints can be imposed on the parametrization to train different modules simultaneously. We call these constraints “cooperative losses” as they enable the joint training and inference. We employ three cooperative losses for 3D bounding boxes, 2D projections, and physical constraints to estimate a *geometrically consistent* and *physically plausible* 3D scene. Experiments on the SUN RGB-D dataset show that the proposed method significantly outperforms prior approaches on 3D object detection, 3D layout estimation, 3D camera pose estimation, and holistic scene understanding.'
bibliography: - 'nips\_2018.bib' title: 'Cooperative Holistic Scene Understanding: Unifying 3D Object, Layout, and Camera Pose Estimation' --- Introduction ============ Holistic 3D scene understanding from a single RGB image is a fundamental yet challenging computer vision problem, whereas humans can perform such tasks effortlessly within 200 ms [@potter1975meaning; @potter1976short; @schyns1994blobs; @thorpe1996speed]. The primary difficulty of holistic 3D scene understanding lies in the vast but ambiguous 3D information that must be recovered from a single RGB image. Such estimation includes three essential tasks: - The estimation of the 3D camera pose that captures the image. This component helps to maintain the *consistency* between the 2D image and the 3D world. - The estimation of the 3D room layout. Combined with the estimated 3D camera pose, it recovers the *global* geometry. - The estimation of the 3D bounding boxes for each object in the scene, recovering the *local* details. ![Overview of the proposed framework for cooperative holistic scene understanding. (a) We first detect 2D objects and generate their bounding boxes, given a single RGB image as the input, from which (b) we can estimate 3D object bounding boxes, 3D room layout, and 3D camera pose. The blue bounding box is the estimated 3D room layout. (c) We project 3D objects to the image plane with the learned camera pose, forcing the projection from the 3D estimation to be consistent with the 2D estimation.[]{data-label="fig:overview"}](framework){width="\linewidth"} Most current methods are either inefficient or tackle the problem only partially. Specifically, - Traditional methods [@gupta2010estimating; @zhao2011image; @zhao2013scene; @choi2013understanding; @schwing2013box; @zhang2014panocontext; @izadinia2016im2cad; @huang2018holistic] apply sampling or optimization methods to infer the geometry and semantics of indoor scenes.
However, those methods are computationally expensive; they usually take a long time to converge and can easily be trapped in an unsatisfactory local minimum, especially for cluttered indoor environments. Thus both stability and scalability become issues. - Recently, researchers have attempted to tackle this problem using deep learning. The most straightforward way is to directly predict the desired targets ([*e*.*g*.]{}, 3D room layouts or 3D bounding boxes) by training the individual modules separately with isolated losses for each module. Thereby, prior work [@mousavian20173d; @lee2017roomnet; @kehl2017ssd; @kundu20183d; @zou2018layoutnet; @liu2018planenet] either focuses on individual tasks, learns these tasks separately rather than jointly inferring all three, or considers only the inherent relations without explicitly modeling the connections among them [@tulsiani2017factoring]. - Another stream of approaches takes both an RGB-D image and the camera pose as the input [@lin2013holistic; @song2014sliding; @song2016deep; @song2017semantic; @deng2017amodal; @zou2017complete; @qi2017frustum; @lahoud20172d; @zhang2016deepcontext], in which the depth images provide sufficient geometric information, thereby relying less on the consistency among different modules. In this paper, we aim to address the missing piece in the literature: to recover a *geometrically consistent* and *physically plausible* 3D scene and jointly solve all three tasks in an *efficient* and *cooperative* way, only from a single RGB image. Specifically, we tackle three important problems: 1. *2D-3D consistency.* A good solution to the aforementioned three tasks should maintain a high consistency between the 2D image plane and the 3D world coordinate. How should we design a method to achieve such consistency? 2.
*Cooperation.* Psychological studies have shown that our biological perception system is extremely good at rapid scene understanding [@schyns1994blobs], particularly by fusing different visual cues [@landy1995measurement; @jacobs2002determines]. Such findings support the necessity of cooperatively solving all the holistic scene tasks together. Can we devise an algorithm that *cooperatively* solves these tasks, making different modules reinforce each other? 3. *Physically Plausible.* As humans, we excel at inferring physical attributes and dynamics [@kubricht2017intuitive]. Such a deep understanding of the physical environment is imperative, especially for an interactive agent ([*e*.*g*.]{}, a robot) to navigate the environment or collaborate with a human agent. How can the model estimate a 3D scene in a physically plausible fashion, or at least have some sense of physics? To address these issues, we propose a novel parametrization of the 3D bounding box as well as a set of cooperative losses. Specifically, we parametrize the 3D boxes by the predicted camera pose and object attributes from individual modules. Hence, we can construct the 3D boxes starting from the 2D box centers to maintain 2D-3D consistency, rather than predicting 3D coordinates directly or assuming a given camera pose, both of which break that consistency. Cooperative losses are further imposed on the parametrization in addition to the direct losses to enable the joint training of all the individual modules. Specifically, we employ three cooperative losses on the parametrization to constrain the 3D bounding boxes, projected 2D bounding boxes, and physical plausibility, respectively: - The 3D bounding box loss encourages accurate 3D estimation. - The differentiable 2D projection loss measures the consistency between 3D and 2D bounding boxes, which permits our networks to learn the 3D structures with only 2D annotations ([*i*.*e*.]{}, no 3D annotations are required).
In fact, we can directly supervise the learning process with 2D object annotations, using common sense about object sizes. - The physical plausibility loss penalizes the intersection between the reconstructed 3D object boxes and the 3D room layout, which prompts the networks to yield a physically plausible estimation. Fig. \[fig:overview\] shows the proposed framework for cooperative holistic scene understanding. Our method starts with the detection of 2D object bounding boxes from a single RGB image. Two branches of convolutional neural networks are employed to learn the 3D scene from both the image and 2D boxes: i) The *global geometry network* (GGN) learns the global geometry of the scene, predicting both the 3D room layout and the camera pose. ii) The *local object network* (LON) learns the object attributes, estimating the object pose, size, distance between the 3D box center and camera center, and the 2D offset from the 2D box center to the projected 3D box center on the image plane. The details are discussed in \[sec:method\]. By combining the camera pose from the GGN and object attributes from the LON, we can parametrize 3D bounding boxes, which enables joint learning of both GGN and LON with 2D and 3D supervision. Another benefit of the proposed parametrization is improved training stability, as it reduces the variance of the 3D box prediction, because i) the estimated 2D offset has relatively low variance, and ii) we adopt a hybrid classification and regression method, inspired by [@ren2015faster; @mousavian20173d; @qi2017frustum], to estimate the variables with large variances. We evaluate our method on the SUN RGB-D dataset [@song2015sun]. The proposed method outperforms previous methods on four tasks, including 3D layout estimation, 3D object detection, 3D camera pose estimation, and holistic scene understanding.
Our experiments demonstrate that a cooperative method performing holistic scene understanding tasks can significantly outperform existing methods tackling each task in isolation, further indicating the necessity of joint training. Our contributions are four-fold. i) We formulate an end-to-end model for 3D holistic scene understanding tasks. The essence of the proposed model is to cooperatively estimate 3D room layout, 3D camera pose, and 3D object bounding boxes. ii) We propose a novel parametrization of the 3D bounding boxes and integrate physical constraints, enabling the cooperative training of these tasks. iii) We bridge the gap between the 2D image plane and the 3D world by introducing a differentiable objective function between the 2D and 3D bounding boxes. iv) Our method significantly outperforms the state-of-the-art methods and runs in real-time. Method {#sec:method} ====== ![Illustration of (a) network architecture and (b) parametrization of 3D object bounding box.[]{data-label="fig:architecture"}](architecture){width="\linewidth"} In this section, we describe the parametrization of the 3D bounding boxes and the neural networks designed for 3D holistic scene understanding. The proposed model consists of two networks, shown in \[fig:architecture\]: a *global geometry network* (GGN) that estimates the 3D room layout and camera pose, and a *local object network* (LON) that infers the attributes of each object. Based on these two networks, we further formulate differentiable loss functions to train the two networks cooperatively. Parametrization {#sec:param} --------------- #### 3D Objects We use the 3D bounding box $X^W \in \mathbb{R}^{3\times 8}$ as the representation of the estimated 3D object in the world coordinate.
The 3D bounding box is described by its 3D center $C^{W} \in \mathbb{R}^3$, size $S^W \in \mathbb{R}^3$, and orientation $R(\theta^W) \in \mathbb{R}^{3 \times 3}$: $X^W = h(C^W, R(\theta^W), S^W)$, where $\theta^W$ is the heading angle along the up-axis, and $h(\cdot)$ is the function that composes the 3D bounding box. Without any depth information, estimating the 3D object center $C^{W}$ directly from the 2D image may result in a large variance of the 3D bounding box estimation. To alleviate this issue and bridge the gap between 2D and 3D object bounding boxes, we parametrize the 3D center $C^W$ by its corresponding 2D bounding box center $C^{I} \in \mathbb{R}^2$ on the image plane, the distance $D$ between the camera center and the 3D object center, the camera intrinsic parameter $K \in \mathbb{R}^{3 \times 3}$, and the camera extrinsic parameters $R(\phi, \psi) \in \mathbb{R}^{3 \times 3}$ and $T \in \mathbb{R}^3$, where $\phi$ and $\psi$ are the camera rotation angles. As illustrated in \[fig:architecture\](b), since each 2D bounding box and its corresponding 3D bounding box are both manually annotated, there is always an offset $\delta^I \in \mathbb{R}^2$ between the 2D box center and the projection of the 3D box center. Therefore, the 3D object center $C^W$ can be computed as $$C^W = T + DR(\phi, \psi)^{-1}\frac{K^{-1}\left[C^I + \delta^I, 1\right]^T}{\left\|K^{-1}\left[C^I + \delta^I, 1\right]^T\right\|}.$$ Since $T$ becomes $\Vec{0}$ when the data is captured from the first-person view, the above equation can be written as $C^W = p(C^I, \delta^I, D, \phi, \psi, K)$, where $p$ is a differentiable projection function. In this way, the parametrization of the 3D object bounding box unites the 3D object center $C^W$ and the 2D object center $C^I$, which helps maintain the 2D-3D consistency and reduces the variance of the 3D bounding box estimation. Moreover, it integrates both object attributes and camera pose, promoting the cooperative training of the two networks.
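The parametrization above reduces to a few lines of linear algebra. The sketch below (illustrative names, plain numpy rather than the training framework) implements $C^W = p(C^I, \delta^I, D, \phi, \psi, K)$ for the first-person case $T = \Vec{0}$:

```python
import numpy as np

def center_3d(c2d, delta, D, K, R):
    """3D object center from the 2D parametrization (assumes T = 0).

    c2d   : 2D bounding box center on the image plane (pixels)
    delta : offset from the 2D box center to the projected 3D center
    D     : distance between the camera center and the 3D object center
    K     : 3x3 camera intrinsics; R : 3x3 camera rotation R(phi, psi)
    """
    p = np.array([c2d[0] + delta[0], c2d[1] + delta[1], 1.0])
    ray = np.linalg.inv(K) @ p      # back-projected ray K^{-1}[C^I + delta, 1]
    ray /= np.linalg.norm(ray)      # normalize, as in the equation above
    return D * (np.linalg.inv(R) @ ray)
```

By construction, the returned center always lies at distance $D$ from the camera, so an error in the predicted $D$ translates directly into depth error without corrupting the viewing direction.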
#### 3D Room Layout Similar to 3D objects, we parametrize the 3D room layout in the world coordinate as a 3D bounding box $X^L \in \mathbb{R}^{3 \times 8}$, which is represented by its 3D center $C^L \in \mathbb{R}^3$, size $S^L \in \mathbb{R}^3$, and orientation $R(\theta^L) \in \mathbb{R}^{3 \times 3}$, where $\theta^L$ is the rotation angle. In this paper, we estimate the room layout center by predicting the offset from the pre-computed average layout center. Direct Estimations ------------------ As shown in \[fig:architecture\](a), the *global geometry network* (GGN) takes a single RGB image as the input and predicts both 3D room layout and 3D camera pose. This design is driven by the fact that the estimations of both the 3D room layout and 3D camera pose rely on low-level global geometric features. Specifically, GGN estimates the center $C^L$, size $S^L$, and heading angle $\theta^L$ of the 3D room layout, as well as the two rotation angles $\phi$ and $\psi$ for predicting the camera pose. Meanwhile, the *local object network* (LON) takes 2D image patches as the input. For each object, LON estimates object attributes including the distance $D$, size $S^W$, heading angle $\theta^W$, and the 2D offset $\delta^I$ between the 2D box center and the projection of the 3D box center. Direct estimations are supervised by two losses $\mathcal{L}_\text{GGN}$ and $\mathcal{L}_\text{LON}$. Specifically, $\mathcal{L}_\text{GGN}$ is defined as $$\mathcal{L}_\text{GGN} = \mathcal{L}_{\phi} + \mathcal{L}_{\psi} + \mathcal{L}_{C^L} + \mathcal{L}_{S^L} + \mathcal{L}_{\theta^L},$$ and $\mathcal{L}_\text{LON}$ is defined as $$\mathcal{L}_\text{LON} = \frac{1}{N} \sum_{j=1}^{N} (\mathcal{L}_{D_j} + \mathcal{L}_{\delta^I_j} + \mathcal{L}_{S_j^W} + \mathcal{L}_{\theta_j^W}),$$ where $N$ is the number of objects in the scene. In practice, directly regressing objects’ attributes ([*e*.*g*.]{}, heading angle) may result in a large error.
Inspired by [@ren2015faster; @mousavian20173d; @qi2017frustum], we adopt a hybrid method of classification and regression to predict the sizes and heading angles. Specifically, we pre-define several size templates or equally split the space into a set of angle bins. Our model first classifies sizes and heading angles into those pre-defined categories, and then predicts residual errors within each category. For example, in the case of the rotation angle $\phi$, we define $\mathcal{L}_{\phi} = \mathcal{L}_{\phi-cls} + \mathcal{L}_{\phi-reg}$. Softmax is used for classification and the smooth-L1 (Huber) loss is used for regression. Cooperative Estimations ----------------------- Psychological experiments have shown that human perception of the scene often relies on global information instead of local details, known as the gist of the scene [@oliva2005gist; @oliva2006building]. Furthermore, prior studies have demonstrated that human perception on specific tasks involves the cooperation of multiple visual cues, [*e*.*g*.]{}, in depth perception [@landy1995measurement; @jacobs2002determines]. These crucial observations motivate the idea that the attributes and properties are naturally coupled and tightly bound, and thus should be estimated cooperatively, with the individual components boosting each other. Using the parametrization described in \[sec:param\], we hope to cooperatively optimize GGN and LON, simultaneously estimating 3D camera pose, 3D room layout, and 3D object bounding boxes, such that the two networks enhance each other and cooperate to make the definitive estimation during the learning process. Specifically, we propose three cooperative losses which jointly provide supervision and fuse 2D/3D information into a physically plausible estimation. Such cooperation improves the estimation accuracy of 3D bounding boxes, maintains the consistency between 2D and 3D, and generates a physically plausible scene. We further elaborate on these three aspects below.
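As a concrete illustration of the hybrid classification-plus-regression scheme above, the sketch below encodes a heading angle as a bin index (the classification target) plus a residual from the bin center (the regression target). The bin count is an assumption for illustration; it is not specified in the text.

```python
import math

NUM_BINS = 12                         # assumed number of angle bins
WIDTH = 2.0 * math.pi / NUM_BINS

def angle_targets(theta):
    """Encode an angle as (bin index, residual from the bin center).

    The classifier is trained on the bin index (softmax) and the
    regressor on the residual (smooth-L1), as described above.
    """
    theta = theta % (2.0 * math.pi)
    bin_id = min(int(theta // WIDTH), NUM_BINS - 1)
    residual = theta - (bin_id * WIDTH + WIDTH / 2.0)
    return bin_id, residual

def decode_angle(bin_id, residual):
    """Invert angle_targets: recover the angle from (bin, residual)."""
    return (bin_id * WIDTH + WIDTH / 2.0 + residual) % (2.0 * math.pi)
```

Because the regressor only has to predict a residual bounded by half a bin width, its target variance is much smaller than that of the raw angle.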
#### 3D Bounding Box Loss As neither GGN nor LON is directly optimized for the accuracy of the final 3D bounding box estimation, learning through GGN and LON alone is evidently not sufficient, thus requiring additional regularization. Ideally, the estimation of the object attributes and camera pose should be cooperatively optimized, as both contribute to the estimation of the 3D bounding box. To achieve this goal, we propose the 3D bounding box loss with respect to its 8 corners $$\mathcal{L}_{\text{3D}} = \frac{1}{N}\sum_{j = 1}^N \left\|h(C^W_j, R(\theta_j), S_j) - X_j^{W*}\right\|_2^2,$$ where $X^{W*}$ denotes the ground truth 3D bounding boxes in the world coordinate. @qi2017frustum proposes a similar regularization in which the parametrization of 3D bounding boxes is different. #### 2D Projection Loss In addition to the 3D parametrization of the 3D bounding boxes, we further impose an additional consistency constraint, the 2D projection loss, which maintains the coherence between the 2D bounding boxes in the image plane and the 3D bounding boxes in the world coordinate. Specifically, we formulate the learning objective of the projection from 3D to 2D as $$\mathcal{L}_{\text{PROJ}} = \frac{1}{N}\sum_{j=1}^N \left\|f(X_j^{W}, R, K) - X_j^{I*}\right\|_2^2,$$ where $f(\cdot)$ denotes a differentiable projection function which projects a 3D bounding box to a 2D bounding box, and $X_j^{I*} \in \mathbb{R}^{2 \times 4}$ is the 2D object bounding box (either detected or the ground truth). #### Physical Loss In the physical world, 3D objects and room layout should not intersect with each other.
To produce a physically plausible 3D estimation of a scene, we integrate the physical loss that penalizes the physical violations between 3D objects and 3D room layout $$\mathcal{L}_{\text{PHY}} = \frac{1}{N}\sum_{j=1}^N \left(\operatorname{ReLU}(\operatorname{Max}(X_{j}^W) - \operatorname{Max}(X^L)) + \operatorname{ReLU}(\operatorname{Min}(X^L) - \operatorname{Min}(X_j^W))\right),$$ where $\operatorname{ReLU}$ is the activation function, and $\operatorname{Max}(\cdot)$ / $\operatorname{Min}(\cdot)$ take a 3D bounding box as the input and output the max/min values along the three world axes. By adding the physical constraint loss, the proposed model connects the 3D environments and the 3D objects, resulting in a more natural estimation of both 3D objects and 3D room layout. To summarize, the total loss can be written as $$\mathcal{L}_{\text{Total}} = \mathcal{L}_{\text{GGN}} + \mathcal{L}_{\text{LON}} + \lambda_{\text{COOP}}\left(\mathcal{L}_{\text{3D}} + \mathcal{L}_{\text{PROJ}} + \mathcal{L}_{\text{PHY}}\right),$$ where $\lambda_{\text{COOP}}$ is the trade-off parameter that balances the cooperative losses and the direct losses. Implementation ============== Both GGN and LON adopt the ResNet-34 [@he2016deep] architecture as the encoder, which encodes a $256\times256$ RGB image into a 2048-D feature vector. As each of the networks consists of multiple output channels, for each channel with an L-dimensional output, we stack two fully connected layers (2048-1024, 1024-L) on top of the encoder to make the prediction. We adopt a two-step training procedure. First, we fine-tune the 2D detector [@dai2017deformable; @bodla2017softnms] with the 30 most common object categories to generate 2D bounding boxes. The 2D and 3D bounding boxes are matched to ensure that each 2D bounding box has a corresponding 3D bounding box. Second, we train the two 3D estimation networks.
To obtain good initial networks, both GGN and LON are first trained individually using synthetic data (the SUNCG dataset [@song2017semantic]) with photo-realistically rendered images [@zhang2017physically]. We then fix six blocks of the encoders of GGN and LON, respectively, and fine-tune the two networks jointly on the SUN RGB-D dataset [@song2015sun]. To avoid over-fitting, a data augmentation procedure is performed by randomly flipping the images or randomly shifting the 2D bounding boxes with corresponding labels during the cooperative training. We use Adam [@kingma2014adam] for optimization with a batch size of 1 and a learning rate of 0.0001. In practice, we train the two networks cooperatively for ten epochs, which takes about 10 minutes per epoch. We implement the proposed approach in PyTorch [@paszke2017automatic]. Evaluation ========== ![Qualitative results (top 50%). (Left) Original RGB images. (Middle) Results projected in 2D. (Right) Results in 3D. Note that the depth input is only used to visualize the 3D results.[]{data-label="fig:results"}](results){width="\linewidth"} We evaluate our model on the SUN RGB-D dataset [@song2015sun], which includes 10335 images in total, 5050 of which are test images. The SUN RGB-D dataset has 47 scene categories with high-quality 3D room layout, 3D camera pose, and 3D object bounding box annotations. It also provides benchmarks for various 3D scene understanding tasks. Here, we only use the RGB images as the input. Fig. \[fig:results\] shows some qualitative results. We discard the rooms with no detected 2D objects or invalid 3D room layout annotation, resulting in a total of 4783 training images and 4220 test images. More results can be found in the supplementary materials. We evaluate our model on five tasks: i) 3D layout estimation, ii) 3D object detection, iii) 3D box estimation, iv) 3D camera pose estimation, and v) holistic scene understanding, all with the test images across all scene categories.
For each task, we compare our cooperatively trained model with the settings in which we train GGN and LON individually, without the proposed parametrization of the 3D object bounding box or the cooperative losses. In the individual training setting, LON directly estimates the 3D object centers in the 3D world coordinate. #### 3D Layout Estimation Since the SUN RGB-D dataset provides ground truth 3D layouts with arbitrary numbers of polygon corners, we parametrize each 3D room layout as a 3D bounding box by taking the output of the Manhattan Box baseline from [@song2015sun] with eight layout corners, which serves as the ground truth. We compare the estimation of the proposed model with three previous methods: 3DGP [@choi2013understanding], IM2CAD [@izadinia2016im2cad], and HoPR [@huang2018holistic]. Following the evaluation protocol defined in [@song2015sun], we compute the average IoU between the free space of the ground truth and the free space estimated by the proposed method. As shown in \[tab:holistic\], our model outperforms HoPR by 2.0%. The results further show an additional 1.5% performance improvement compared with individual training, demonstrating the efficacy of our method. Note that IM2CAD [@izadinia2016im2cad] manually selected 484 images from 794 test images of living rooms and bedrooms. For fair comparisons, we evaluate our method on the entire set of living rooms and bedrooms, outperforming IM2CAD by 2.1%.
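For reference, the IoU metric underlying these comparisons can be sketched for axis-aligned 3D boxes. This simplified stand-in only illustrates the computation; the actual benchmark scores free space and oriented boxes:

```python
import numpy as np

def iou_3d_axis_aligned(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as (min_xyz, max_xyz)."""
    a_min, a_max = (np.asarray(v, dtype=float) for v in box_a)
    b_min, b_max = (np.asarray(v, dtype=float) for v in box_b)
    # overlap per axis, clipped at zero for disjoint boxes
    overlap = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0.0, None)
    inter = overlap.prod()
    union = (a_max - a_min).prod() + (b_max - b_min).prod() - inter
    return float(inter / union)
```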
| Method | Layout IoU | $P_g$ | $R_g$ | $R_r$ | Free space IoU |
|:---|:---:|:---:|:---:|:---:|:---:|
| 3DGP [@choi2013understanding] | 19.2 | 2.1 | 0.7 | 0.6 | 13.9 |
| HoPR [@huang2018holistic] | 54.9 | 37.7 | 23.0 | 18.3 | 40.7 |
| Ours (individual) | 55.4 | 36.8 | 22.4 | 20.1 | 39.6 |
| Ours (cooperative) | **56.9** | **49.3** | **29.7** | **28.5** | **42.9** |

\[tab:holistic\] \[tab:detection\]

| | bed | chair | sofa | table | desk | toilet | bin | sink | shelf | lamp | mIoU |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| IoU (3D) | 33.1 | 15.7 | 28.0 | 20.8 | 15.6 | 25.1 | 13.2 | 9.9 | 6.9 | 5.9 | 17.4 |
| IoU (2D) | 75.7 | 68.1 | 74.4 | 71.2 | 70.1 | 72.5 | 69.7 | 59.3 | 62.1 | 63.8 | 68.7 |

\[tab:box\_estimation\]

#### 3D Object Detection

We evaluate our 3D object detection results using the metrics defined in [@song2015sun]. Specifically, the mean average precision (mAP) is computed using the 3D IoU between the predicted and the ground-truth 3D bounding boxes. In the absence of depth, the IoU threshold is adjusted from 0.25 (the evaluation setting with depth input) to 0.15 when determining whether two bounding boxes overlap. The 3D object detection results are reported in Table \[tab:detection\]. We report 10 out of 30 object categories here; the rest are reported in the supplementary materials. The results indicate that our method outperforms HoPR by 9.64% in mAP and improves on the individual training result by 8.41%. Compared with the individually trained model, the proposed cooperative model makes a significant improvement, especially on small objects such as bins and lamps. The 3D detection of small objects is easily affected by estimation error; oftentimes, it is nearly impossible for prior approaches to detect them at all. In contrast, benefiting from the parametrization method and the 2D projection loss, the proposed cooperative model maintains the consistency between 3D and 2D, substantially reducing the estimation variance. Note that although IM2CAD also evaluates 3D detection, it uses a metric based on a specific distance threshold.
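The overlap criterion behind this metric can be illustrated with axis-aligned boxes. SUN RGB-D boxes are oriented in general, so the sketch below is a simplified assumption rather than the benchmark's exact computation; the function names are hypothetical.

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))  # 0 if disjoint
    vol = lambda box: np.prod(box[3:] - box[:3])
    return inter / (vol(a) + vol(b) - inter)

def is_match(pred, gt, threshold=0.15):
    """Depth-free setting: a detection counts as correct when the
    3D IoU exceeds 0.15 (vs. 0.25 when depth input is available)."""
    return iou_3d(pred, gt) >= threshold
```

Two unit cubes shifted by half a side overlap with IoU 1/3, which clears the relaxed 0.15 threshold but would also clear the stricter 0.25 one.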
For a fair comparison, we further conduct experiments on the subset of living rooms and bedrooms, using the same object categories with respect to this particular distance-based metric rather than an IoU threshold. We obtain an mAP of 78.8%, 4.2% higher than the result reported in IM2CAD.

\[tab:camera\]

#### 3D Box Estimation

The 3D object detection performance of our model is determined by both the 2D object detection and the 3D bounding box estimation. We first evaluate the accuracy of the 3D bounding box estimation, which reflects the ability to predict 3D boxes from 2D image patches. Instead of using mAP, the 3D IoU is directly computed between the ground-truth and the estimated 3D boxes for each object category. To evaluate the 2D-3D consistency, the estimated 3D boxes are projected back to 2D, and the 2D IoU is evaluated between the projected and detected 2D boxes. Results using the full model are reported in Table \[tab:box\_estimation\], which shows that the 3D estimation is still not satisfactory, despite the efforts to maintain a good 2D-3D consistency. The underlying reason for the gap between the 3D and 2D performance is the increased estimation dimension. Another possible reason is the lack of context relations among objects. Results for all object categories can be found in the supplementary materials.

#### Camera Pose Estimation

We evaluate the camera pose by computing the mean absolute error of yaw and roll between the model estimation and the ground truth. As shown in Table \[tab:camera\], compared with the traditional geometry-based method [@hedau2009recovering] and the previous learning-based method [@huang2018holistic], the proposed cooperative model achieves a significant improvement. It also improves on the individual training performance by 0.29 degrees on yaw and 1.28 degrees on roll.

#### Holistic Scene Understanding

Following the definition introduced in [@song2015sun], we further estimate the holistic 3D scene, including the 3D objects and the 3D room layout, on SUN RGB-D.
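The 2D-3D consistency check described above (projecting an estimated 3D box back into the image and comparing 2D IoU with the detected box) can be sketched with a pinhole camera model. The intrinsics matrix, the example cube, and the function names below are assumptions for illustration; the paper's actual projection uses the estimated camera pose, which is omitted here by working directly in the camera frame.

```python
import numpy as np
from itertools import product

def project_points(pts_3d, K):
    """Project Nx3 camera-frame points (z > 0) to pixels with intrinsics K."""
    uvw = (K @ pts_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def enclosing_box(pts_2d):
    """Tight 2D box (xmin, ymin, xmax, ymax) around projected corners."""
    return np.concatenate([pts_2d.min(axis=0), pts_2d.max(axis=0)])

def iou_2d(a, b):
    """IoU of two 2D boxes given as (xmin, ymin, xmax, ymax)."""
    lo = np.maximum(a[:2], b[:2])
    hi = np.minimum(a[2:], b[2:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    area = lambda r: np.prod(r[2:] - r[:2])
    return inter / (area(a) + area(b) - inter)

# Hypothetical example: a cube of half-size 1 centered 5 units in front of the camera.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
corners = np.array(list(product([-1.0, 1.0], [-1.0, 1.0], [4.0, 6.0])))
projected_box = enclosing_box(project_points(corners, K))
```

Comparing `projected_box` against a detected 2D box via `iou_2d` gives the consistency score; the near face of the cube dominates the projected extent, which is why the projection must use all eight corners rather than the box center alone.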
Note that the holistic scene understanding task defined in [@song2015sun] omits 3D camera pose estimation, unlike the definition in this paper, as the results are evaluated in the world coordinate frame. Using the metrics proposed in [@song2015sun], we evaluate the geometric precision $P_g$, the geometric recall $R_g$, and the semantic recall $R_r$ with the IoU threshold set to 0.15. We also evaluate the IoU between the free space (3D voxels inside the room polygon but outside any object bounding box) of the ground truth and that of the estimation. Table \[tab:holistic\] shows that we improve on previous approaches by a significant margin. Moreover, we further improve on the individually trained results by 8.8% in geometric precision, 5.6% in geometric recall, 6.6% in semantic recall, and 3.7% in free space estimation. The performance gain in total scene understanding directly demonstrates the effectiveness of the proposed parametrization method and cooperative learning process.

Discussion
==========

In the experiments, the proposed method outperforms the state-of-the-art methods on four tasks. Moreover, our model runs at 2.5 fps (0.4 s for 2D detection and 0.02 s for 3D estimation) on a single Titan Xp GPU, while other models take significantly more time; [*e*.*g*.]{}, [@izadinia2016im2cad] takes about 5 minutes to process one image. Here, we further analyze the effects of different components in the proposed cooperative model, hoping to shed some light on how the parametrization and cooperative training help the model, through a set of ablative analyses.

\[tab:analysis\]

Ablative Analysis
-----------------

We compare four variants of our model with the full model trained using $\mathcal{L}_{\text{SUM}}$: 1. The model trained without the supervision on 3D object bounding box corners (w/o $\mathcal{L}_{\text{3D}}$, $S_1$). 2. The model trained without the 2D supervision (w/o $\mathcal{L}_{\text{PROJ}}$, $S_2$). 3.
The model trained without the penalty of the physical constraint (w/o $\mathcal{L}_{\text{PHY}}$, $S_3$). 4. The model trained in an unsupervised fashion, where we only use 2D supervision to estimate the 3D bounding boxes (w/o $\mathcal{L}_{\text{3D}}+ \mathcal{L}_{\text{GGN}}+\mathcal{L}_{\text{LON}}$, $S_4$). Additionally, we compare two variants of the training settings: i) the model trained directly on SUN RGB-D without pre-training ($S_5$), and ii) the model trained with 2D bounding boxes projected from the ground-truth 3D bounding boxes ($S_6$). We conduct the ablative analysis over all the test images on the task of holistic scene understanding. We also compare the 3D mIoU and 2D mIoU of the 3D box estimation. Table \[tab:analysis\] summarizes the quantitative results.

![Comparison with two variants of our model.[]{data-label="fig:ablative"}](ablative){width="\linewidth"}

#### Experiments $\mathbf{S_1}$ and $\mathbf{S_3}$

Without the supervision on 3D object bounding box corners or the physical constraint, the performance on all the tasks decreases, since removing either term weakens the cooperation between the two networks.

#### Experiment $\mathbf{S_2}$

The performance on 3D detection improves without the projection loss, while the 2D mIoU decreases by 8.0%. As shown in Figure \[fig:ablative\](b), a possible reason is that enforcing the 2D-3D consistency $\mathcal{L}_{\text{PROJ}}$ may hurt the 3D accuracy compared with directly using 3D supervision, while the 2D performance is largely improved thanks to the consistency.

#### Experiment $\mathbf{S_4}$

Training entirely in an unsupervised fashion for 3D bounding box estimation would fail, since each 2D pixel could correspond to an infinite number of 3D points. Therefore, we integrate common sense into the unsupervised training by restricting the estimated size of each object to be close to the average size. As shown in Figure \[fig:ablative\](c), we can still estimate the 3D bounding boxes without 3D supervision reasonably well, although the orientations are often inaccurate.
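The ablations above toggle individual terms of the cooperative objective. A minimal sketch of how $\mathcal{L}_{\text{SUM}}$ decomposes is given below; the unit weighting of the terms is an assumption, as this section does not state the actual loss weights.

```python
def total_loss(l_ggn, l_lon, l_3d, l_proj, l_phy,
               use_3d=True, use_proj=True, use_phy=True):
    """Cooperative objective L_SUM as an (assumed) unit-weighted sum.
    The flags mirror the ablations: S1 drops the 3D corner loss,
    S2 the 2D projection loss, S3 the physical-constraint penalty."""
    loss = l_ggn + l_lon  # individual network losses
    if use_3d:
        loss += l_3d      # 3D bounding box corner supervision
    if use_proj:
        loss += l_proj    # 2D-3D projection consistency
    if use_phy:
        loss += l_phy     # physical (non-intersection) constraint
    return loss
```

For instance, the $S_2$ variant corresponds to calling `total_loss(..., use_proj=False)`.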
#### Experiments $\mathbf{S_5}$ and $\mathbf{S_6}$

$S_5$ demonstrates the benefit of pre-training on a large amount of synthetic data, and $S_6$ indicates that we can attain almost the same performance even when no 2D bounding box annotations are available.

Related Work
------------

#### Single Image Scene Reconstruction

Existing 3D scene reconstruction approaches fall into two streams. i) Generative approaches model the reconfigurable graph structures in generative probabilistic models [@zhao2011image; @zhao2013scene; @choi2013understanding; @lin2013holistic; @guo2013support; @zhang2014panocontext; @zou2017complete; @huang2018holistic]. ii) Discriminative approaches [@izadinia2016im2cad; @tulsiani2017factoring; @song2017semantic] reconstruct the 3D scene using representations of 3D bounding boxes or voxels through direct estimation. Generative approaches are better at modeling and inferring scenes with complex context, but they rely on sampling mechanisms and are often computationally inefficient. Compared with prior discriminative approaches, our model focuses on establishing cooperation among the scene modules.

#### Gap between 2D and 3D

It is intuitive to constrain the 3D estimation to be consistent with 2D images. Previous research on 3D shape completion and 3D object reconstruction explores this idea by imposing differentiable 2D-3D constraints between the shape and silhouettes [@wu2016single; @rezende2016unsupervised; @yan2016perspective; @tulsiani2015viewpoints; @wu2017marrnet]. @mousavian20173d infers the 3D bounding boxes by matching the projected 2D corners in autonomous driving. In the proposed cooperative model, we introduce the parametrization of the 3D bounding box, together with a differentiable loss function that imposes consistency between 2D and 3D bounding boxes for indoor scene understanding.
Conclusion
==========

Using a single RGB image as the input, we propose an end-to-end model that recovers a 3D indoor scene in real time, including the 3D room layout, camera pose, and object bounding boxes. A novel parametrization of 3D bounding boxes and a 2D projection loss are introduced to enforce the consistency between 2D and 3D. We also design differentiable cooperative losses that help to train the two major modules cooperatively and efficiently. Our method shows significant improvements on various benchmarks while achieving high accuracy and efficiency.

**Acknowledgement:** The work reported herein was supported by DARPA XAI grant N66001-17-2-4029, ONR MURI grant N00014-16-1-2007, ARO grant W911NF-18-1-0296, and an NVIDIA GPU donation grant. We thank Prof. Hongjing Lu from the UCLA Psychology Department for useful discussions on the motivation of this work, and three anonymous reviewers for their constructive comments.